---
abstract: 'Using bifurcation theory on a dynamical system simulating the interaction of a particle with an obliquely propagating wave in relativistic regimes, we demonstrate that uniform acceleration arises as a consequence of Hopf bifurcations of Landau resonant particles. The acceleration process takes the form of a surfatron, established through locking in pitch angle and gyrophase and through physical trapping along the wave-vector direction. Integrating the dynamical system for large-amplitude ($\delta B/B_0\sim0.1$) obliquely propagating waves, we find that electrons with initial energies in the keV range can be accelerated to MeV energies on timescales of the order of milliseconds. The Hopf condition of Landau resonant particles could underlie some of the most efficient energization of particles in space and astrophysical plasmas.'
author:
- 'A. Osmane and A. M. Hamza'
title: 'Relativistic acceleration of Landau resonant particles as a consequence of Hopf bifurcations.'
---

Introduction
============

The main purpose of this Letter is to study the wave-particle interaction in the general case of oblique propagation and in the relativistic limit. The recent inclusion of oblique waves in modeling efforts is not fortuitous [@Hamza06; @Araneda08; @Osmane10]. Obliquely propagating waves are not only theoretically predicted [@Hollweg02b] but are also observed in numerous and diverse space plasma regimes, e.g., in fast streams of the solar wind [@Bavassano82], upstream of the bow shock [@Meziane01], in the magnetosheath [@Perri09], in cometary environments [@Neubauer93], at the Earth’s plasma sheet boundary [@Broughton08] and in the Van Allen radiation belt [@Catell08]. The interest in obliquely propagating waves resides mainly in the presence of an electric field component along the background magnetic field. Unlike in the parallel propagation case, and due to constraints imposed by the Lorentz-invariant quantities of the electromagnetic field, it is impossible to find an inertial frame of reference in which the electric field vanishes [@Jackson]. The electric field of the wave can therefore provide acceleration mechanisms and/or physical trapping along the background magnetic field. Similarly, the inclusion of relativistic regimes in models of space and astrophysical plasmas now appears more necessary than ever, especially for the numerous problems (e.g., cosmic rays, radiation belt electrons) where the acceleration and/or injection of charged particles to relativistic energies remains poorly understood. In the following report, we ignore the more commonly studied case of cyclotron-resonant particles ($\omega \sim\Omega$) and concentrate instead on the parameter space where the Landau resonance ($\omega \ll \Omega$) is accessible. Even though cyclotron-resonant interactions are observed and understood to play an important role in the kinetic description of space and astrophysical plasmas, other means of energy exchange between waves and particles have been overshadowed by a predominant focus on cyclotron resonance. For instance, the perpendicular heating in fast streams of the solar wind could equally well result from trapped particles caught in a broadened Landau resonance instead of the commonly assumed cyclotron resonance [@Lehe09; @Osmane10].
This report also aims at providing further support for such views, but in the context of relativistic and weakly collisional plasmas.

Dynamical System
================

This problem is addressed using dynamical systems theory, which, although lacking the level of self-consistency that simulations provide, can facilitate the understanding of complex systems such as plasmas and provide an intuitive bridge between theoretical models and simulations. Hence, our study begins by writing the equation of motion for a particle in an electromagnetic field as follows: $$\frac{d\textbf{p}}{dt}= e\bigg{[}\textbf{E}(\textbf{x},t)+\frac{\textbf{p}}{m\gamma c} \times \textbf{B}(\textbf{x},t)\bigg{]}$$ for a particle of momentum $\textbf{p}=m\gamma\textbf{v}$, rest mass $m$, charge $e$ and Lorentz factor $\gamma=\sqrt{1+p^2/m^2c^2}$. The field topology consists of an obliquely propagating electromagnetic wave of amplitude $(\delta \mathbf{E}, \delta \mathbf{B})$ superposed on a background magnetic field $\mathbf{B}_0$. We choose the electromagnetic wave vector $\mathbf{k}$ to point in the $\hat{z}$ direction and the background magnetic field to lie in the $y-z$ plane. Hence, the propagation angle $\theta$, denoting the obliqueness of the wave, is defined by $\mathbf{k} \cdot \mathbf{B}_0=kB_0\cos(\theta)$. The magnetic field components of the wave are written as $$\left\{ \begin{array}{l l} \delta B_x= \delta B \sin(kz-\omega t)\\ \delta B_y=\delta B \cos(kz-\omega t),\\ \end{array} \right.$$ with the electric components provided by Faraday’s law, $c\mathbf{k} \times \delta \mathbf{E} (\mathbf{k},\omega) =\omega \delta \mathbf{B}(\mathbf{k},\omega)$. We can then write the dynamical system equations for the chosen electromagnetic field topology in terms of the following variables: $v_\Phi=\omega/k$, $p_\Phi=m\gamma v_\Phi$, $\Omega_1=e\delta B/mc\gamma$, $\Omega_0=e B_0/mc\gamma$. We hence obtain the following set of equations: $$\left\{ \begin{array}{l l} \dot{p}_x=p_y\Omega_0\cos(\theta)+(p_\Phi-p_z)\Omega_1\cos(kz-\omega t) +p_z\Omega_0\sin(\theta)\\ \dot{p}_y=-p_x\Omega_0\cos(\theta)+(p_z-p_\Phi)\Omega_1\sin(kz-\omega t)\\ \dot{p}_z=-p_x\Omega_0 \sin(\theta)+p_x\Omega_1\cos(kz-\omega t) -p_y\Omega_1\sin(kz-\omega t)\\ \dot{z}=p_z v_\Phi/p_\Phi \end{array} \right.$$ It follows that the dynamical gyrofrequencies $(\Omega_0, \Omega_1)$ can be tracked as a fifth variable of the dynamical system: $$\dot{\Omega}_0=\frac{d}{dt}\bigg{(}\frac{eB_0}{mc\gamma}\bigg{)}=-\Omega_0 \frac{pc^2}{m^2c^4+p^2c^2}\dot{p}.$$ We now proceed to eliminate the explicit time dependence by making the following change of variables: $$p_x'=p_x, \hspace{.5mm}p_y'=p_y, \hspace{.5mm} p_z'=(p_z-p_\Phi), \hspace{.5mm} z'=(z-v_\Phi t).$$ Hence, we write the dynamical system in terms of the primed variables as follows: $$\label{eq:ds_in_ps} \left\{ \begin{array}{l l} \dot{p}_x'=\Omega_0 p_y'\cos(\theta)-\Omega_1p_z' \cos(kz') +\Omega_0 (p_z'+p_\Phi) \sin(\theta)\\ \dot{p}_y'=-\Omega_0p_x'\cos(\theta)+\Omega_1p_z' \sin(kz')\\ \dot{p}_z'=-\Omega_0p_x'\sin(\theta)+\Omega_1(\frac{n^2-1}{n^2})(p_x'\cos(kz') -p_y'\sin(kz'))\\ \dot{z}'=p_z'v_\Phi/p_\Phi\\ \end{array} \right.$$ with the refractive index $n^2=c^2/v_\Phi^2$.
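For illustration, equation set (6) can be integrated directly with a standard ODE solver. The following is a minimal sketch (not the authors' code) in dimensionless variables: momenta are normalized to $mc$, time to the inverse non-relativistic gyrofrequency $mc/eB_0$, and $Z=kz'$, so that $\delta_1=\delta B/B_0$ multiplies the wave terms and $\delta_2$ appears in the $\dot{z}'$ equation. The Lorentz factor is recovered algebraically from the primed momenta (a quadratic, since $p_\Phi=m\gamma v_\Phi$ itself depends on $\gamma$) rather than by integrating equation (4). All numerical values below (parameters, integration time, initial momenta) are illustrative and are not taken from the paper's figures.

```python
# Minimal sketch (not the authors' code): integration of the primed dynamical
# system, equation set (6), in dimensionless variables. Momenta are in units of
# mc, time in units of mc/(eB0), and Z = k z'. With these choices delta1 = dB/B0
# and delta2 = omega/(Omega_0*gamma) is the wave frequency in units of the
# non-relativistic gyrofrequency. Parameter and initial values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n2 = 4.0                      # refractive index squared, n^2 = c^2 / v_phi^2
theta = np.deg2rad(59.0)      # propagation angle, slightly below theta_c = 60 deg
delta1 = 0.1                  # wave amplitude dB/B0 (illustrative)
delta2 = 0.0696               # wave frequency (illustrative)

def lorentz_factor(ux, uy, uz):
    """gamma from the primed momenta; since p_phi = m*gamma*v_phi depends on
    gamma, gamma solves a quadratic (positive root taken)."""
    a = 1.0 - 1.0 / n2
    usq = ux**2 + uy**2 + uz**2
    return (uz / np.sqrt(n2) + np.sqrt(uz**2 / n2 + a * (1.0 + usq))) / a

def rhs(tau, y):
    ux, uy, uz, Z = y
    gam = lorentz_factor(ux, uy, uz)
    u_phi = gam / np.sqrt(n2)                 # p_phi / (mc)
    st, ct = np.sin(theta), np.cos(theta)
    dux = (uy * ct - delta1 * uz * np.cos(Z) + (uz + u_phi) * st) / gam
    duy = (-ux * ct + delta1 * uz * np.sin(Z)) / gam
    duz = (-ux * st + delta1 * (n2 - 1.0) / n2 * (ux * np.cos(Z) - uy * np.sin(Z))) / gam
    dZ = delta2 * uz / u_phi
    return [dux, duy, duz, dZ]

y0 = [0.0, -1.2, -0.3, 0.0]                   # illustrative initial momenta (units of mc)
sol = solve_ivp(rhs, (0.0, 2000.0), y0, max_step=0.1, rtol=1e-8, atol=1e-10)
gam = lorentz_factor(*sol.y[:3])
print("final Lorentz factor:", gam[-1])
```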
Together with equation $(4)$, equation set $(6)$ allows one to compute the particle orbits for a class of parameters $\theta$, $n$, $\delta_1=\Omega_1/\Omega_0$ and $\delta_2=\omega/\Omega_0\gamma$.

Hopf bifurcations of Landau resonant trapped orbits
===================================================

Despite the apparent simplicity of the dynamical system, the inclusion of the relativistic terms results in a number of interesting properties that extend beyond the scope of this report. We hereafter focus on one of these properties, arising from the bifurcation in stability of fixed (stationary) points. Indeed, it can easily be shown that the set of equations $(6)$ possesses a class of fixed (stationary) points that can be represented as follows: $$\begin{aligned} \label{eq:FP5} p_{x0}'=p_{z0}'=0; \hspace{8mm} p_{y0}' = -p_\Phi\tan(\theta)\nonumber \\ \gamma_0=\frac{1}{\sqrt{1-\frac{v_\Phi^2}{c^2}(1+\tan^2(\theta))}}; \hspace{8mm} Z=kz'=0,\pi. \nonumber\end{aligned}$$

![Eigenvalues’ dependence on the propagation angle $\theta$ for fixed parameters $\delta_1=0.1$, $\delta_2=0.0696$, $n^2=2$ and the fixed point of component $Z_0=0$. The bifurcation through the positive real axis takes place for the propagation angle $\theta_c=60^o$. The fixed point is stable for $\theta < \theta_c$ and unstable for $\theta>\theta_c$.[]{data-label="fig:example"}](eig){width="45.00000%"}

These points indicate the values for which a particle is physically trapped by the electromagnetic field. Their values in velocity space correspond to the Landau resonance condition. This can be seen more clearly if one applies the inverse of the translation in equation (5), followed by a rotation into a coordinate system with the $z$ axis parallel to the background magnetic field [@Osmane10]. The next fundamental step in dynamical systems theory is to investigate the stability of the fixed points. In order to do so, we apply a basic Lyapunov linear analysis that can be found in any textbook on dynamical systems [@Regev]. Hence, solving the eigenvalue problem $\det(\mathbf{J}-\lambda\mathbf{I})=0$ for the Jacobian $\mathbf{J}$ and eigenvalue $\lambda$, we find a bi-quadratic polynomial in $\lambda$ that can be written as $\chi(\lambda)=\lambda^4+\eta_1\lambda^2+\eta_2=0$, with the constant coefficients $\eta_1$ and $\eta_2$ given by the following expressions: $$\begin{aligned} \eta_1&=&\frac{\delta_1}{\delta_2\gamma_0}\frac{n^2-1}{n^2}\tan(\theta)+\frac{\cos^2(\theta)}{\delta_2^2\gamma_0^2}\nonumber\\ &-&\frac{\delta_1}{\delta_2^2\gamma_0^2}\bigg{(}-\frac{n^2-1}{n^2}\pm2\sin(\theta)-\frac{\sin^2(\theta)}{\delta_1}\mp\frac{\sin(\theta)}{n^2}\bigg{)}\nonumber\\ \eta_2&=&\frac{\delta_1}{\delta_2^2\gamma_0^2}\frac{n^2-1}{n^2}\sin(\theta)\cos(\theta),\end{aligned}$$

![Particle orbits for parameters $\delta_1=0.1$, $\delta_2=0.0696$, $n^2=4$, $\theta=\theta_c-1^o$ and initial conditions $v_{x0}'=0$, $v_{y0}'=-v_\Phi\tan(\theta)-1.6v_\Phi$, $v_{z0}'=-v_\Phi$, $Z_0'=0$.[]{data-label="fig:example3"}](hopf1){width="45.00000%"}

with the $\pm$ symbol denoting the values for $Z=0$ and $Z=\pi$. A close look at the coefficients of equation set $(7)$ shows that all four eigenvalues cross the zero real axis when the condition $$n^2-1=\tan^2(\theta)$$ is satisfied, that is, for parameter values corresponding to $\gamma_0^{-1}=0$ and resulting in $\lambda^4=0$.
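For completeness, the critical angle implied by this condition follows in one line from the fixed-point Lorentz factor given above: setting $\gamma_0^{-1}=0$ with $v_\Phi^2/c^2=1/n^2$ yields $$\gamma_0^{-2}=1-\frac{1+\tan^2(\theta)}{n^2}=0 \quad\Longleftrightarrow\quad \tan^2(\theta_c)=n^2-1,$$ so that $\theta_c=\arctan\sqrt{n^2-1}$; for $n^2=4$ this gives $\theta_c=60^o$, the case used in Figures 2-4. Equivalently, $\cos(\theta_c)=1/n=v_\Phi/c$: the bifurcation occurs at the angle for which the phase velocity of the wave along $\mathbf{B}_0$, $v_\Phi/\cos(\theta)$, reaches the speed of light.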
This condition is illustrated in Figure 1, where we plot the dependence of the real and imaginary parts of all four eigenvalues on $\theta$, while keeping the remaining parameters ($\delta_1, \delta_2, n^2$) constant. We observe that when the condition in equation (8) is satisfied, the equilibrium changes from stable to unstable, since the real part of one of the eigenvalues becomes positive. This type of bifurcation, where pairs of complex conjugate eigenvalues cross the imaginary axis, is the well-known Hopf bifurcation [@Regev]. The fixed point for $Z=\pi$ is linearly unstable for all parameter values chosen in Figure 1, with the exception of the parameters for which equation (8) is satisfied. The eigenvalue profile and the following conclusions do not differ significantly for low frequencies ($\delta_2 < 1$) and large amplitudes ($\delta_1 \leq 1$), that is, the relevant range of parameters in space and astrophysical plasmas.

![Particle orbits for parameters $\delta_1=0.1$, $\delta_2=0.0696$, $n^2=4$, $\theta=\theta_c$. The orbit is locked in phase-space and trapped along $Z$.[]{data-label="fig:example1"}](Hopf_Z){width="45.00000%"}

![Particle orbits for parameters $\delta_1=0.1$, $\delta_2=0.0696$, $n^2=4$, $\theta=\theta_c$. The orbit is uniformly accelerated once locked in phase-space.[]{data-label="fig:example2"}](Hopf_P){width="45.00000%"}

![Particle orbits for $\theta>\theta_c$. A torus emerges from the two Hopf bifurcations.[]{data-label="fig:example4"}](twoDDtorus){width="45.00000%"}

Numerical integration
=====================

We now investigate the effects of the bifurcation on the particles belonging to the basin of attraction of the fixed point with component $Z=0$. For the sake of clarity, and in order to highlight the physical processes at play, we choose to represent the particle momentum in terms of the spherical coordinates $(p',\alpha, \Phi)$ instead of the Cartesian coordinates $(p_x', p_y', p_z')$ of equation set $(6)$. The transformation from one representation of the momentum to the other can be made using the following definitions for the magnitude $p'=\sqrt{p^{'2}_x+p^{'2}_y+p^{'2}_z}$, the pitch angle $\tan (\alpha) = \frac{p'_\perp}{p'_{\parallel}}$, and the dynamical gyrophase $\tan (\Phi) = \frac{p'_{\perp 1}}{p'_{\perp 2}}=\frac{p_x'}{p_y'\cos(\theta)+p_z'\sin(\theta)}$, where the parallel and perpendicular directions are defined with respect to $\mathbf{B_0}$. In Figure 2, the orbit of a particle interacting with a large-amplitude, low-frequency wave is shown in terms of $\alpha$, $\Phi$ and $Z$ for a propagation angle nearly satisfying the condition of equation $(8)$. For such parameter values, the fixed point is linearly stable and a particle belonging to the basin of attraction will remain physically trapped along the wave-vector. The orbit eventually closes onto itself as the particle bounces back and forth in the wave potential, alternately losing and gaining energy with no net gain over one period. If we modify the propagation angle such that the condition in equation $(8)$ is satisfied, we can see from Figures 3 and 4 that a particle will asymptotically converge to a point in the $(\alpha, \Phi)$ phase-space while it gets trapped along $Z$ and subsequently diverges to infinity in momentum.
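As a complement to the definitions above, the diagnostics plotted in Figures 2-4 can be obtained from the primed Cartesian momenta with a short helper of the kind sketched below (hypothetical code, not from the paper). It assumes, consistently with equation set (3), that the background field is directed along $(0,-\sin\theta,\cos\theta)$, so that $p'_\parallel=p_z'\cos(\theta)-p_y'\sin(\theta)$ and $p'_{\perp 2}=p_y'\cos(\theta)+p_z'\sin(\theta)$.

```python
# Hypothetical diagnostic helper (not from the paper): converts the primed
# Cartesian momenta of equation set (6) into the magnitude, pitch angle and
# dynamical gyrophase used in Figures 2-4. Assumes, consistent with equation
# set (3), a background field B0 along (0, -sin(theta), cos(theta)).
import numpy as np

def spherical_diagnostics(px, py, pz, theta):
    """Return (p', alpha, Phi) for primed momenta px, py, pz (scalars or arrays)."""
    p_mag = np.sqrt(px**2 + py**2 + pz**2)
    p_par = pz * np.cos(theta) - py * np.sin(theta)        # component along B0
    p_perp1 = px                                           # first perpendicular component
    p_perp2 = py * np.cos(theta) + pz * np.sin(theta)      # completes the right-handed triad
    alpha = np.arctan2(np.hypot(p_perp1, p_perp2), p_par)  # pitch angle
    phi = np.arctan2(p_perp1, p_perp2)                     # dynamical gyrophase
    return p_mag, alpha, phi
```

Applied to the output of the integration sketch above, e.g. `spherical_diagnostics(sol.y[0], sol.y[1], sol.y[2], theta)`, it converts the integrated momenta into the $(p',\alpha,\Phi)$ variables used in Figures 2-4.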
That is, the fixed point becomes an attractor along $(\alpha, \Phi)$, and the particles initially belonging to the basin of attraction will be locked forever in phase-space and experience uniform acceleration through the constant electric field observed by the particle. If we increase the propagation angle further, such that $\theta>\theta_c$, we find that two-dimensional tori are created (see Figure 5) and that particles can be neither physically trapped nor uniformly accelerated. However, even though the dynamics for $\theta>\theta_c$ deserves a study of its own, we now focus on the case $\theta\sim\theta_c$, for which particles can be energized irreversibly.

![Lorentz factor $\gamma$ as a function of time for $\delta_1=(0.04, 0.05, 0.06)$, $\delta_2=0.1$, $n^2=9$, $\theta=\theta_c$ and initial condition $v_x=v_z\sim 0, v_y=-v_\Phi\tan(\theta_c)-v_\Phi/3, Z=kz'=0$. The particle can be accelerated to MeV energies on timescales of less than a millisecond.[]{data-label="fig:example4"}](micro){width="40.00000%"}

Discussion
==========

Phase-locking and trapping of a particle subjected to a constant electric field have been predicted and observed before in models using Hamiltonian and/or asymptotic approaches. Indeed, similar processes for a wide range of electrostatic and/or electromagnetic topologies have been referred to as [*[surfatron]{}*]{} acceleration. The novel result reported here is that the mechanism arises as a result of Hopf bifurcations at the Landau resonance. Qualitatively, one could describe this result by the statement that there are [*[resonances of Landau resonances]{}*]{} resulting in efficient energization.

In order for this mechanism to be reproducible in space and astrophysical plasmas, the acceleration should take place for realistic wave amplitudes and should involve a sufficiently wide volume of velocity space to affect a portion of a distribution function. In other words, the basin of attraction should be wide enough, and particles of moderate energies should be able to be attracted into it. Choosing a highly oblique wave, $\theta\sim 71^o$, with a wave frequency given by $\delta_2=0.1$ and amplitudes of the order of a few percent of the background magnetic field, we integrate the dynamical system for a few wave periods. The results are shown in Figure 6 for different wave amplitudes. Whereas the irreversible acceleration, coinciding with the Hopf bifurcation in phase space, requires a wave amplitude $\delta_1\geq 0.06$ capable of physical trapping along $Z$, particles can still be brought to relativistic energies if they are caught sufficiently close to the fixed point. The size of the basin of attraction varies significantly as a function of the parameter $\delta_2$, but we nonetheless find that particles with moderate energies, ranging from a few keV to hundreds of keV, can be accelerated to relativistic levels on timescales comparable to the gyroperiod. The basin of attraction is therefore wide enough to affect a significant portion of a distribution function. In the case shown in Figure 6 for $\delta_2=0.1$, electrons with a few hundred keV are accelerated to MeV energies on timescales of less than a millisecond. Such energization timescales, for large amplitudes $\delta_1\sim 0.06$ and low frequencies $\delta_2\sim 0.1$, suggest that this mechanism could be of interest for studies of space plasmas (e.g., radiation belts) and cosmic plasmas (e.g., galactic plasmas) permeated by large-amplitude oblique waves.
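To connect these timescales to physical units, note that for electrons the quoted millisecond figure corresponds to gyroperiods in a background field of the order of a hundred nanotesla. The snippet below is only a back-of-the-envelope check; the value of $B_0$ is an assumed, illustrative number and does not appear in the paper.

```python
# Back-of-the-envelope check of the quoted timescale (a sketch, not from the
# paper): for an assumed, illustrative background field B0 ~ 100 nT the electron
# gyroperiod is a fraction of a millisecond, so energization over a few
# gyroperiods is a sub-millisecond to millisecond process.
import numpy as np

e, m_e = 1.602e-19, 9.109e-31     # SI elementary charge and electron rest mass
B0 = 100e-9                       # assumed background field [T] (illustrative)
delta2 = 0.1                      # omega / (non-relativistic gyrofrequency)

omega_ce = e * B0 / m_e           # non-relativistic gyrofrequency [rad/s]
T_gyro = 2 * np.pi / omega_ce     # gyroperiod [s]
T_wave = T_gyro / delta2          # wave period [s]
print(f"gyroperiod  = {T_gyro*1e3:.2f} ms")   # ~0.36 ms
print(f"wave period = {T_wave*1e3:.2f} ms")   # ~3.6 ms
```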
Further work is currently underway to apply the methods presented in this Letter to the specific problem of electron acceleration in planetary radiation belts.

We thank Dr. K. Meziane for helpful discussions. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
--- author: - Alain Girault - Gregor Gössler - Rachid Guerraoui - | \ Jad Hamza - 'Dragos-Adrian Seredinschi' bibliography: - 'references.bib' title: Monotonic Prefix Consistency in Distributed Systems ---
--- abstract: | We study the dynamics of a tagged particle in an infinite particle environment. Such processes have been studied in e.g. [@GP85], [@DeM89] and [@Os98]. I.e., we consider the heuristic system of stochastic differential equations $$\begin{aligned} &d\xi(t)=\sum_{i=1}^\infty\nabla\phi(y_i(t))\,dt+\sqrt{2}\,dB_1(t),\quad t\ge 0,\tag{TP}\\ &\left. \begin{matrix} dy_i(t)=\big(-\sum_{\stackunder{j\not=i}{j=1}}^\infty\nabla\phi(y_i(t)-y_j(t))-\nabla\phi(y_i(t))-\sum_{j=1}^\infty\nabla\phi(y_j(t))\big)\,dt\\ +\sqrt{2}\,d(B_{i+1}(t)-B_1(t)),\quad t\ge 0,\quad i\in\mathbb{N} \end{matrix}\right\}.\tag{ENV}\end{aligned}$$ This system realizes the coupling of the motion of the tagged particle, described by (TP), and the motion of the environment seen from the tagged particle, described by (ENV). As we can observe in (TP) the solution to (ENV), the so-called environment process, is driving the tagged particle. Thus our strategy is to study (ENV) at first and afterwards the coupled process, i.e., (TP) and (ENV) simultaneously. Here the analysis and geometry on configuration spaces developed in [@AKR98a] and [@AKR98b] plays an important role. Furthermore, the harmonic analysis on configuration spaces derived in [@KK99a] is very useful for our considerations. First we derive an integration by parts formula with respect to the standard gradient $\nabla^\Gamma$ on configuration spaces $\Gamma$ for a general class of grand canonical Gibbs measures $\mu$, corresponding to pair potentials $\phi$ and intensity measures $\sigma=z\,\exp(-\phi)\,dx,~0<z<\infty$, having correlation functions fulfilling a Ruelle bound. Furthermore, we use a second integration by parts formula with respect to the gradient ${\nabla^{\Gamma}_{\gamma}}$, generating the uniform translations on $\Gamma$, for a (non-empty) subclass of the Gibbs measures $\mu$ as above which is provided in [@CoKu09]. Combining these two gradients by Dirichlet form techniques we can construct the environment process and the coupled process, respectively. Scaling limits of such dynamics have been studied e.g. in [@DeM89], [@GP85] and [@Os98]. Our results give the first mathematically rigorous and complete construction of the tagged particle process in continuum with interaction potential. In particular, we can treat interaction potentials which might have a singularity at the origin, non-trivial negative part and infinite range as e.g. the Lennard–Jones potential. address: ' Torben Fattler, Mathematics Department, University of Kaiserslautern, P.O.Box 3049, 67653 Kaiserslautern, Germany. [`Email: [email protected]`, `URL: http://www.mathematik.uni-kl.de/ wwwfktn/homepage/fattler.html`]{} Martin Grothaus, Mathematics Department, University of Kaiserslautern, P.O.Box 3049, 67653 Kaiserslautern, Germany. ' author: - 'Torben Fattler, Martin Grothaus' title: Tagged Particle Process in Continuum with Singular Interactions --- \[section\] \[theorem\][Proposition]{} \[theorem\][Corollary]{} \[theorem\][Condition]{} \[theorem\][Example]{} \[theorem\][Definition]{} \[theorem\][Lemma]{} \[theorem\][Remark]{} \[theorem\][Notation]{} [^1] Introduction ============ We consider a system of infinitely many Brownian particles in $\mathbb{R}^d,~d\in\mathbb{N}$, interacting via the gradient of a symmetric pair potential $\phi$. Since each particle can move through each position in space, the system is called continuous and is used for modeling suspensions, gases or fluids. 
The infinite volume, infinite particle, stochastic dynamics $(x(t))_{t\ge 0}$ heuristically solves the following infinite system of stochastic differential equations: $$\begin{aligned} \label{sde} dx_i(t)&=-\sum_{\stackunder{j\not=i}{j=1}}^\infty\nabla\phi(x_i(t)-x_j(t))dt+\sqrt{2}dB_i(t),\quad t\ge 0,\quad i\in\mathbb{N},\end{aligned}$$ where $x(t)=\{x_1(t),x_2(t),\ldots\}$, $t\ge 0$, and $(B_i)_{i\in\mathbb{N}}$ is a *sequence of independent Brownian motions*. Its informal generator is given by $$\begin{aligned} \label{genivipdif} L_{\scriptscriptstyle{gsd}}=\sum_{i=1}^\infty\partial_{x_i}^2-\sum_{i=1}^\infty\Big(\sum_{\stackunder{j\not=i}{j=1}}^\infty\nabla\phi(x_i-x_j)\Big)\partial_{x_i}.\end{aligned}$$ Using $$\begin{aligned} \label{densityivipdif} \varrho_{\scriptscriptstyle{\infty}}(x_1,x_2,\ldots)=\exp\Big(-\frac{1}{2}\sum_{i\not=j}\phi(x_i-x_j)\Big)\end{aligned}$$ we have $$\begin{aligned} L_{\scriptscriptstyle{gsd}}=\sum_{i=1}^\infty\partial_{x_i}\ln(\varrho_{\scriptscriptstyle{\infty}})\partial_{x_i}+\sum_{i=1}^\infty\partial^2_{x_i}.\end{aligned}$$ Note that $L_{\scriptscriptstyle{gsd}}$ in this form is not well-defined. The construction of such diffusions has been initiated by R. Lang [@La77], who considered the case $\phi\in C^3_0(\mathbb{R}^d)$ using finite dimensional approximations and stochastic differential equations. More singular $\phi$, which are of particular interest in physics, as e.g. the Lennard–Jones potential, have been treated by H. Osada, [@Os96], and M. Yoshida, [@Y96]. Osada and Yoshida were the first to use Dirichlet forms for the construction of such processes. However, they could not write down the corresponding generators or martingale problems explicitly, hence could not prove that their processes actually solve (\[sde\]) weakly. This, however, was proved in [@AKR98b] by showing an integration by parts formula for the respective grand canonical Gibbs measures. Another approach not using an integration by parts can be found in [@MaRo00]. In [@GKR04] the authors provide an $N/V$-limit for the infinite volume, infinite particle stochastic dynamics with singular interactions in continuous particle systems on $\mathbb{R}^d,~d\ge 1$. Their construction is the first covering the case $d=1$ in the space of single configurations (only one particle at one site for all times $t\ge 0$). In this paper we study the tagged particle process in continuum with singular interactions. The underlying model can be described as follows. Consider the infinite system of Brownian particles described by (\[sde\]). Coloring any one particle from the system blue and all the rest of the particles yellow, we investigate the motion of this *tagged particle* in the random sea of all the yellow ones. In [@GP85] this model and a scaling limit of it is studied for Brownian particles in $\mathbb{R}^d$ interacting via the gradient of a smooth, finite range, symmetric, positive pair potential. In [@Os98] the author considers the tagged particle process for more singular potentials, including the Lennard–Jones potential, using Dirichlet form techniques. However, there the author is also mainly interested in obtaining a scaling limit for the tagged particle process. Showing existence of the stochastic dynamics in the above cited articles has been left open. Osada gives reference to a forthcoming paper on his own, but as far as we know, it has never been published. Thus in our opinion there is a need to construct the tagged particle process with interaction potential rigorously. 
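As a quick consistency check (a one-line computation, stated here for the reader's convenience), the drift in (\[genivipdif\]) is indeed the logarithmic derivative of the informal density (\[densityivipdif\]): by the symmetry of $\phi$, each unordered pair $\{i,j\}$ contributes twice to the double sum, so that $$\begin{aligned} \partial_{x_i}\ln\varrho_{\scriptscriptstyle{\infty}}(x_1,x_2,\ldots)=-\frac{1}{2}\,\partial_{x_i}\sum_{k\not=j}\phi(x_k-x_j)=-\sum_{\stackunder{j\not=i}{j=1}}^\infty\nabla\phi(x_i-x_j),\end{aligned}$$ which is precisely the coefficient of $\partial_{x_i}$ in (\[genivipdif\]).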
For other strategies to obtain the tagged particle process see e.g. [@DeM89 Sect. 6] and the references therein. But note that these are not worked out in detail. We start with an heuristic approach just to clarify the way of posing the problem. After doing so the whole analysis will be done on a strictly rigorous level. Assume we are given a solution $x(t),~t\ge 0$, of (\[sde\]). Using the coordinate transformation $$\begin{aligned} \label{ctrans} &\xi(t):=x_1(t)\quad\mbox{and}\nonumber\\ &y_i(t):=x_{i+1}(t)-x_1(t),~i\in\mathbb{N},\end{aligned}$$ we can rewrite (\[sde\]) and obtain $$\begin{aligned} &d\xi(t)=\sum_{i=1}^\infty\nabla\phi(y_i(t))\,dt+\sqrt{2}\,dB_1(t)\label{tppro}\\ &\left. \begin{matrix} dy_i(t)=\big(-\sum_{\stackunder{j\not=i}{j=1}}^\infty\nabla\phi(y_i(t)-y_j(t))-\nabla\phi(y_i(t))-\sum_{j=1}^\infty\nabla\phi(y_j(t))\big)\,dt\\ +\sqrt{2}\,d(B_{i+1}(t)-B_1(t)),\quad t\ge 0,\quad i\in\mathbb{N} \end{matrix}\right\}.\label{envpro}\end{aligned}$$ To derive the informal generator of the process $\{\xi(t),y_1(t),y_2(t),\ldots\}$ corresponding to (\[tppro\]) and (\[envpro\]), we use again the coordinate transformation (\[ctrans\]) and obtain $$\begin{aligned} &\partial_{x_1}=\partial_{\xi}-\sum_{i=1}^\infty\partial_{y_i},\\ &\partial_{x_{i+1}}=\partial_{y_i},\quad i\ge 1.\end{aligned}$$ Plugging this into the representation of $L_{\scriptscriptstyle{gsd}}$ in (\[genivipdif\]) yields $$\begin{aligned} L_{\scriptscriptstyle{\text{coup}}}=\sum_{i=1}^\infty\partial_{y_i}^2+\left(\partial_\xi-\sum_{i=1}^\infty\partial_{y_i}\right)^2+\sum_{i=1}^\infty\nabla\phi(y_i)\left(\partial_\xi-\sum_{i=1}^\infty\partial_{y_i}\right)\nonumber\\ -\sum_{i=1}^\infty\Big(\nabla\phi(y_i)+\sum_{\stackunder{j\not=i}{j=1}}^\infty\nabla\phi(y_i-y_j)\Big)\partial_{y_i}.\end{aligned}$$ By setting $$\begin{aligned} \label{denscoup} \hat{\varrho}_{\scriptscriptstyle{\infty}}(y_1,y_2,\ldots):=\exp\Big(-\frac{1}{2}\sum_{i\not=j}\phi(y_i-y_j)\underbrace{-\sum_{i=1}^\infty\phi(y_i)}_{\text{\tiny additional term}}\Big)\end{aligned}$$ we obtain $$\begin{gathered} L_{\scriptscriptstyle{\text{coup}}}=\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\partial_{y_i}+\sum_{i=1}^\infty\partial^2_{y_i} +\partial_\xi^2-\partial_\xi\sum_{i=1}^\infty\partial_{y_i}-\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\partial_{\xi}-\sum_{i=1}^\infty\partial_{y_i}\partial_\xi\\ +\sum_{i=1}^\infty\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\sum_{i=1}^\infty\partial_{y_i}+\left(\sum_{i=1}^\infty\partial_{y_i}\right)^2.\end{gathered}$$ Hence $L_{\scriptscriptstyle{\text{coup}}}$ splits into $$\begin{aligned} \label{igencoup} L_{\scriptscriptstyle{\text{coup}}}=L_{\scriptscriptstyle{\text{env}}}+\partial_\xi^2-\partial_\xi\sum_{i=1}^\infty\partial_{y_i}-\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\partial_{\xi}-\sum_{i=1}^\infty\partial_{y_i}\partial_\xi,\end{aligned}$$ where $$\begin{aligned} L_{\scriptscriptstyle{\text{env}}}=\sum_{i=1}^\infty\partial_{y_i}^2+\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\partial_{y_i}+\left(\sum_{i=1}^\infty\partial_{y_i}\right)^2+\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\sum_{i=1}^\infty\partial_{y_i}.\end{aligned}$$ For $y(t):=\{y_1(t),y_2(t),\ldots\}$, $(y(t))_{t\ge 0}$ is called *environment process*. 
It is the marginal of the $(\xi,y)$-process describing the environment seen from the tagged particle and having $L_{\scriptscriptstyle{\text{env}}}$ as informal generator. $L_{\scriptscriptstyle{\text{env}}}$ can be written as $$\begin{aligned} \label{igenenv} L_{\scriptscriptstyle{\text{env}}}=L_{\scriptscriptstyle{\text{gsdad}}}+\left(\sum_{i=1}^\infty\partial_{y_i}\right)^2+\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\sum_{i=1}^\infty\partial_{y_i},\end{aligned}$$ where $$\begin{aligned} \label{igenmgsd} L_{\scriptscriptstyle{\text{gsdad}}}:=\sum_{i=1}^\infty\partial_{y_i}^2+\sum_{i=1}^\infty\partial_{y_i}\ln(\hat{\varrho}_{\scriptscriptstyle{\infty}})\partial_{y_i}\end{aligned}$$ is the informal generator of a gradient stochastic dynamics with additional drift term (compare (\[densityivipdif\]) and (\[denscoup\])). In the sequel we call the dynamics corresponding to $L_{\scriptscriptstyle{\text{gsdad}}}$ a *gradient stochastic dynamics with additional drift*. Now (\[tppro\]) with $\xi(0)=0$ describes the motion $\xi(t)$ of the tagged particle which is determined by the environment process $(y(t))_{t\ge 0}$. Thus $L_{\scriptscriptstyle{\text{coup}}}$ is the informal generator of the diffusion process coupling the motion of the tagged particle in $\mathbb{R}^d$ and the motion of the environment seen from this particle. The tagged particle process is then obtained by a projection of the coupled process generated informally by $L_{\scriptscriptstyle{\text{coup}}}$. On a rigorous level the infinite volume, infinite particle, stochastic dynamics in continuous particle systems can be realized as an infinite dimensional diffusion process taking values in the configuration space $$\begin{aligned} \Gamma := \left\{ \gamma \subset {\mathbb R}^{d} \big| \, |\gamma \cap K| < \infty \, \,\, \mbox{for any compact} \, K \subset {\mathbb R}^{d} \right\}\end{aligned}$$ and having a grand canonical Gibbs measure as an invariant measure. In [@AKR98b] the generator realizing $L_{\scriptscriptstyle{gsd}}$ (see (\[genivipdif\])) is given by $$\begin{gathered} L^{\scriptscriptstyle{\Gamma,\mu_{\scriptscriptstyle{0}}}}_{\scriptscriptstyle{\text{gsd}}} F(\gamma)=\sum_{i,j=1}^N\partial_i\partial_jg_{\scriptscriptstyle{F}}(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle)\left\langle(\nabla f_i,\nabla f_j)_{\mathbb{R}^d},\gamma\right\rangle\\ +\sum_{j=1}^N\partial_j g_{\scriptscriptstyle{F}}(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle)\Big(\langle\Delta f_j,\gamma\rangle -\sum_{\{x,y\}\subset\gamma}(\nabla\phi(x-y),\nabla f_j(x)-\nabla f_j(y))_{\mathbb{R}^d}\Big)\\ \mbox{for }\mu_{\scriptscriptstyle{0}}\mbox{-a.e.~}\gamma\in\Gamma\mbox{ and }F=g_{\scriptscriptstyle{F}}(\langle f_1,\cdot\rangle,\ldots,\langle f_N,\cdot\rangle)\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma).\end{gathered}$$ It is obtained by carrying out an integration by parts of $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}_{\scriptscriptstyle{0}}}_{\scriptscriptstyle{\text{gsd}}}(F,G)=\int_\Gamma\left(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\right)_{T_\gamma\Gamma}\,d\mu_{\scriptscriptstyle{0}}(\gamma),\quad F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma),\end{aligned}$$ with respect to a grand canonical Gibbs measure $\mu_{\scriptscriptstyle{0}}$ corresponding to an intensity measure $\sigma=z\,dx,~0<z<\infty$. We start our analysis by considering the operator realizing $L_{\scriptscriptstyle{\text{gsdad}}}$ (see (\[igenmgsd\])). 
It is given by $$\begin{gathered} L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}} F(\gamma)=\sum_{i,j=1}^N\partial_i\partial_jg_{\scriptscriptstyle{F}}(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle)\left\langle(\nabla f_i,\nabla f_j)_{\mathbb{R}^d},\gamma\right\rangle\\ +\sum_{j=1}^N\partial_j g_{\scriptscriptstyle{F}}(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle)\Big(\langle\Delta f_j,\gamma\rangle+\sum_{x\in\gamma}(\nabla\phi(x),\nabla f_j(x))_{\mathbb{R}^d} \\-\sum_{\{x,y\}\subset\gamma}(\nabla\phi(x-y),\nabla f_j(x)-\nabla f_j(y))_{\mathbb{R}^d}\Big)\\ \mbox{for }\mu\mbox{-a.e.~}\gamma\in\Gamma\mbox{ and }F=g_{\scriptscriptstyle{F}}(\langle f_1,\cdot\rangle,\ldots,\langle f_N,\cdot\rangle)\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma), \end{gathered}$$ where $\mu\in\mathcal{G}_{\scriptscriptstyle{Rb}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$, i.e. a grand canonical Gibbs measure corresponding to a pair potential $\phi$ and an intensity measure $\sigma=z\,\exp(-\phi) dx,~0<z<\infty$, with corresponding correlation measures fulfilling a Ruelle bound. The associated symmetric bilinear form is given by $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}(F,G)=\int_\Gamma\left(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\right)_{T_\gamma\Gamma}\,d\mu(\gamma),\quad F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma).\end{aligned}$$ Having a Ruelle bound enables us to prove an integration by parts formula for cylinder functions on the configuration space $\Gamma$ with respect to the underlying grand canonical Gibbs measure $\mu$ for a general class of pair potentials $\phi$. This is done in Section \[secintbp1\], see Theorem \[thmintbyparts\]. Using this result we can identify $L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ as generator of $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ on $\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$. Moreover, showing that $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}))$ is a conservative, local, quasi-regular, symmetric Dirichlet form we have the existence of a conservative diffusion process $\mathbf{M}_{\scriptscriptstyle{\text{gsdad}}}^{\scriptscriptstyle{\Gamma,\mu}}$ solving the associated martingale problem. To tackle $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ we got many ideas from [@AKR98b], but due to the more general intensity measure $\sigma$ according to $\mu$, we have to deal with additional technical problems. Next step is to investigate the operator realizing $L_{\scriptscriptstyle{\text{env}}}$ (see (\[igenenv\])). 
Hence we consider $$\begin{gathered} L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F(\gamma)=L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}F(\gamma) +\sum_{i,j=1}^N\partial_i\partial_j g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\big(\langle\nabla f_i,\gamma\rangle,\langle\nabla f_j,\gamma\rangle\big)_{\mathbb{R}^d}\\ +\sum_{j=1}^N \partial_j g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\Big(\langle\Delta f_j,\gamma\rangle-\big(\langle\nabla\phi,\gamma\rangle,\langle\nabla f_j,\gamma\rangle\big)_{\mathbb{R}^d}\Big)\quad\mbox{ for }\mu\mbox{-a.e.~}\gamma\in\Gamma,\end{gathered}$$ $F=g_{\scriptscriptstyle{F}}(\langle f_1,\cdot\rangle,\ldots,\langle f_N,\cdot\rangle)\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ and the associated symmetric bilinear form $$\begin{gathered} \mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}}(F,G)=\int_\Gamma\Big(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\Big)_{T_\gamma\Gamma}+\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),{\nabla^{\Gamma}_{\gamma}}G(\gamma)\Big)_{\scriptscriptstyle{\mathbb{R}^d}}\,d{\mu}(\gamma),\\ F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma).\end{gathered}$$ Here $\mu$ is again as above. For an activity $0<z<\infty$ and a general class of pair potentials $\phi$ in [@CoKu09] for a non-empty subset of $\mathcal{G}_{\scriptscriptstyle{Rb}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$ an integration by parts formula with respect to ${\nabla^{\Gamma}_{\gamma}}$ is shown. In the sequel we denote this subset of $\mathcal{G}_{\scriptscriptstyle{Rb}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$ by $\mathcal{G}_{\scriptscriptstyle{ibp}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$. Hence for $\mu\in \mathcal{G}_{\scriptscriptstyle{ibp}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$ the bilinear form $(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}}))$ is closable. Furthermore, together with the results we obtained for $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}))$ we prove that $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}))$ is a conservative, local, quasi-regular, symmetric Dirichlet form. Thus we obtain a conservative diffusion process $\mathbf{M}_{\scriptscriptstyle{\text{env}}}^{\scriptscriptstyle{\Gamma,\mu}}$ solving the associated martingale problem. Hence $\mathbf{M}_{\scriptscriptstyle{\text{env}}}^{\scriptscriptstyle{\Gamma,\mu}}$ solves (\[envpro\]) weakly and describes the motion of the environment seen from the tagged particle. 
Finally, as the operator realizing $L_{\scriptscriptstyle{\scriptscriptstyle{\text{coup}}}}$ (see (\[igencoup\])) we consider $$\begin{gathered} L^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}\mathfrak{F}(\xi,\gamma)=L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F(\gamma)\,f(\xi)-2\,\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d}\\ +\sum_{x\in\gamma}\Big(\nabla\phi(x),\nabla f(\xi)\Big)_{\mathbb{R}^d}+\Delta f(\xi)\,F(\gamma) \quad\mbox{for }d\xi\otimes\mu\mbox{-a.e.~}(\xi,\gamma)\in\mathbb{R}^d\times\Gamma,\end{gathered}$$ where $\mathfrak{F}\in C_0^\infty(\mathbb{R}^d)\otimes \mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ with $\mathfrak{F}(x,\gamma)=f(x)\,F(\gamma)$ for $f\in C_0^\infty(\mathbb{R}^d)$, $F\in \mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ and $\mu\in\mathcal{G}_{\scriptscriptstyle{ibp}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$, $0<z<\infty$. The associated symmetric bilinear form is given by $$\begin{gathered} \mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}(\mathfrak{F},\mathfrak{G}) =\int_{\mathbb{R}^d\times\Gamma}\Big(({\nabla^{\Gamma}_{\gamma}}-\nabla)\mathfrak{F}(\xi,\gamma),({\nabla^{\Gamma}_{\gamma}}-\nabla)\mathfrak{G}(\xi,\gamma)\Big)_{\mathbb{R}^d} \\+\Big(\nabla^\Gamma \mathfrak{F}(\xi,\gamma),\nabla^\Gamma \mathfrak{G}(\xi,\gamma)\Big)_{T_\gamma\Gamma}\,d\xi\,d{\mu}(\gamma),\quad\\ \mathfrak{F},\mathfrak{G}\in C_0^\infty(\mathbb{R}^d)\otimes \mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma).\end{gathered}$$ Applying a strategy as used for tackling $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ and $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$ we have that also $(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}))$ is a conservative, local, quasi-regular Dirichlet form, where $\hat{\mu}:=d\xi\otimes\mu$. Therefore, there exists a conservative diffusion process $\mathbf{M}_{\scriptscriptstyle{\text{coup}}}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}$ taking values in $\mathbb{R}^d\times\Gamma$ for $d\ge 2$ (for $d=1$ the process exists only in the larger space $\mathbb{R}^d\times\ddot{\Gamma}$, where $\ddot{\Gamma}$ is the configuration space of multiple configurations) solving the martingale problem associated to (\[tppro\]) and (\[envpro\]). Thus $\mathbf{M}_{\scriptscriptstyle{\text{coup}}}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}$ realizes the coupling of the motion of the tagged particle and the motion of the environment seen from the tagged particle. Then we obtain the tagged particle process by a projection of the process $\mathbf{M}_{\scriptscriptstyle{\text{coup}}}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}$ to its first component. Note that the resulting process in general is no longer a Markov process. The progress achieved in this paper may be summarized by the following list of core results: - We prove an integration by parts formula for $\nabla^\Gamma$ with respect to grand canonical Gibbs measures $\mu$ fulfilling a Ruelle bound and having $\sigma=z\,\exp(-\phi)\,dx,~0<z<\infty$, as intensity measure, see Theorems \[thmintbyparts\]. 
- We provide a rigorous explicit representation of the generator $L^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ of the coupled process for functions in $C_0^\infty(\mathbb{R}^d)\otimes \mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$, see Theorem \[thmexcoup\]. - We prove quasi-regularity for $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}))$, the Dirichlet form corresponding to the environment process, see Lemma \[lemE22\]. - We show the existence of the tagged particle process with interaction potential rigorously by using Dirichlet form techniques, see Theorem \[thmexprocoup\] and Remark \[remextppro\]. - The process we construct is conservative and the unique solution to the martingale problem corresponding to the Friedrichs’ extension of $(L^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},C_0^\infty(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma))$, see Theorem \[thmexprocoup\]. - Our results give the first mathematically rigorous and complete construction of the tagged particle process in continuum with interaction potential. Here we would like to stress that all the above results hold for a very general class of interaction potentials. We only have to assume that the interaction potential is super stable (SS), integrable (I), lower regular (LR), differentiable and $L^q$ (D$\text{L}^\text{q}$), $q>d\ge 1$, and locally summable (LS). Hence we can treat interaction potentials which might have a singularity at the origin, non-trivial negative part and infinite range as e.g. the Lennard–Jones potential. Configuration spaces and Gibbs measures {#defcangm} ======================================= Configuration space and Poisson measure --------------------------------------- Let ${\mathbb R}^{d},~d \in \mathbb{N}$, be equipped with the norm $|\cdot|_{{\mathbb R}^{d}}$ given by the Euclidean scalar product $(\cdot, \cdot)_{{\mathbb R}^{d}}$. By ${\mathcal B}({\mathbb R}^{d})$ we denote the corresponding Borel $\sigma$-algebra. ${\mathcal O}_c({\mathbb R}^{d})$ denotes the system of all open sets in ${\mathbb R}^{d}$, which have compact closure and $\mathcal{B}_c(\mathbb{R}^d)$ the sets from $\mathcal{B}(\mathbb{R}^d)$ having compact closure. The Lebesgue measure on the measurable space $({\mathbb R}^{d}, {\mathcal B}({\mathbb R}^{d}))$ we denote by $dx$. The *configuration space* $\Gamma$ over ${\mathbb R}^{d}$ is defined by $$\begin{aligned} \Gamma := \left\{ \gamma \subset {\mathbb R}^{d} \big| \, |\gamma \cap K| < \infty \, \,\, \mbox{for any compact} \, K \subset {\mathbb R}^{d} \right\}.\end{aligned}$$ Here $|A|$ denotes the cardinality of a set $A$. Via the identification of $\gamma \in \Gamma$ with $\sum_{x \in \gamma} \varepsilon_{x} \in {\mathcal M}_p({\mathbb R}^{d})$, where $\varepsilon_{x}$ denotes the Dirac measure in $x \in {\mathbb R}^{d}$, $\Gamma$ can be considered as a subset of the set ${\mathcal M}_p({\mathbb R}^{d})$ of all positive, integer-valued Radon measures on ${\mathbb R}^{d}$. 
Hence $\Gamma$ can be topologized by the vague topology, i.e., the topology generated by maps $$\begin{aligned} \label{liftmap} \gamma \mapsto \, \langle f,\gamma \rangle \, := \int_{{\mathbb R}^{d}} f(x) \,d\gamma(x) = \sum_{x \in \gamma} f(x),\end{aligned}$$ where $f \in C_{0}({\mathbb R}^{d})$, the set of continuous functions on ${\mathbb R}^{d}$ with compact support. We denote by ${\mathcal B}({\Gamma})$ the corresponding Borel $\sigma$-algebra. For a fixed intensity measure $\sigma$ on $(\mathbb{R}^d,\mathcal{B}(\mathbb{R}^d))$ we denote by $\pi_{\scriptscriptstyle{\sigma}}$ the Poisson measure on $(\Gamma,\mathcal{B}(\Gamma))$ with intensity measure $\sigma$. Fore more details, see e.g. [@AKR98a], [@Ka83] and [@KMM78]. Grand canonical and canonical Gibbs measures {#gcgm} -------------------------------------------- Let $\phi$ be a symmetric pair potential, i.e., a measurable function $\phi: {\mathbb R}^d \to {\mathbb R} \cup \{+ \infty \}$ such that $\phi(x) = \phi(-x)\in\mathbb{R}$ for $x\in\mathbb{R}^d\setminus\{0\}$. Any pair potential $\phi$ defines a potential $\Phi_{\scriptscriptstyle{\phi}}$ as follows. We set $$\begin{aligned} \Phi_{\scriptscriptstyle{\phi}}(\gamma):=0\mbox{ for }|\gamma|\not=2\quad\text{and}\quad\Phi_{\scriptscriptstyle{\phi}}(\gamma)=\phi(x-y)\mbox{ for }\gamma=\{x,y\}\subset\mathbb{R}^d.\end{aligned}$$ For a given pair potential $\phi$ we define the *potential energy* $E:\Gamma\to\mathbb{R}\cup\{+\infty\}$ by $$\begin{aligned} \gamma\mapsto E(\gamma):= \left\{\begin{array}{ll} \sum_{\{x,y\}\subset\gamma}\phi(x-y), & \text{if }\sum_{\{x,y\}\subset\gamma}|\phi(x-y)|<\infty\\ +\infty, & \text{otherwise} \end{array}\right.,\end{aligned}$$ where the sum over the empty set is defined to be zero. The *interaction energy* between to configurations $\gamma$ and $\eta$ from $\Gamma$ is defined by $$\begin{aligned} W(\gamma\,|\,\eta):= \left\{\begin{array}{ll} \sum_{x\in\gamma,y\in\eta}\phi(x-y), & \text{if }\sum_{x\in\gamma,y\in\eta}|\phi(x-y)|<\infty\\ +\infty, & \text{otherwise} \end{array}\right.\end{aligned}$$ (typically we have $\gamma\cap\eta=\varnothing$). In our terminology for any $\Lambda\in\mathcal{O}_c(\mathbb{R}^d)$ the *conditional energy* $E_{\scriptscriptstyle{\Lambda}}:\Gamma\to\mathbb{R}\cup\{+\infty\}$ is given by $$\begin{aligned} \gamma\mapsto E_{\scriptscriptstyle{\Lambda}}(\gamma):=E(\gamma_\Lambda)+W(\gamma_\Lambda\,|\,\gamma_{\Lambda^c}).\end{aligned}$$ To introduce *grand canonical Gibbs measures* on $(\Gamma,\mathcal{B}(\Gamma))$ we need the notion of a *Gibbsian specification*. For any $\Lambda\in\mathcal{O}_c(\mathbb{R}^d)$ the specification $\Pi^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}$ is defined for any $\gamma\in\Gamma$, $\Delta\in\mathcal{B}(\Gamma)$, by (see e.g. 
[@Pr76]) $$\begin{aligned} \Pi^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma,\Delta):=1_{\scriptscriptstyle{\left\{Z^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}<\infty\right\}}}(\gamma)\left(Z^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma)\right)^{-1}\!\!\int_{\Gamma} \!\!1_{\Delta}(\gamma_{{\Lambda}^c}\!\cup\!\gamma'_{{\Lambda}})\exp\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda}^c}\!\cup\!\gamma'_{{\Lambda}})\right)d\pi_{\scriptscriptstyle{\sigma}}(\gamma'),\end{aligned}$$ where $$\begin{aligned} Z^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma):=\int_{\Gamma}\exp\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda}^c}\!\cup\!\gamma'_{{\Lambda}})\right)d\pi_{\scriptscriptstyle{\sigma}}(\gamma')\end{aligned}$$ and $1_{\scriptscriptstyle{\left\{Z^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}<\infty\right\}}}$ denotes the indicator function of the set $\{\gamma\in \Gamma\,|\,Z^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma)<\infty\}$. A probability measure $\mu$ on $(\Gamma,\mathcal{B}(\Gamma))$, we write $\mu\in\mathcal{M}^1(\Gamma,\mathcal{B}(\Gamma))$, is called a grand canonical Gibbs measure corresponding to the potential $\Phi_{\scriptscriptstyle{\phi}}$ and the intensity measure $\sigma$ if it satisfies the *Dobrushin-Lanford-Ruelle-equation (DLR)*: $$\begin{aligned} \mu\,\Pi^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}=\mu\quad\mbox{for all }\Lambda\in\mathcal{O}_c(\mathbb{R}^d). \end{aligned}$$ For $\Lambda\in\mathcal{O}_c(\mathbb{R}^d)$ define for $\gamma\in\Gamma,~\Delta\in\mathcal{B}(\Gamma)$ $$\begin{aligned} \hat{\Pi}^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma,\Delta):=\left\{ \begin{array}{ll} \frac{\Pi^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma,\Delta\cap\{\eta\in\Gamma\,|\,\eta(\Lambda)=\gamma(\Lambda)\})}{\Pi^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma,\{\eta\in\Gamma\,|\,\eta(\Lambda)=\gamma(\Lambda)\})}, & \mbox{if }\Pi^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}(\gamma,\{\eta\in\Gamma\,|\,\eta(\Lambda)=\gamma(\Lambda)\})>0\\ 0, & \mbox{otherwise} \end{array} \right..\end{aligned}$$ A probability measure $\mu$ on $(\Gamma,\mathcal{B}(\Gamma))$ is called a [*canonical Gibbs measure*]{} to the potential $\Phi_{\scriptscriptstyle{\phi}}$ and the intensity $\sigma$ if $$\begin{aligned} \mu\,\hat{\Pi}^{\scriptscriptstyle{\sigma}}_{\scriptscriptstyle{\Lambda}}=\mu\quad\mbox{for all }\Lambda\in\mathcal{O}_c(\mathbb{R}^d).\end{aligned}$$ In the sequel we assume that the intensity measure $\sigma$ is absolutely continuous with respect to the Lebesgue measure with a bounded, non-negative density $\varrho$ and an activity parameter $0<z<\infty$, i.e., $\frac{d\sigma}{dx}=z\varrho$, $0<z<\infty$. We then denote by $\mathcal{G}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\varrho)$, $0<z<\infty$, the set of corresponding grand canonical Gibbs measures and by $\mathcal{G}^{\scriptscriptstyle{c}}(\Phi_{\scriptscriptstyle{\phi}},\varrho)$, the set of corresponding canonical Gibbs measures. Due to [@Pr79 Prop. 
2.1] we have for given potential $\Phi_{\scriptscriptstyle{\phi}}$ and a bounded, non-negative density function $\varrho$ that $$\begin{aligned} \label{inclu} \mathcal{G}^{\scriptscriptstyle{\text{gc}}}(\Phi_{\scriptscriptstyle{\phi}},z\varrho)\subset\mathcal{G}^{\scriptscriptstyle{\text{c}}}(\Phi_{\scriptscriptstyle{\phi}},\varrho),~0<z<\infty.\end{aligned}$$ $K$-transform and correlation measures {#ss23} -------------------------------------- Next, we recall the definition of correlation functions using the concept of the $K$-transform, see [@KK99a] for a detailed study. Denote by $\Gamma_0$ the space of finite configurations over $\mathbb{R}^d$: $$\begin{aligned} \Gamma_0 := \bigsqcup_{n=0}^\infty \Gamma_{0}^{(n)},\quad\Gamma^{(n)}_{0}:=\{\eta\subset \mathbb{R}^d\,|\,|\eta|=n\},\quad\Gamma^{(0)}_{0}:=\{\varnothing\}.\end{aligned}$$ Let $\widetilde{\mathbb{R}^{d\times n}}:=\{(x_1,\ldots,x_n)\in \mathbb{R}^{d\times n}\,|\,x_k\not=x_j,~j\not=k\}$ and let $S^n$ denote the group of all permutations of $\{1,\dots,n\}$. Through the natural bijection $\widetilde{\mathbb{R}^{d\times n}}/S^n \longleftrightarrow \Gamma_{0}^{(n)}$ one defines a topology on $\Gamma^{(n)}_{0}$. Let ${\mathcal B}(\Gamma^{(n)}_{0})$ denote the Borel $\sigma$-algebra on $\Gamma^{(n)}_{0}$. We equip $\Gamma_0$ with the topology $\mathcal{O}(\Gamma_0)$ of disjoint union. The Borel $\sigma$-algebra we denote by $\mathcal{B}(\Gamma_0)$. A ${\mathcal B}(\Gamma_0)$-measurable function $G \colon \Gamma_0 \to {\mathbb R}$, $G\in L^0(\Gamma_0)$ for short, is said to have bounded support if there exist $\Lambda \in {\mathcal O}_c({\mathbb R}^{d})$ and $N \in {\mathbb N}$ such that $\mbox{supp}(G) \subset \bigsqcup_{n=0}^N \Gamma_{0, \Lambda}^{(n)}$, where $\Gamma_{0, \Lambda}^{(n)}:=\{\eta\subset\Lambda\,|\,|\eta|=n\}$. For any $\gamma \in {\Gamma}$ let $\sum_{\eta \Subset \gamma}$ denote the summation over all $\eta \subset \gamma$ such that $|\eta| < \infty$. For a function $G: \Gamma_0 \to {\mathbb R}$, the $K$-transform of $G$ is defined by $$\begin{aligned} \label{eq9} (KG)(\gamma):= \sum_{\eta \Subset \gamma} G(\eta)\end{aligned}$$ for each $\gamma \in \Gamma$ such that at least one of the series $\sum_{\eta \Subset \gamma} G^+(\eta)$ or $\sum_{\eta \Subset \gamma} G^-(\eta)$ converges, where $G^{+} := \max \{ 0, G\}$ and $G^{-} := -\min \{ 0, G\}$. The convolution $\star$ is defined by $$\begin{gathered} \label{propconv} \star:L^0(\Gamma_0)\times L^0(\Gamma_0)\to L^0(\Gamma_0)\\ (G_1,G_2)\mapsto (G_1\star G_2)(\eta):=\sum_{(\xi_1,\xi_2,\xi_3)\in{\mathcal{P}^3_{\varnothing}}(\eta)}G_1(\xi_1\cup\xi_2)G_2(\xi_2\cup\xi_3),\end{gathered}$$ where $\mathcal{P}^3_{\varnothing}(\eta)$ denotes the set of all partitions $(\xi_1,\xi_2,\xi_3)$ of $\eta\in\Gamma_0$ in $3$ parts, i.e., all triples $\xi_i\subset\eta,~\xi_i\cap\xi_j=\varnothing$ if $i\not=j$, and $\xi_1\cup\xi_2\cup\xi_3=\eta$. We say $G\in L^0_{\text{ls}}(\Gamma_0)$ iff $G\in L^0(\Gamma_0)$ and there exists $\Lambda\in\mathcal{O}_c(\mathbb{R}^d)$ such that $G|_{\Gamma_0\setminus\Gamma_\Lambda}=0$. I.e., functions in $L^0_{\text{ls}}(\Gamma_0)$ are locally supported. Let $G_1, G_2\in L^0_{\text{ls}}(\Gamma_0)$. Then due to [@KK99a Prop. 3.11]. $$\begin{aligned} K(G_1\star G_2)=KG_1\,KG_2.\end{aligned}$$ Let $\mu$ be a probability measure on $(\Gamma,{\mathcal B}(\Gamma))$. 
The correlation measure corresponding to $\mu$ is defined by $$\begin{aligned} \rho_\mu(A) := \int_{\Gamma}(K1_A)(\gamma) \,d\mu(\gamma), \qquad A \in {\mathcal B}(\Gamma_0).\end{aligned}$$ $\rho_\mu$ is a measure on $(\Gamma_0, {\mathcal B} (\Gamma_0))$ (see [@KK99a] for details, in particular, measurability issues). Let $G \in L^1(\Gamma_0,\rho_\mu)$, then $ \| KG \|_{L^1(\Gamma,\mu)} \le \| K|G| \|_{L^1(\Gamma,\mu)} = \| G \|_{L^1(\Gamma_0,\rho_\mu)}$, hence $KG \in L^1(\Gamma,\mu)$ and $KG(\gamma)$ is for $\mu$-a.e. $\gamma \in \Gamma$ absolutely convergent. Moreover, then obviously $$\begin{aligned} \label{eq302} \int_{\Gamma_0} G(\eta) \,d\rho_\mu(\eta) = \int_{\Gamma}(KG)(\gamma) \,d\mu(\gamma),\end{aligned}$$ see [@KK99a], [@Len75a], [@Len75b]. For any $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\,\varrho),~0<z<\infty$, the correlation measure $\rho_\mu$ is absolutely continuous with respect to the Lebesgue-Poisson measure $\lambda_{\sigma}$, see e.g. [@KK99a Rem. 4.4] and the references therein. Its Radon-Nikodym derivative $$\begin{aligned} \rho_{\mu}(\eta) := \frac{d\rho_\mu}{d\lambda_{\sigma}}(\eta), \qquad \eta \in \Gamma_0,\end{aligned}$$ with respect to $\lambda_{\sigma}$ we denote by the same symbol and the functions $$\begin{aligned} \rho_{\mu}^{(n)}(x_1,\dots,x_n) := \rho_{\mu}(\{x_1,\dots,x_n\}), \quad x_1,\dots,x_n \in {\mathbb R}^d, \,\, x_i \neq x_j \,\, \mbox {if} \,\, i \neq j,\end{aligned}$$ are called the $n$-th order correlation functions of the measure $\mu$. We put the following restriction on the correlation measures under consideration. (RB) : We say that a correlation measure $\rho_\mu:\mathcal{B}(\Gamma_0)\to (0,\infty)$ corresponding to a measure $\mu$ on $(\Gamma,\mathcal{B}(\Gamma))$ fulfills the *Ruelle-bound*, if for some $C_R\in (0,\infty)$ $$\begin{aligned} \rho_\mu(\gamma)\le(C_R)^{|\gamma|},\quad\mbox{for }\lambda_\sigma\mbox{-a.a. }\gamma\in\Gamma_0.\end{aligned}$$ Denote by $\mathcal{G}_{\scriptscriptstyle{Rb}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\varrho)$, $0<z<\infty$, the set of all grand canonical Gibbs measures from $\mathcal{G}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\varrho)$, $0<z<\infty$, which fulfill (RB). Conditions on the interactions {#subcondpot} ------------------------------ For every $r = (r_1, \ldots, r_d) \in {\mathbb Z}^d$ we define a cube $$\begin{aligned} Q_r = \Big{\{} x \in {\mathbb R}^d \, \Big{|} \, r_i - 1/2 \le x_i < r_i + 1/2 \Big{\}}.\end{aligned}$$ These cubes form a partition of ${\mathbb R}^d$. For any $\gamma \in \Gamma$ we set $\gamma_r := \gamma_{Q_r}, \, r \in {\mathbb Z}^d$. Additionally, we introduce for $n \in {\mathbb N}$ the cube $\Theta_n$ with side length $2n -1$ centered at the origin in ${\mathbb R}^d$. (SS) : ([*Superstability*]{}) There exist $0<A<\infty,~0\le B <\infty$ such that, if $\gamma = \gamma_{\Theta_n}$ for some $n \in {\mathbb N}$, then $$\begin{aligned} E_{\Theta_n}(\gamma) \, \ge \, \sum_{r \in {\mathbb Z}^d} \Big{(} A |\gamma_r|^2 - B |\gamma_r| \Big{)}.\end{aligned}$$ (SS) obviously implies: (S) : ([*Stability*]{}) For any $\Lambda \in {\mathcal O}_c({\mathbb R}^{d})$ and for all $\gamma \in \Gamma$ we have $$\begin{aligned} E_{\Lambda}(\gamma) \, \ge \, -B |\gamma_{\Lambda}|,\quad 0\le B <\infty.\end{aligned}$$ As a consequence of (S), in turn, we have that $\phi$ is bounded from below. 
We also need (I) : ([*Integrability*]{}) We have: $$\begin{aligned} \int_{{\mathbb R}^{d}} | \exp(- \phi(x)) - 1 | \,dx < \infty.\end{aligned}$$ (LR) : ([*Lower Regularity*]{}) There exists a decreasing positive function $a: {\mathbb N} \to (0,\infty)$ such that $$\begin{aligned} \sum_{r \in {\mathbb Z}^d} a(\| r \|_{\max}) < \infty\end{aligned}$$ and for any $\Lambda^{\prime}, \Lambda^{\prime \prime}$ which are finite unions of cubes of the form $Q_r$ and disjoint, $$\begin{aligned} W(\gamma^{\prime} \mid \gamma^{\prime \prime}) \ge - \sum_{r^{\prime}, r^{\prime \prime} \in {\mathbb Z}^d} a(\| r^{\prime} - r^{\prime \prime} \|_{\max}) \, |\gamma^{\prime}_{r^\prime}| \, |\gamma^{\prime \prime}_{r^{\prime \prime}}|,\end{aligned}$$ provided $\gamma^{\prime} = \gamma^{\prime}_{\Lambda^{\prime}}, \, \gamma^{\prime \prime} = \gamma^{\prime \prime}_{\Lambda^{\prime \prime}}$.\ Here and below $\|x\|_{\max}:=\max_{1\le i\le d}|x_i|,\quad x=(x_1,\ldots,x_d)\in\mathbb{R}^d$. Using an argument as in [@KK01 Prop. 2.17], the notion of *Lower Regularity* (LR) given here implies the one defined in [@KK01 Sect. 2.5]. Note that we are dealing with an intensity measure $\sigma=z\varrho\,dx$, $0<z<\infty$, where $\varrho$ is a bounded, non-negative density. (D[**$\text{L}^\text{q}$**]{}) : ([*Differentiability and $L^q$*]{}) The function $\exp(-\phi)$ is weakly differentiable on $\mathbb{R}^d$, $\phi$ is weakly differentiable on $\mathbb{R}^d\setminus\{0\}$. The gradient $\nabla\phi$, considered as a $dx$-a.e. defined function on $\mathbb{R}^d$, satisfies $$\begin{aligned} \nabla\phi \in L^1(\mathbb{R}^d,\exp(-\phi)\,dx)\cap L^q(\mathbb{R}^d,\exp(-\phi)dx),\quad 1\le q<\infty.\end{aligned}$$ Note that for many typical potentials in Statistical Physics we have $\phi\in C^\infty(\mathbb{R}^d\setminus\{0\})$. For such potentials, regular outside the origin, condition (D$\text{L}^\text{2}$) nevertheless does not exclude a singularity at the point $0\in\mathbb{R}^d$. Let $(\Omega_n)_{n\in\mathbb{N}}$ be a partition of $\mathbb{R}^d$ in $\mathcal{B}_c(\mathbb{R}^d)$, i.e. $\Omega_n\cap\Omega_m=\varnothing$ for $m\not=n$, $n,m\in\mathbb{N}$, and $\bigsqcup_{n=1}^\infty\Omega_n=\mathbb{R}^d$. We set $$\begin{aligned} \Gamma_{\scriptscriptstyle{\text{fd}}}\big((\Omega_n)_{n\in\mathbb{N}}\big):=\bigcup_{M\in\mathbb{N}}\bigcap_{n\in\mathbb{N}}\big\{\gamma\in\Gamma\,\big|\,|\gamma_{\Omega_n}|\le M\sigma(\Omega_n)\big\}.\end{aligned}$$ $\Gamma_{\scriptscriptstyle{\text{fd}}}\big((\Omega_n)_{n\in\mathbb{N}}\big)$ is called the *set of configurations of finite density*. Furthermore, we set $\Lambda_n:=B_n$, $n\in\mathbb{N}$, where $B_r$, $r\in(0,\infty)$, denotes the open ball with radius $r$ around the origin with respect to the Euclidean norm on $\mathbb{R}^d$. (LS) : ([*Local Summability*]{}) Let $\Omega_1:=\Lambda_1$ and $\Omega_n:=\Lambda_n\setminus\Lambda_{n-1}$ for $n\ge 2$. Assume that $\sigma(\Omega_n)\ge \kappa\,(n+1)$, for some $\kappa\in(0,\infty)$ and all $n\in\mathbb{N}$. For all $\Lambda$ in $\mathcal{O}_c(\mathbb{R}^d)$ and all $\gamma\in \Gamma_{\scriptscriptstyle{\text{fd}}}\big((\Omega_n)_{n\in\mathbb{N}}\big)$ we have $$\begin{aligned} \lim_{n\to\infty}\sum_{y\in\gamma_{\scriptscriptstyle{\Lambda_n\setminus\Lambda}}}\nabla\phi(\cdot-y)\mbox{ exists in $L^1_{\text{loc}}(\Lambda,\sigma)$}.\end{aligned}$$ 1.
Note that in the case $\varrho=\exp(-\phi)$ the assumption $\sigma(\Omega_n)\ge \kappa\,(n+1)$ for some $\kappa\in (0,\infty)$ and all $n\in\mathbb{N}$, is fulfilled, whenever the potential $\phi$ is bounded outside of a set $\Lambda\in\mathcal{B}_c(\mathbb{R}^d)$. 2. In the case $\sigma(\Omega_n)\ge \kappa\,(n+1)$ for some $\kappa\in (0,\infty)$ and all $n\in\mathbb{N}$, one has for $\mu\in\mathcal{G}_{\scriptscriptstyle{Rb}}^{\scriptscriptstyle{gc}}(\Phi_{\scriptscriptstyle{\phi}},z\varrho),~0<z<\infty$, that $\mu(\Gamma_{\scriptscriptstyle{\text{fd}}}\big((\Omega_n)_{n\in\mathbb{N}}\big))=1$, due to [@KK01 Theo. 5.4]. In this case the grand canonical Gibbs measure $\mu$ is called *tempered*. 3. Condition (LS) seems to be more complicated to check. In [@AKR98b Exam. 4.1], however, it is shown that the assumption $$\begin{aligned} \|\nabla\phi(x)\|_{\max}\le\frac{C}{\| x\|_{\max}^\alpha},\quad \|x\|_{\max} \ge R,\end{aligned}$$ for some $0<R,C<\infty,\alpha>d+1$, together with (D$\text{L}^\text{2}$) implies (LS). In our setting the proof is exactly the same as given there. A concrete example fulfilling our assumptions is the *Lennard–Jones potential* (see Figure \[figpot\] below). ![\[figpot\]A typical example: The $(6,12)$-Lennard–Jones potential, i.e. $\phi(x)=0.04\left(\frac{1}{|x|^{12}}-\frac{1}{|x|^{6}}\right),\quad x\in\mathbb{R}^d\setminus\{0\}$.](lenjo2){height="5cm"} Analysis and geometry on configuration spaces {#secgeometry} --------------------------------------------- On $\Gamma$ we define the set of smooth cylinder functions $$\begin{aligned} \mathcal{F}C_b^\infty(C_0^\infty(\mathbb{R}^d),\Gamma):=\!\!\Big\{g(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle)\,\!\!\left|\!\,N\in\mathbb{N},~g\in C_b^\infty(\mathbb{R}^N), f_1,\ldots, f_N\in C_0^\infty(\mathbb{R}^d)\right.\!\!\Big\}.\end{aligned}$$ Clearly, $\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ is dense in $L^2(\Gamma,\mathcal{B}(\Gamma),\pi_{{\sigma}})$. Let $V_0(\mathbb{R}^d)$ denote the set of smooth vector fields on $\mathbb{R}^d$. For $v\in V_0(\mathbb{R}^d)$ the *directional derivatives on* $\Gamma$ for any $F=g_{\scriptscriptstyle{F}}(\langle f_1,\cdot\rangle,\ldots,\langle f_N,\cdot\rangle)\in\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ are given by $$\begin{gathered} \label{equtbundle} \nabla_v^\Gamma F(\gamma)=\sum_{i=1}^N\partial_i g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\langle\nabla_v f_i,\gamma\rangle\\ =\int_{\mathbb{R}^d}\left(\sum_{i=1}^N\partial_i g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\nabla f_i,v\right)_{\mathbb{R}^d}\,d\gamma =\left(\nabla^\Gamma F(\gamma),v\right)_{\scriptscriptstyle{L^2(\mathbb{R}^d\to\mathbb{R}^d,\gamma)}},\end{gathered}$$ with $\nabla_v f_i:=(\nabla f_i,v)_{\scriptscriptstyle{\mathbb{R}^d}},~1\le i\le N$, $\gamma\in\Gamma$. Here $\nabla$ denotes the gradient on $\mathbb{R}^d$, $\partial_i$ the directional derivative with respect to the $i$-th coordinate for $1\le i\le N$ and $L^2(\mathbb{R}^d\to\mathbb{R}^d,\gamma)$ the space of $\gamma$-square integrable vector fields on $\mathbb{R}^d$. Next we define a gradient for functions in $\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ which corresponds to the directional derivatives in (\[equtbundle\]). 
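Before stating the gradient itself, observe that (\[equtbundle\]) can be read as the derivative of $F$ obtained by moving every point of $\gamma$ along the flow of $v$. The following minimal Python sketch is a toy illustration of this reading in $d=1$ with $N=1$: the choices of $g_{\scriptscriptstyle{F}}$, $f$, $v$ and of the four-point configuration are ours, and $v$ is smooth but not compactly supported, which is immaterial for a finite configuration. It compares the right-hand side of (\[equtbundle\]) with a finite difference along this flow.

```python
import numpy as np

# cylinder function F(gamma) = g(<f, gamma>) with N = 1, on a finite configuration in R
f, df = (lambda x: np.exp(-x**2)), (lambda x: -2.0 * x * np.exp(-x**2))
v, g, dg = np.sin, np.tanh, (lambda s: 1.0 - np.tanh(s)**2)

gamma = np.array([-1.2, 0.3, 0.7, 2.5])          # the configuration {x_1, ..., x_4}
F = lambda pts: g(f(pts).sum())                  # F(gamma) = g(<f, gamma>)

# right-hand side of (equtbundle): g'(<f, gamma>) <(grad f, v), gamma>
analytic = dg(f(gamma).sum()) * (df(gamma) * v(gamma)).sum()

# derivative of F when every point moves along the flow x -> x + t v(x), at t = 0
t = 1e-6
numeric = (F(gamma + t * v(gamma)) - F(gamma - t * v(gamma))) / (2 * t)
print(analytic, numeric)                         # agree up to O(t^2)
```

With this picture in mind we return to the definition of the gradient.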
So let $F=g_{\scriptscriptstyle{F}}(\langle f_1,\cdot\rangle,\ldots,\langle f_N,\cdot\rangle)\in\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma),~v\in V_0(\mathbb{R}^d)$ and $\gamma\in\Gamma$. The *gradient* $\nabla^\Gamma$ of $F\in\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ at $\gamma\in\Gamma$ is defined by $$\begin{aligned} \label{defgrad} \Gamma\ni\gamma\mapsto\nabla^\Gamma F(\gamma):=\sum_{i=1}^N\partial_i g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\nabla f_i\in L^2(\mathbb{R}^d\to\mathbb{R}^d,\gamma).\end{aligned}$$ Equation (\[equtbundle\]) immediately leads to the appropriate *tangent space to* $\Gamma$, namely $$\begin{aligned} T_{{\gamma}}\Gamma:=L^2(\mathbb{R}^d\to\mathbb{R}^d,\gamma),\quad\gamma\in\Gamma,\end{aligned}$$ equipped with the usual $L^2$-inner product. Note that $\nabla^\Gamma F$ is independent of the representation of $F$ in (\[defgrad\]) and $\nabla^\Gamma F(\gamma)\in T_{{\gamma}}\Gamma$. The corresponding *tangent bundle* is $$\begin{aligned} T\Gamma=\bigcup_{\gamma\in\Gamma}T_\gamma\Gamma.\end{aligned}$$ *Finitely based vector fields on* $(\Gamma,T\Gamma)$ can be defined as follows: $$\begin{aligned} \Gamma\ni\gamma\mapsto\sum_{i=1}^N F_i(\gamma)v_i\in V_0(\mathbb{R}^d),\end{aligned}$$ where $F_1,\ldots, F_N\in\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma),~v_1,\ldots,v_N\in V_0(\mathbb{R}^d)$. Let $\mathcal{FV}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ be the set of all such maps. Note that $\nabla^\Gamma F\in \mathcal{FV}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ for all $F\in\mathcal{F}C_b^\infty( C_0^\infty(\mathbb{R}^d),\Gamma)$ and that each $v\in V_0(\mathbb{R}^d)$ is identified with the vector field $\gamma\mapsto v$ in $T\Gamma$ which is constant modulo taking $\gamma$-classes. For details we refer to [@AKR98a], [@AKR98b]. An Integration by parts formula {#secintbp1} =============================== In this section our aim is to prove an integration by parts formula for functions in\ $\mathcal{F}C_b^\infty(C_0^\infty(\mathbb{R}^d),\Gamma)$ with respect to $\mu\in \mathcal{G}_{\scriptscriptstyle{\text{Rb}}}^{\scriptscriptstyle{\text{gc}}}(\Phi_\phi,z\exp(-\phi)),~0<z<\infty$, where $\phi$ fulfills (SS), (I) and (LR). Note that $\mathcal{G}_{\scriptscriptstyle{\text{Rb}}}^{\scriptscriptstyle{\text{gc}}}(\Phi_\phi,z\exp(-\phi)),~0<z<\infty$, is not empty see e.g. [@CoKu09]. The following considerations are along the lines of [@AKR98b Chap. 4.3]. We start with a technical lemma. \[lemconv\] Let $\phi$ be a pair potential satisfying conditions (SS), (I), (LR) and (D$\text{L}^\text{2}$). For any vector field $v\in V_0(\mathbb{R}^d)$ we consider the function $$\begin{aligned} \Gamma\ni\gamma\mapsto L_{v,k}^\phi(\gamma):=\left(\sum_{x\in\gamma_{\Lambda_k}}\!\!\!\!\big(\nabla\phi(x),v(x)\big)_{\mathbb{R}^d}\right)\!\!+\!\!\left(-\!\sum_{\{x,y\}\subset\gamma_{\Lambda_k}}\!\!\!\!\!\!\big(\nabla\phi(x-y),v(x)-v(y)\big)_{\mathbb{R}^d}\right)\in\mathbb{R}.\end{aligned}$$ Then for any $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$, and all $v\in V_0(\mathbb{R}^d)$ we have that $$\begin{aligned} L_v^{\phi,\mu}:=\lim_{k\to\infty}L_{v,k}^\phi\end{aligned}$$ exists in $L^2(\Gamma,\mu)$. Here $\Lambda_k,~k\in\mathbb{N}$, is defined as in Section \[defcangm\]. Let us at first consider the second summand. 
We set $$\begin{aligned} \varphi^{\scriptscriptstyle{(2)}}_k(x,y):=\left|\big(1_{\Lambda_k}(x)1_{\Lambda_k}(y)\nabla\phi(x-y),v(x)-v(y)\big)_{\mathbb{R}^d}\right|\end{aligned}$$ and define $$\begin{aligned} V^{\scriptscriptstyle{(2)}}_k(\gamma):=\left\{ \begin{array}{cc} \varphi_k^{\scriptscriptstyle{(2)}}(x,y),\quad &\mbox{if }\gamma=\{x,y\}\in\Gamma^{\scriptscriptstyle(2)}_{0}\\ 0,&\mbox{otherwise} \end{array}\right..\end{aligned}$$ Then by using (\[propconv\]) and (\[eq302\]), $$\begin{gathered} \int_{\Gamma}\Big|-\!\!\!\!\sum_{\{x,y\}\subset\gamma_{\Lambda_k}}\!\!\!\!\!\!\big(\nabla\phi(x-y),v(x)-v(y)\big)_{\mathbb{R}^d}\Big|^2\,d\mu(\gamma)\\ \le \int_{\Gamma}\Big(\sum_{\{x,y\}\subset\gamma}\big|\big(1_{\Lambda_k}(x)1_{\Lambda_k}(y)\nabla\phi(x-y),v(x)-v(y)\big)_{\mathbb{R}^d}\big|\Big)^2\,d\mu(\gamma)\\ =\int_\Gamma \Big(\big(KV^{\scriptscriptstyle{(2)}}_k\big)(\gamma)\Big)^2\,d\mu(\gamma) =\int_\Gamma \big(K(V^{\scriptscriptstyle{(2)}}_k\star V^{\scriptscriptstyle{(2)}}_k)\big)(\gamma)\,d\mu(\gamma)\\ =\int_{\Gamma_0}(V^{\scriptscriptstyle{(2)}}_k\star V^{\scriptscriptstyle{(2)}}_k)(\eta)\,d\rho_\mu(\eta) =\int_{\Gamma_0}\sum_{(\xi_1,\xi_2,\xi_3)\in\mathcal{P}^3_{\varnothing}(\eta)}V^{\scriptscriptstyle{(2)}}_k(\xi_1\cup\xi_2)\,V^{\scriptscriptstyle{(2)}}_{k}(\xi_2\cup \xi_3)\,d\rho_\mu(\eta)\\ =\frac{1}{4!}\int_{\mathbb{R}^{4d}}\varphi^{\scriptscriptstyle{(2)}}_k(x_1,x_2)\varphi^{\scriptscriptstyle{(2)}}_k(x_3,x_4)\,\rho_\mu^{\scriptscriptstyle{(4)}}(x_1,x_2,x_3,x_4)\,d\sigma^{\otimes 4}\\ +\frac{1}{3!}\int_{\mathbb{R}^{3d}}\varphi^{\scriptscriptstyle{(2)}}_k(x_1,x_2)\varphi^{\scriptscriptstyle{(2)}}_k(x_2,x_3)\,\rho_\mu^{\scriptscriptstyle{(3)}}(x_1,x_2,x_3)\,d\sigma^{\otimes 3}\\ +\frac{1}{2!}\int_{\mathbb{R}^{2d}}\varphi^{\scriptscriptstyle{(2)}}_k(x_1,x_2)^2\,\rho_\mu^{\scriptscriptstyle{(2)}}(x_1,x_2)\,d\sigma^{\otimes 2}\\ \le C^{\scriptscriptstyle{(1)}}\int_{\mathbb{R}^{4d}}\varphi^{\scriptscriptstyle{(2)}}_k(x_1,x_2)\varphi^{\scriptscriptstyle{(2)}}_k(x_3,x_4)\,\rho_\mu^{\scriptscriptstyle{(4)}}(x_1,x_2,x_3,x_4)\,dx_1\ldots dx_4\\ +C^{\scriptscriptstyle{(2)}}\int_{\mathbb{R}^{3d}}\varphi^{\scriptscriptstyle{(2)}}_k(x_1,x_2)\varphi^{\scriptscriptstyle{(2)}}_k(x_2,x_3)\,\rho_\mu^{\scriptscriptstyle{(3)}}(x_1,x_2,x_3)\,dx_1\ldots dx_3\\ +C^{\scriptscriptstyle{(3)}}\int_{\mathbb{R}^{2d}}\varphi^{\scriptscriptstyle{(2)}}_k(x_1,x_2)^2\,\rho_\mu^{\scriptscriptstyle{(2)}}(x_1,x_2)\,dx_1\,dx_2,\end{gathered}$$ where in the last step we have used the boundedness of the density function $\varrho=\exp(-\phi)$ and $0<C^{\scriptscriptstyle{(m)}}<\infty,~m\in\{1,2,3\}$. The Mayer-Montroll equation for correlation measures, see e.g. [@KK99a], together with (RB) and (I), gives $$\begin{aligned} |\rho_\mu(x_1,\ldots x_p)|\le R_p\exp\left(-\sum_{i<j}\phi(x_j-x_i)\right),\quad 0<R_p<\infty,\end{aligned}$$ for all $p\in\mathbb{N},~x_1,\ldots,x_p\in\mathbb{R}^d$. From this point on we can proceed as in the proof of [@AKR98b Lem. 
4.1].\ For the first summand we set $$\begin{aligned} \varphi^{\scriptscriptstyle{(1)}}_k(x):=\left|\big(1_{\Lambda_k}(x)\nabla\phi(x),v(x)\big)_{\mathbb{R}^d}\right|\end{aligned}$$ and define correspondingly $$\begin{aligned} V^{\scriptscriptstyle{(1)}}_k(\gamma):=\left\{ \begin{array}{cc} \varphi_k^{\scriptscriptstyle{(1)}}(x),\quad &\mbox{if }\gamma=\{x\}\in\Gamma^{\scriptscriptstyle(1)}_{0}\\ 0,&\mbox{otherwise} \end{array}\right..\end{aligned}$$ Thus we obtain by using (\[propconv\]) and (\[eq302\]), $$\begin{gathered} \int_{\Gamma}\Big|-\sum_{x\in\gamma_{\Lambda_k}}\big(\nabla\phi(x),v(x)\big)_{\mathbb{R}^d}\Big|^2\,d\mu(\gamma)\\\le\int_{\Gamma_0}\sum_{(\xi_1,\xi_2,\xi_3)\in\mathcal{P}^3_{\varnothing}(\eta)}V^{\scriptscriptstyle{(1)}}_k(\xi_1\cup \xi_2)V^{\scriptscriptstyle{(1)}}_{k}(\xi_2\cup \xi_3)\,d\rho_\mu(\eta)\\ =\frac{z^2}{2}\int_{\mathbb{R}^{2d}}\varphi^{\scriptscriptstyle{(1)}}_k(x_1)\varphi^{\scriptscriptstyle{(1)}}_k(x_2)\,\rho_\mu^{\scriptscriptstyle{(2)}}(x_1,x_2)\exp(-\phi(x_1))\exp(-\phi(x_2))\,dx_1\,dx_2\\ +z\int_{\mathbb{R}^d}\varphi^{\scriptscriptstyle{(1)}}_k(x_1)^2\,\rho_\mu^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi(x_1))\,dx_1\\ \le C^{\scriptscriptstyle{(4)}}\left(\int_{\Lambda_k^2}\|\nabla\phi(x)\|_{\max}\exp(-\phi(x))\,\| v(x)\|_{\max}\,\,dx\right)^2\\ +C^{\scriptscriptstyle{(5)}}\int_{\Lambda_k}\|\nabla\phi(x)\|_{\max}^2\exp(-\phi(x))\,\| v(x)\|_{\max}^2\,dx\\ \le C^{\scriptscriptstyle{(6)}}(v)\|\nabla\phi\|^2_{L^1(\Lambda_k,\exp(-\phi)dx)}+C^{\scriptscriptstyle{(7)}}(v)\|\nabla\phi\|^2_{L^2(\Lambda_k,\exp(-\phi)dx)}<\infty\end{gathered}$$ with $C^{\scriptscriptstyle{(4)}}, C^{\scriptscriptstyle{(5)}}, C^{\scriptscriptstyle{(6)}}(v),C^{\scriptscriptstyle{(7)}}(v)\in(0,\infty),$ due to condition (D$\text{L}^\text{2}$) and $v\in V_0(\mathbb{R}^d)$. Finally since $\Lambda_k\uparrow\mathbb{R}^d$ as $k\to\infty$, it easily follows that $\left(L^{\phi}_{v,k}\right)_{k\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\Gamma,\mu)$ and since this space is complete, the limit exists. \[BL2\] Let $\phi$ be a pair potential satisfying conditions (SS), (I), (LR) and (D$\text{L}^\text{2}$). For $v\in V_0(\mathbb{R}^d)$ and $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$, we define $$\begin{aligned} B_v^{\phi,\mu}:=L^{\phi,\mu}_v+\big\langle\text{div~}v,\cdot\big\rangle\in L^2(\Gamma,\mu).\end{aligned}$$ Note that $\big\langle\text{div~}v,\cdot\big\rangle\in L^2(\Gamma,\mu)$, since $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. Now we are able to formulate an important result which is essential for our applications below. \[thmintbyparts\] Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (D$\text{L}^\text{2}$) and (LS). Let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. 
Then for $v\in V_0(\mathbb{R}^d)$ and $F,G\in\mathcal{F}C_b^\infty(C_0^\infty(\mathbb{R}^d),\Gamma)$ the following integration by parts formula holds: $$\begin{aligned} \int_\Gamma\nabla^\Gamma_vF\,G\,d\mu(\gamma)=-\int_\Gamma F\,\nabla_v^\Gamma G\,d\mu(\gamma)-\int_\Gamma F\,G\,B_v^{\phi,\mu}\,d\mu(\gamma).\end{aligned}$$ Let $F=g_F(\langle f_1,\cdot\rangle,\ldots,\langle f_N,\cdot\rangle)\in\mathcal{F}C_b^\infty(C_0^\infty(\mathbb{R}^d),\Gamma)$, $v\in V_0(\mathbb{R}^d)$ and choose $\Lambda\in\mathcal{O}_c(\mathbb{R}^d)$ such that $\bigcup_{i=1}^N\text{supp~}f_i\cup\text{supp~}v\subset\Lambda$. Using (\[inclu\]) we have $\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))\subset\mathcal{G}^{\scriptscriptstyle{c}}(\Phi_{\scriptscriptstyle{\phi}},\exp(-\phi))$, $0<z<\infty$. Hence $$\begin{gathered} \label{numerator} \int_\Gamma\nabla^\Gamma_v F\,d\mu(\gamma)=\int_\Gamma\hat{\Pi}_{\scriptscriptstyle{\Lambda}}^{\scriptscriptstyle{\sigma,\phi}}(\nabla_v^\Gamma F)\,d\mu(\gamma)\\ =\!\!\!\!\int_\Gamma\!\!\!\frac{\left(\int_{\Lambda^{\gamma(\Lambda)}}\!\!\!\nabla^\Gamma_vF(\gamma_{\Lambda^c}\!\!\cup\!\{x_1,\!\ldots\!, x_{\gamma(\Lambda)}\})\!\exp\!\!\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda}^c}\!\cup\!\{x_1,\!\ldots\!,x_{\gamma(\Lambda)}\})\!\right)\!\!\varrho^{\otimes \gamma(\Lambda)}\!\!dx_1\!\ldots\! dx_{\gamma(\Lambda)}\right)}{\int_{\Lambda^{\gamma(\Lambda)}}\exp\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda}^c}\!\cup\!\{x_1,\!\ldots\!,x_{\gamma(\Lambda)}\})\right)\,\varrho^{\otimes \gamma(\Lambda)}\,dx_1\!\ldots\! dx_{\gamma(\Lambda)}}\!d\mu(\gamma),\end{gathered}$$ where $\varrho^{\otimes\gamma(\Lambda)}:=\varrho(x_1)\cdot\ldots\cdot\varrho(x_{\gamma(\Lambda)})$, see Section \[defcangm\]. Fix $n\in\mathbb{N}$ and $\gamma\in\{\eta\in\Gamma\,|\,\eta(\Lambda)=n\}\cap \Gamma_{\text{fd}}\big((\Omega_m)_{m\in\mathbb{N}}\big)$, where $(\Omega_m)_{m\in\mathbb{N}}$ corresponds to $(\Lambda_m)_{m\in\mathbb{N}}$ as in (LS). Using [@KK01 Coro. 5.8] the numerator of the integrand in (\[numerator\]) for such $\gamma$ equals to $$\begin{gathered} \lim_{m\to\infty}\!\int_{\Lambda^n}\!\!\!\!\nabla_v^\Gamma F\big(\gamma_{\Lambda^c}\cup\{x_1,\!\ldots\!,x_n\}\big)\exp\!\Big(-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\!\ldots\!,x_n\}\big)\Big)\!\varrho^{\otimes n}\!(x_1,\ldots,x_n)dx_1\!\ldots\! dx_n\\ =\lim_{m\to\infty}\int_{\Lambda^n}\sum_{i=1}^N\Bigg(\partial_i g_F\!\bigg(\sum_{j=1}^n f_1(x_j),\ldots,\sum_{j=1}^n f_N(x_j)\bigg)\sum_{k=1}^n\nabla_v^{\mathbb{R}^d}f_i(x_k)\Bigg)\\ \times\,\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\,\varrho^{\otimes n}(x_1,\ldots,x_n)\,dx_1\ldots dx_n\\ =\lim_{m\to\infty}\sum_{k=1}^n\int_{\Lambda^n}\Bigg(\nabla_{x_k}g_F\bigg(\sum_{j=1}^n f_1(x_j),\ldots,\sum_{j=1}^n f_N(x_j)\bigg),v(x_k)\Bigg)_{\mathbb{R}^d}\\ \times\,\exp\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\})\right)\varrho^{\otimes n}(x_1,\ldots,x_n)\,dx_1\ldots dx_n.\end{gathered}$$ Here $\nabla_{x_k},~1\le k\le n$, denotes the gradient with respect to the $x_k$-th variable $(x_k\in\Lambda)$. 
Integrating by parts with respect to $x_k,~1\le k\le n$, we obtain $$\begin{gathered} \label{calcintbyp} -\lim_{m\to\infty}\sum_{k=1}^n\int_{\Lambda^n}g_F\bigg(\sum_{j=1}^n f_1(x_j),\ldots, \sum_{j=1}^n f_N(x_j)\bigg)\\ \times\Bigg(\bigg(\!\nabla_{x_k}\!\Big(\sum_{1\le i<j}^n\!\!\phi(x_i-x_j)+\sum_{i=1}^n\sum_{y\in\gamma_{{\Lambda_m}\setminus\Lambda}}\!\!\phi(x_i-y)\Big),v(x_k)\!\!\bigg)_{\mathbb{R}^d}\\ \times\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots x_n\}\big)\Big)\varrho^{\otimes n}(x_1,\ldots,x_n)\\ +\Big(\nabla_{x_k}\varrho^{\otimes n}(x_1,\ldots x_n),v(x_k)\Big)_{\mathbb{R}^d}\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots, x_n\}\big)\Big)\\ +\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\,\varrho^{\otimes n}(x_1,\ldots,x_n)\,\text{div~}v(x_k)\Bigg)\,dx_1\ldots dx_n\\ =-\lim_{m\to\infty}\int_{\Lambda^n}g_F\bigg(\sum_{j=1}^n f_1(x_j),\ldots, \sum_{j=1}^n f_N(x_j)\bigg)\\ \times\Bigg(\Bigg(\sum_{1\le i<j}^n\Big(\nabla\phi(x_i-x_j),v(x_i)-v(x_j)\Big)_{\mathbb{R}^d}+\sum_{i=1}^n\sum_{y\in\gamma_{{\Lambda_m}\setminus\Lambda}}\!\!\Big(\nabla\phi(x_i-y),v(x_i)\Big)_{\mathbb{R}^d}\Bigg)\\ \times\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\varrho^{\otimes n}(x_1,\ldots,x_n)\\ +\Big(\sum_{i=1}^n\Big(-\nabla\phi(x_i),v(x_i)\Big)_{\mathbb{R}^d}\varrho^{\otimes n}(x_1,\ldots, x_n)\Big)\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\\ +\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\,\varrho^{\otimes n}(x_1,\ldots, x_n)\,\sum_{i=1}^n\text{div~}v(x_i)\Bigg)\,dx_1\ldots dx_n\\ =-\lim_{m\to\infty}\int_{\Lambda^n}\Bigg(F(\{x_1,\ldots,x_n\})\Bigg(\bigg(\sum_{1\le i<j}^n\Big(\nabla\phi(x_i-x_j),v(x_i)-v(x_j)\Big)_{\mathbb{R}^d}\\ +\!\!\sum_{i=1}^n\sum_{y\in\gamma_{{\Lambda_m}\setminus\Lambda}}\!\!\Big(\nabla\phi(x_i-y),v(x_i)\Big)_{\mathbb{R}^d}\bigg)-\bigg(\sum_{i=1}^n\Big(\nabla\phi(x_i),v(x_i)\Big)_{\mathbb{R}^d}\bigg)+\sum_{i=1}^n\text{div~}v(x_i)\Bigg)\\ \times\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda_m}\setminus\Lambda}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\,\varrho^{\otimes n}(x_1,\ldots,x_n)\,dx_1\ldots dx_n\\ =-\int_{\Lambda^n}\Bigg(F(\{x_1,\ldots,x_n\})\bigg(\sum_{1\le i<j}^n\Big(\nabla\phi(x_i-x_j),v(x_i)-v(x_j)\Big)_{\mathbb{R}^d}\\ +\!\!\sum_{i=1}^n\sum_{y\in\gamma_{{\Lambda}^c}}\!\!\Big(\nabla\phi(x_i-y),v(x_i)\Big)_{\mathbb{R}^d}-\sum_{i=1}^n\Big(\nabla\phi(x_i),v(x_i)\Big)_{\mathbb{R}^d} +\sum_{i=1}^n\text{div~}v(x_i)\bigg)\Bigg)\\ \times\exp\Big(\!-\!E_{\scriptscriptstyle{\Lambda}}\big(\gamma_{{\Lambda}^c}\!\cup\!\{x_1,\ldots,x_n\}\big)\Big)\,\varrho^{\otimes n}(x_1,\ldots,x_n)\,dx_1\ldots dx_n.\end{gathered}$$ In the last step we have used (LS). Thus by (\[calcintbyp\]), Lemma \[lemconv\] and Definition \[BL2\] we obtain that (\[numerator\]) equals $$\begin{aligned} \int_\Gamma\!\!\!\frac{\int_{\Lambda^n}\!\!FB_v^{\phi,\mu}(\gamma_{\Lambda^c}\!\!\cup\!\{x_1,\!\ldots\!, x_{n}\})\!\exp\!\!\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda}^c}\!\cup\!\{x_1,\!\ldots\!,x_{n}\})\right)\!\!\varrho^{\otimes n} dx_1\!\ldots\! 
dx_{n}}{\int_{\Lambda^{n}}\exp\left(\!-\!E_{\scriptscriptstyle{\Lambda}}(\gamma_{{\Lambda}^c}\!\cup\!\{x_1,\!\ldots\!,x_{n}\})\right)\,\varrho^{\otimes n}\,dx_1\!\ldots\! dx_{n}}d\mu(\gamma).\end{aligned}$$ Therefore, $$\begin{aligned} \label{equintbyparts} \int_\Gamma\nabla_v^\Gamma F\,d\mu(\gamma)=-\int_\Gamma\hat{\Pi}_{\scriptscriptstyle{\Lambda}}^{\scriptscriptstyle{\sigma,\phi}}(FB_v^{\phi,\mu})\,d\mu(\gamma)=-\int_\Gamma FB_v^{\phi,\mu}\,d\mu(\gamma).\end{aligned}$$ By the product rule for $\nabla_v^\Gamma$ on $\Gamma$ we obtain $$\begin{aligned} \int_\Gamma\nabla_v^\Gamma(FG)\,d\mu(\gamma)=\int_\Gamma\nabla_v^\Gamma F\,G\,d\mu(\gamma)+\int_\Gamma F\,\nabla_v^\Gamma G\,d\mu(\gamma)\end{aligned}$$ and by (\[equintbyparts\]) $$\begin{aligned} -\int_\Gamma FGB_v^{\phi,\mu}\,d\mu(\gamma)=\int_\Gamma\nabla_v^\Gamma F\,G\,d\mu(\gamma)+\int_\Gamma F\,\nabla_v^\Gamma G\,d\mu(\gamma).\end{aligned}$$ For $V:=\sum_{i=1}^N F_iv_i\in\mathcal{FV}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ we define $$\begin{aligned} \label{equdiv} \text{div}^{\scriptscriptstyle{\Gamma,{\mu}}} V:=\sum_{i=1}^N\left(\nabla^\Gamma_{v_i} F_i+B^{\phi,\mu}_{{v_i}}F_i\right)\end{aligned}$$ and for $F\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ $$\begin{aligned} \label{equL} L^{\scriptscriptstyle{\Gamma,\mu}} F:=\text{div}^{\scriptscriptstyle{\Gamma,{\mu}}} \nabla^\Gamma F.\end{aligned}$$ Note that $\nabla^\Gamma F\in\mathcal{FV}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$, since $$\begin{aligned} (\nabla^\Gamma F)(\gamma,x)=\sum_{i=1}^N\partial_i g_F(\langle f_1,\gamma\rangle,\ldots\langle f_N,\gamma\rangle)\nabla f_i(x),\quad\gamma\in\Gamma,\quad x\in\mathbb{R}^d.\end{aligned}$$ \[corintbyparts\] Under the assumptions of Theorem \[thmintbyparts\] we have for all\ $F\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma),~V\in\mathcal{FV}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ $$\begin{aligned} \int_\Gamma\left(\nabla^\Gamma F,V\right)_{T_\gamma\Gamma}\,d\mu(\gamma)=-\int_\Gamma F\,\text{div}^{\scriptscriptstyle{\Gamma,\mu}} V\,d\mu(\gamma).\end{aligned}$$ Let $F\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma),~V\in\mathcal{FV}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$. Hence $V(\gamma)=\sum_{i=1}^N G_i(\gamma)v_i$ for all $\gamma\in\Gamma$ and for some $G_i\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma),~v_i\in V_0(\mathbb{R}^d),~1\le i\le N$. By (\[equtbundle\]) $$\begin{aligned} \int_\Gamma\left(\nabla^\Gamma F,V\right)_{T_\gamma\Gamma}\,d\mu(\gamma)=\sum_{i=1}^N\int_\Gamma\nabla_{v_i}^\Gamma F\,G_i\,d\mu(\gamma).\end{aligned}$$ Now we apply Theorem \[thmintbyparts\] and by (\[equdiv\]) the statement follows. 
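The drift $B_v^{\phi,\mu}$ is the configuration-space analogue of the logarithmic derivative appearing in the classical integration by parts formula for a weighted measure on $\mathbb{R}^d$: for $dm=\exp(-\phi(x))\,dx$ one has $\int (\nabla f,v)_{\mathbb{R}^d}\,g\,dm=-\int f\,(\nabla g,v)_{\mathbb{R}^d}\,dm-\int f\,g\,\big(\text{div~}v-(\nabla\phi,v)_{\mathbb{R}^d}\big)\,dm$, and the proof of Theorem \[thmintbyparts\] assembles such contributions over all particles, all pairs and the configuration outside $\Lambda$. The following minimal Python sketch checks only this one-particle prototype, in $d=1$, with $\phi(x)=x^2$ and toy functions of our choosing, hence without any pair interaction; it illustrates the mechanism, not the configuration-space formula itself or its sign conventions.

```python
import numpy as np

# One-particle prototype on R for the weighted measure dm = exp(-phi) dx:
#   int f' v g dm = - int f g' v dm - int f g (v' - phi' v) dm.
phi, dphi = (lambda x: x**2), (lambda x: 2.0 * x)
f, df = (lambda x: x + 1.0), (lambda x: np.ones_like(x))
g, dg = np.cos, (lambda x: -np.sin(x))
v, dv = (lambda x: np.sin(2.0 * x) + 0.3), (lambda x: 2.0 * np.cos(2.0 * x))

x = np.linspace(-8.0, 8.0, 400_001)
dx = x[1] - x[0]
weight = np.exp(-phi(x))
integral = lambda h: np.sum(h * weight) * dx    # crude quadrature; the tails are negligible

lhs = integral(df(x) * v(x) * g(x))
rhs = -integral(f(x) * dg(x) * v(x)) - integral(f(x) * g(x) * (dv(x) - dphi(x) * v(x)))
print(lhs, rhs)                                 # agree up to quadrature error
```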
Infinite Interacting Particle Systems {#secipp} ===================================== Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (D$\text{L}^\text{2}$) and (LS).\ The gradient stochastic dynamics with additional drift ------------------------------------------------------ We start with $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{gsdad}}(F,G):=\int_\Gamma\left(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\right)_{T_\gamma\Gamma}\,d\mu(\gamma),\quad F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma).\end{aligned}$$ Our aim is to show that the closure $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}))$ of $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma))$ is a conservative, local, quasi-regular Dirichlet form. By definition it is the classical gradient Dirichlet form on $L^2(\Gamma,\mu)$, but in our situation $\mu$ is a grand canonical Gibbs measure corresponding to the intensity measure $\sigma=z\,\exp(-\phi)\,dx,~0<z<\infty$. This is different to the classical situation, where grand canonical Gibbs measures $\mu$ corresponding to $\sigma=z\,dx,~0<z<\infty$, are considered, see e.g. [@AKR98b]. \[remsymbimgsd\] $\left(\nabla^\Gamma F,\nabla^\Gamma G\right)_{T_\cdot\Gamma} \in L^1(\Gamma,\mu)$ because $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. Due to Theorem \[thmintbyparts\] we have that $\nabla^\Gamma$ respects the $\mu$-classes $\mathcal{F}C_b^{\infty,\mu}(C^\infty_0(\mathbb{R}^d),\Gamma)$ determined by\ $\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$, i.e., $\nabla^\Gamma F=\nabla^\Gamma G~\mu$-a.e provided $F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ satisfy $F=G~\mu$-a.e.. Furthermore, it is easy to check that the $\mu$-equivalence classes $\mathcal{FV}C_b^{\infty,\mu}(C^\infty_0(\mathbb{R}^d),\Gamma)$ determined by $\mathcal{FV}C_b^{\infty}(C^\infty_0(\mathbb{R}^d),\Gamma)$ are dense in $L^2(\mathbb{R}^d\to\mathbb{R}^d,\mu)$. Hence $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},\mathcal{F}C_b^{\infty,\mu}(C^\infty_0(\mathbb{R}^d),\Gamma)\right)$ is a densely defined positive definite symmetric bilinear form on $L^2(\Gamma,\mu)$. The major part of the analysis (concerning closability) is already done by the derivation of the corresponding integration by parts formula in Section \[secintbp1\]. \[corgen\] Under the assumptions of Theorem \[thmintbyparts\]. We have $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}(F,G)=\int_\Gamma\left(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\right)_{T_\gamma\Gamma}\,d\mu(\gamma)=\int_\Gamma -L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}F\,G\,d\mu\end{aligned}$$ for all $F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$. 
In particular, $$\begin{gathered} L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}} F(\gamma)=\sum_{i,j=1}^N\partial_i\partial_jg_{\scriptscriptstyle{F}}\Big(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\Big)\left\langle\Big(\nabla f_i,\nabla f_j\Big)_{\mathbb{R}^d},\gamma\right\rangle\\ +\sum_{j=1}^N\partial_j g_{\scriptscriptstyle{F}}\Big(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\Big)\bigg(\langle\Delta f_j,\gamma\rangle+\left\langle\Big(\nabla\phi,\nabla f_j\Big)_{\mathbb{R}^d},\gamma\right\rangle\\-\sum_{\{x,y\}\subset\gamma}\Big(\nabla\phi(x-y),\nabla f_j(x)-\nabla f_j(y)\Big)_{\mathbb{R}^d}\bigg)\end{gathered}$$ $\mbox{for }\mu\mbox{-a.e.~}\gamma\in\Gamma\mbox{ and }F\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$. Apply Corollary \[corintbyparts\] with $V:=\nabla^\Gamma G$. Then the first assertion follows by (\[equL\]). The second we obtain by direct calculations using (\[equL\]) and (\[defgrad\]). In the sequel we denote by $\ddot{\Gamma}\subset\mathcal{M}_p(\mathbb{R}^d)$ the space of integer valued, positive Radon measures. Note that $\ddot{\Gamma}\supset\Gamma$, since $$\begin{aligned} \Gamma=\left\{\gamma\in\ddot{\Gamma}\,\left|\,\max_{x\in\mathbb{R}^d}\gamma(\{x\})\le 1\right.\right\}. \end{aligned}$$ Clearly, $\nabla^\Gamma$ extends to a linear operator on $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})$. We denote these extension by the same symbol. Furthermore, note that since $\Gamma\subset\ddot{\Gamma}$ and $\mathcal{B}(\ddot{\Gamma})\cap\Gamma=\mathcal{B}(\Gamma)$ we can consider $\mu$ as a measure on $(\ddot{\Gamma},\mathcal{B}(\ddot{\Gamma}))$ and correspondingly $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\right)$ is a Dirichlet form on $L^2(\ddot{\Gamma},\mu)$. In particular, we have that $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})$ is the closure of $\mathcal{F}C_b^{\infty,\mu}(C_0^{\infty}(\mathbb{R}^d),\ddot{\Gamma})$ with respect to the norm $\sqrt{{\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}}_1}$, where $$\begin{aligned} {{\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}}_1}(F):={\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}(F,F)+(F,F)_{L^2(\ddot{\Gamma},\mu)}},\quad F\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}).\end{aligned}$$ The corresponding generator of the Dirichlet form can also be considered as linear operator on $L^2(\ddot{\Gamma},\mu)$. \[thmform1\] Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (D$\text{L}^\text{2}$) and (LS). Let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. Then 1. 
$\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},\mathcal{F}C_b^{\infty,\mu}(C^\infty_0(\mathbb{R}^d),\Gamma)\right)$ is closable on $L^2(\Gamma,\mu)$ and its closure $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\right)$ is a symmetric Dirichlet form which is conservative, i.e., $1\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}),~\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}(1,1)=0$. Its generator, denoted by $H^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$, is the Friedrichs’ extension of $-L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$. 2. $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\right)$ is quasi-regular on $L^2(\ddot{\Gamma},\mu)$. 3. $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\right)$ is local, i.e., $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}(F,G)=0$ provided $F,G\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})$ with\ $\text{supp}(|F|\cdot\mu)\cap\text{supp}(|G|\cdot\mu)=\varnothing$. $ $ 1. By Corollary \[corgen\] we have closability and the last part of the assertion. The Dirichlet property immediately follows from the chain rule for $\nabla^\Gamma$ on $\mathcal{F}C_b^{\infty}(C^\infty_0(\mathbb{R}^d),\Gamma)$ and the conservativity is obvious. (We refer to [@MaRo92 Chap. I and Chap. II, Sect. 2,3] for the terminology and details.) 2. This is a special case of [@MaRo00 Coro. 4.9]. 3. Since $\nabla_\Gamma$ satisfies the product rule on bounded functions in $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})$ the proof is exactly the same as in [@MaRo92 Chap. V, Exam. 1.12(ii)]. \[thmexpromgsd\] Suppose the assumptions of Theorem \[thmform1\]. Then 1. there exists a conservative diffusion process $$\begin{aligned} \mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}=\left(\mathbf{{\Omega}},\mathbf{{F}}^{\scriptscriptstyle{\text{gsdad}}},(\mathbf{{F}}^{\scriptscriptstyle{\text{gsdad}}}_t)_{t\ge 0},(\mathbf{X}^{\scriptscriptstyle{\text{gsdad}}}_t)_{t\ge 0},(\mathbf{{P}}^{\scriptscriptstyle{\text{gsdad}}}_\gamma)_{\gamma\in\ddot{\Gamma}}\right)\end{aligned}$$ on $\ddot{\Gamma}$ which is properly associated with $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\right)$, i.e., for all ($\mu$-versions of) $F\in L^2(\ddot{\Gamma},\mu)$ and all $t>0$ the function $$\begin{aligned} \gamma\mapsto p^{\scriptscriptstyle{\text{gsdad}}}_t F(\gamma):=\int_{{\mathbf{\Omega}}}F({\mathbf{X}}^{\scriptscriptstyle{\text{gsdad}}}_t)\,d{\mathbf{{P}}^{\scriptscriptstyle{\text{gsdad}}}}_\gamma,\quad\gamma\in\ddot{\Gamma},\end{aligned}$$ is an $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$-quasi-continuous version of $\exp(-t {H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})F$. 
$\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ is up to $\mu$-equivalence unique (cf. [@MaRo92 Chap. IV, Sect. 6]). In particular, $\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\scriptscriptstyle{\text{gsdad}}}}$ is $\mu$-symmetric, i.e., $$\begin{aligned} \int_{\ddot{\Gamma}} G\,p^{\scriptscriptstyle{\text{gsdad}}}_t F\,d\mu(\gamma)=\int_{\ddot{\Gamma}}F\,p^{\scriptscriptstyle{\text{gsdad}}}_t G\,d\mu(\gamma)\quad\mbox{for all }F,G:\ddot{\Gamma}\to\mathbb{R_+},~\mathcal{B}(\ddot{\Gamma})\mbox{-measurable}\end{aligned}$$ and has $\mu$ as invariant measure. 2. $\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ from (i) is the (up to $\mu$-equivalence, cf. [@MaRo92 Def. 6.3]) unique diffusion process having $\mu$ as invariant measure and solving the martingale problem for\ $\left(-{H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}},D({H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\right)$, i.e., for all $G\in D({H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}})\supset\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ $$\begin{aligned} \widetilde{G}(\mathbf{X}^{\scriptscriptstyle{\text{gsdad}}}_t)-\widetilde{G}(\mathbf{X}^{\scriptscriptstyle{\text{gsdad}}}_0)+\int_0^t {H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}} G(\mathbf{{X}}^{\scriptscriptstyle{\text{gsdad}}}_t)\,ds,\quad t\ge 0,\end{aligned}$$ is an $(\mathbf{{F}}^{\scriptscriptstyle{\text{gsdad}}}_t)_{t\ge 0}$-martingale under $\mathbf{{P}}^{\scriptscriptstyle{\text{gsdad}}}_\gamma$ (hence starting at $\gamma$) for $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$-q.a. $\gamma\in\ddot{\Gamma}$. (Here $\widetilde{G}$ denotes a quasi-continuous version of $G$, cf. [@MaRo92 Chap. IV, Prop.3.3].) $ $ 1. By Theorem \[thmform1\] the proof follows directly from [@MaRo92 Chap. V, Theo. 1.11]. 2. This follows immediately by [@MR1335494 Theo. 3.5]. \[remexceptmgsd\] 1. For $d\ge 2$ an argumentation as in the proof of [@RS98 Prop. 1] together with an argumentation as in the proof of [@RS98 Coro. 1] gives us that under our assumptions the set $\ddot{\Gamma}\setminus\Gamma$ is $\mathcal{E}_{\scriptscriptstyle{\text{gsdad}}}^{\scriptscriptstyle{\Gamma,\mu}}$-exceptional. Therefore, the process $\mathbf{M}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ from Theorem \[thmexpromgsd\] lives on the smaller space $\Gamma$. 2. We call the diffusion process $\mathbf{M}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}$ from Theorem \[thmexpromgsd\] *gradient stochastic dynamics with additional drift*. The environment process {#secenv} ----------------------- The following statement is a special case of an integration by parts formula shown in [@CoKu09], which holds for a non-empty subset $\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{ibp}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$ of $\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{Rb}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi))$, $0<z<\infty$. \[corintbp2\]Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (D$\text{L}^\text{2}$) and (LS). Let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{ibp}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. 
Then for $F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ we have $\left(\nabla^\Gamma_\gamma F(\gamma),\nabla^\Gamma_\gamma G(\gamma)\right)_{\scriptscriptstyle{\mathbb{R}^d}}$ $\in L^1(\Gamma,\mu)$. Furthermore, $$\begin{gathered} \int_{\Gamma}\left(\nabla^\Gamma_\gamma F(\gamma),\nabla^\Gamma_\gamma G(\gamma)\right)_{\scriptscriptstyle{\mathbb{R}^d}}\,d\mu(\gamma)\\ =-\int_\Gamma\Bigg(\sum_{i,j=1}^N\partial_i\partial_j g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\Big(\left\langle\nabla f_i,\gamma\right\rangle,\left\langle\nabla f_j,\gamma\right\rangle\Big)_{\mathbb{R}^d}\\ +\sum_{j=1}^N \partial_j g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\bigg(\left\langle\Delta f_j,\gamma\right\rangle-\Big(\left\langle\nabla\phi,\gamma\right\rangle,\left\langle\nabla f_j,\gamma\right\rangle\Big)_{\mathbb{R}^d}\bigg)\Bigg)\\ \times g_{\scriptscriptstyle{G}}\left(\langle g_1,\gamma\rangle,\ldots,\langle g_M,\gamma\rangle\right)\,d\mu(\gamma).\end{gathered}$$ Next we consider $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(F,G)=\mathcal{E}_{\scriptscriptstyle{\text{gsdad}}}^{\scriptscriptstyle{\Gamma,\mu}}(F,G)+\int_{\Gamma}\left(\nabla^\Gamma_\gamma F(\gamma),\nabla^\Gamma_\gamma G(\gamma)\right)_{\scriptscriptstyle{\mathbb{R}^d}}\,d\mu(\gamma),\quad F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma).\end{aligned}$$ \[remsymbienv\] Using Remark \[remsymbimgsd\] and Lemma \[corintbp2\] we have that $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},\mathcal{F}C_b^{\infty,\mu}(C^\infty_0(\mathbb{R}^d),\Gamma)\right)$ is a densely defined, positive definite, symmetric bilinear form on $L^2(\Gamma,\mu)$. \[corgenE2n\] Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (D$\text{L}^\text{2}$) and (LS). Let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{ibp}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. Then for all $F,G\in\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ we have $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(F,G)=\int_\Gamma\left(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\right)_{T_\gamma\Gamma}+\left(\nabla^\Gamma_\gamma F(\gamma),\nabla^\Gamma_\gamma G(\gamma)\right)_{\scriptscriptstyle{\mathbb{R}^d}}\,d\mu(\gamma)=\int_\Gamma -L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F\,G\,d\mu.\end{aligned}$$ In particular, $$\begin{gathered} L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F(\gamma)=L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{gsdad}}}F(\gamma) +\sum_{i,j=1}^N\partial_i\partial_j g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\big(\langle\nabla f_i,\gamma\rangle,\langle\nabla f_j,\gamma\rangle\big)_{\mathbb{R}^d}\\ +\sum_{j=1}^N \partial_j g_{\scriptscriptstyle{F}}\left(\langle f_1,\gamma\rangle,\ldots,\langle f_N,\gamma\rangle\right)\Big(\langle\Delta f_j,\gamma\rangle-\big(\langle\nabla\phi,\gamma\rangle,\langle\nabla f_j,\gamma\rangle\big)_{\mathbb{R}^d}\Big)\quad\mbox{for }\mu\mbox{-a.e.~}\gamma\in\Gamma.\end{gathered}$$ Combining Corollary \[corgen\] and Lemma \[corintbp2\] the statement follows. 
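The additional term in $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$ differentiates all points of the configuration simultaneously in a fixed direction of $\mathbb{R}^d$: reading off Lemma \[corintbp2\], for a cylinder function $F(\gamma)=g(\langle f,\gamma\rangle)$ the two squared gradients reduce to $g'(\langle f,\gamma\rangle)^2\langle(\nabla f,\nabla f)_{\mathbb{R}^d},\gamma\rangle$ and $g'(\langle f,\gamma\rangle)^2\big(\langle\nabla f,\gamma\rangle,\langle\nabla f,\gamma\rangle\big)_{\mathbb{R}^d}$, both finite sums at a finite configuration. The following minimal Python sketch evaluates them for toy choices of $g$, $f$ and of the configuration in $d=1$ ($f$ is taken Gaussian rather than compactly supported, which does not matter for finitely many points), and also checks the elementary Cauchy-Schwarz bound $\langle\nabla f,\gamma\rangle^2\le|\gamma_\Lambda|\,\langle(\nabla f)^2,\gamma\rangle$ that is used later in the quasi-regularity proof of Lemma \[lemE22\].

```python
import numpy as np

# F(gamma) = g(<f, gamma>) with d = 1 and N = 1
f, df = (lambda x: np.exp(-x**2)), (lambda x: -2.0 * x * np.exp(-x**2))
g, dg = np.tanh, (lambda s: 1.0 - np.tanh(s)**2)

gamma = np.array([-1.5, -0.2, 0.4, 1.1, 2.0])     # a finite configuration in R

s = f(gamma).sum()                                 # <f, gamma>
gsdad_part = dg(s)**2 * (df(gamma)**2).sum()       # (grad^Gamma F, grad^Gamma F)_{T_gamma Gamma}
env_extra = (dg(s) * df(gamma).sum())**2           # (grad^Gamma_gamma F, grad^Gamma_gamma F)_{R^d}
print(gsdad_part, env_extra)

# Cauchy-Schwarz: <f', gamma>^2 <= |gamma_Lambda| <(f')^2, gamma>, so the additional
# term is bounded by |gamma_Lambda| times the first one for this cylinder function.
n = len(gamma)
assert df(gamma).sum()**2 <= n * (df(gamma)**2).sum() + 1e-12
```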
\[lemE21\] $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},\mathcal{F}C_b^{\infty,\mu}(C^\infty_0(\mathbb{R}^d),\Gamma)\right)$ is closable on $L^2(\Gamma,\mu)$ and its closure $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$ is a symmetric Dirichlet form which is conservative, i.e., $1\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ and $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(1,1)=0$. Its generator, denoted by ${H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$, is the Friedrichs’ extension of $-L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$. By Corollary \[corgenE2n\] we have closability and the last part. The Dirichlet property immediately follows since $\nabla^\Gamma$ and $\nabla^\Gamma_\gamma$ fulfill the chain rule on $\mathcal{F}C_b^{\infty}(C^\infty_0(\mathbb{R}^d),\Gamma)$. Conservativity is obvious. Clearly, $\nabla^\Gamma$ and $\nabla^\Gamma_\gamma$ extend to linear operators on $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$. We denote these extensions by the same symbols. Furthermore, note that since $\Gamma\subset\ddot{\Gamma}$ and $\mathcal{B}(\ddot{\Gamma})\cap\Gamma=\mathcal{B}(\Gamma)$ we can consider $\mu$ as a measure on $(\ddot{\Gamma},\mathcal{B}(\ddot{\Gamma}))$ and correspondingly $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$ is a Dirichlet form on $L^2(\ddot{\Gamma},\mu)$. In particular, we have that $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ is the closure of $\mathcal{F}C_b^{\infty,\mu}(C_0^{\infty}(\mathbb{R}^d),\ddot{\Gamma})$ with respect to the norm $\sqrt{{\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}}_1}$, where $$\begin{aligned} {{\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}}_1}(F):={\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(F,F)+(F,F)_{L^2(\ddot{\Gamma},\mu)}},\quad F\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}).\end{aligned}$$ The corresponding generator of the Dirichlet form can also be considered as linear operator on $L^2(\ddot{\Gamma},\mu)$. \[lemE22\] $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$ is quasi-regular on $L^2(\ddot{\Gamma},\mu)$. The Dirichlet form $(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})))$ is given by $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(F,G):=\int_\Gamma S^\Gamma(F,G)\,d\mu,\end{aligned}$$ where $$\begin{gathered} S^\Gamma(F,G):=S^\Gamma_0(F,G)+\left(\nabla^\Gamma_\gamma F(\gamma),\nabla^\Gamma_\gamma G(\gamma)\right)_{\scriptscriptstyle{\mathbb{R}^d}}\quad\mbox{with}\\ S^\Gamma_0(F,G):=\left(\nabla^\Gamma F,\nabla^\Gamma G\right)_{T_\gamma\Gamma},\quad F,G\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}).\end{gathered}$$ To prove quasi-regularity analogously to [@MaRo00 Prop. 
4.1], it suffices to show that there exists a bounded, complete metric $\bar{\rho}$ on $\ddot{\Gamma}$ generating the vague topology such that $\bar{\rho}(\cdot,\gamma_0)\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ for all $\gamma_0\in\ddot\Gamma$ and $$\begin{aligned} S^\Gamma(\bar{\rho}(\cdot,\gamma_0),\bar{\rho}(\cdot,\gamma_0))\le\eta\quad\mu-\mbox{a.e.}\end{aligned}$$ for some $\eta\in L^1(\ddot{\Gamma},\mu)$ (independent of $\gamma_0$). The proof below is a modification of [@MaRo00 Prop. 4.8]. Hence we also use the notation proposed there. Thus $(B_k)_{k\in\mathbb{N}}$ is an exhausting sequence, i.e. $(B_k)_{k\in\mathbb{N}}$ is an increasing sequence of open sets such that $\bigcup_{k\in\mathbb{N}}B_k=\mathbb{R}^d$. Furthermore, since $B^{\frac{1}{2}}_k\subset B_{k+1}$ for all $k\in\mathbb{N}$, $(B_k)_{k\in\mathbb{N}}$ is a well-exhausting sequence in the sense of [@MaRo00] with $\delta_k=\frac{1}{2}$ for all $k\in\mathbb{N}$. Here $B^{\frac{1}{2}}_k:=B_{k+\frac{1}{2}}$. For each $k\in\mathbb{N}$ we define $$\begin{aligned} g_k(x):=g_{B_k,\frac{1}{2}}(x):=\frac{2}{3}\left(\frac{1}{2}-\text{dist}(x,B_k)\wedge\frac{1}{2}\right),\quad x\in\mathbb{R}^d,\end{aligned}$$ and $\phi_k:=3g_k$. Furthermore, we set $S(f,g):=\big(\nabla f,\nabla g \big)_{\mathbb{R}^d}$ for $f,g\in W_0^{1,2}(\mathbb{R}^d)$, where $W_0^{1,2}(\mathbb{R}^d)$ denotes the Sobolev space of compactly supported, weakly differentiable functions in $L^2(\mathbb{R}^d,dx)$ with weak derivative again in $L^2(\mathbb{R}^d,dx)$. Due to [@MaRo00 Exam. 4.5.1] we have that [@MaRo00 Cond. (Q)] holds with $S$ as given above. Moreover, due to [@MaRo00 Lemm. 4.10] $$\begin{aligned} \phi_k g_j\in W^{1,2}_0(\mathbb{R}^d)\quad\mbox{and}\quad S(\phi_k g_j):=S(\phi_k g_j,\phi_k g_j)\le\tilde{\chi_k}\quad\mbox{for all }k,j\in\mathbb{N},\end{aligned}$$ where $\tilde{\chi}_{k}:=4\chi_k\Big(\sqrt{S(\chi_k)}+C\,\big(\chi_k+\sqrt{S(\chi_k)}\big)\Big)$ with $\chi_k\in C_0^\infty(\mathbb{R}^d)$ and $C\in (0,\infty)$ as in [@MaRo00 Cond. (Q)]. For any function $f\in W_0^{1,2}(\mathbb{R}^d)$ we have $\langle f,\cdot\rangle\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$, since $\mu$ fulfills a Ruelle bound. 
Hence we can consider $$\begin{aligned} S^\Gamma(\langle f,\cdot\rangle):=S^\Gamma(\langle f,\cdot\rangle,\langle f,\cdot\rangle)\end{aligned}$$ and obtain $$\begin{aligned} S^\Gamma(\langle f,\cdot\rangle)=\left\langle\big(\nabla f,\nabla f\big)_{\mathbb{R}^d},\cdot\right\rangle+\big(\langle\nabla f,\cdot\rangle,\langle\nabla f,\cdot\rangle\big)_{\mathbb{R}^d}.\end{aligned}$$ For $\gamma\in\Gamma$ and $\Lambda:=\text{supp~}f,~f\in W^{1,2}_0(\mathbb{R}^d)$, we have $$\begin{gathered} \big(\langle\nabla f,\gamma\rangle,\langle\nabla f,\gamma\rangle\big)_{\mathbb{R}^d}=\sum_{x\in\gamma}\sum_{y\in\gamma}\big(\nabla f(x),\nabla f(y)\big)_{\mathbb{R}^d}\\ \le\sum_{x\in\gamma}\sum_{y\in\gamma}\sqrt{\big(\nabla f(x),\nabla f(x)\big)_{\mathbb{R}^d}}\cdot\sqrt{\big(\nabla f(y),\nabla f(y)\big)_{\mathbb{R}^d}}\\ =\sum_{x\in\gamma}\sqrt{\big(\nabla f(x),\nabla f(x)\big)_{\mathbb{R}^d}}\cdot\sum_{y\in\gamma}\sqrt{\big(\nabla f(y),\nabla f(y)\big)_{\mathbb{R}^d}}\\ =\left|\gamma_\Lambda\right|^2\left(\frac{1}{\left|\gamma_\Lambda\right|}\sum_{x\in\gamma}\sqrt{\big(\nabla f(x),\nabla f(x)\big)_{\mathbb{R}^d}}\right)^2\le\left|\gamma_\Lambda\right|\sum_{x\in\gamma}\big(\nabla f(x),\nabla f(x)\big)_{\mathbb{R}^d}\\ =\left|\gamma_\Lambda\right|\left\langle\big(\nabla f,\nabla f\big)_{\mathbb{R}^d},\gamma\right\rangle,\end{gathered}$$ where we have used Jensen’s inequality. Finally, $$\begin{aligned} S^\Gamma(\langle f,\cdot\rangle)\le \left(1+\left|\gamma_\Lambda\right|\right)\cdot\left\langle\big(\nabla f,\nabla f\big)_{\mathbb{R}^d},\cdot\right\rangle=\left(1+\left|\gamma_\Lambda\right|\right)\cdot\left\langle S(f),\cdot\right\rangle,\quad \Lambda=\text{supp~}f,~f\in W^{1,2}_0(\mathbb{R}^d).\end{aligned}$$ Next we fix a function $\zeta\in C_b^\infty(\mathbb{R})$ such that $0\le \zeta\le 1$ on $[0,\infty)$, $\zeta(t)=t$ on $\left[-\frac{1}{2},\frac{1}{2}\right]$, $\zeta'>0$ and $\zeta''\le 0$. Here $C_b^\infty(\mathbb{R})$ denotes the set of bounded, continuous functions on $\mathbb{R}^d$ which are infinitely often continuously differentiable. Using an argumentation as in [@RS95 Lemm. 3.2] we have that for any fixed $\gamma_0\in\ddot{\Gamma}$ and for any $k,n\in\mathbb{N}$ the restriction to $\Gamma$ of the function $$\begin{aligned} \zeta\left(\sup_{j\le n}\big|\langle\phi_k\,g_j,\cdot\rangle-\langle\phi_k\,g_j,\gamma_0\rangle\big|\right)\end{aligned}$$ belongs to $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$. Furthermore, we obtain $$\begin{gathered} S^\Gamma\left(\zeta\left(\sup_{j\le n}\big|\langle\phi_k\,g_j,\cdot\rangle-\langle\phi_k\,g_j,\gamma_0\rangle\big|\right)\right)\\ \le\left(1+N_{B_{k+1}}(\cdot)\right) S^\Gamma_0\left(\zeta\left(\sup_{j\le n}\big|\langle\phi_k\,g_j,\cdot\rangle-\langle\phi_k\,g_j,\gamma_0\rangle\big|\right)\right)\quad\mu\mbox{-a.e.},\end{gathered}$$ since $\phi_k g_j,~k,j\in\mathbb{N}$, having support in $B_{k+1}$. Here, as usual, $N_B:\Gamma\to \mathbb{N}_0\cup\{+\infty\}$ is given by $N_B(\gamma):=\gamma(B)$, where $B\in\mathcal{B}(\mathbb{R}^d)$. 
Due to [@MaRo00 (4.7)] we have $$\begin{aligned} S^\Gamma_0\left(\zeta\left(\sup_{j\le n}\big|\langle\phi_k\,g_j,\cdot\rangle-\langle\phi_k\,g_j,\gamma_0\rangle\big|\right)\right)\le\left\langle\tilde{\chi}_k^2,\cdot\right\rangle\quad\mu\mbox{-a.e.}\end{aligned}$$ Thus $$\begin{aligned} \label{esti} S^\Gamma\left(\zeta\left(\sup_{j\le n}\big|\langle\phi_k\,g_j,\cdot\rangle-\langle\phi_k\,g_j,\gamma_0\rangle\big|\right)\right)\le \left(1+N_{B_{k+1}}(\cdot)\right)\left\langle\tilde{\chi}_k^2,\cdot\right\rangle\quad\mu\mbox{-a.e.}\end{aligned}$$ For $\gamma,\gamma_0\in\ddot{\Gamma}$ and $k\in\mathbb{N}$ we set $$\begin{aligned} F_k(\gamma,\gamma_0):=\zeta\Big(\sup_{j\in\mathbb{N}}\big|\langle\phi_k g_j,\gamma\rangle-\langle\phi_k g_j,\gamma_0\rangle\big|\Big)\end{aligned}$$ and, for fixed $\gamma_0\in\ddot{\Gamma}$, $$\begin{aligned} \zeta\left(\sup_{j\le n}\big|\langle\phi_k\,g_j,\gamma\rangle-\langle\phi_k\,g_j,\gamma_0\rangle\big|\right)\to F_k(\gamma,\gamma_0)\quad\mbox{as }n\to\infty\mbox{ for all }\gamma\in\ddot{\Gamma},\end{aligned}$$ and in $L^2(\ddot{\Gamma},\mu)$. Hence by (\[esti\]) and the Banach-Saks theorem, $F_k(\cdot,\gamma_0)\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ and $$\begin{aligned} \label{esti2} S^\Gamma(F_k(\cdot,\gamma_0))\le\left(1+N_{B_{k+1}}(\cdot)\right)\left\langle\tilde{\chi}_{k}^2,\cdot\right\rangle\quad\mu\mbox{-a.e.}\end{aligned}$$ Next let us define $$\begin{gathered} c_k:=\Big(1+\frac{1}{2}\int_{\mathbb{R}^{2d}}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_2)\rho_{\mu}^{\scriptscriptstyle{(2)}}(x_1,x_2)\exp(-\phi)(x_1)\exp(-\phi)(x_2)\,dx_1\,dx_2\\ +\int_{\mathbb{R}^d}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1\\ +\int_{\mathbb{R}^d}\tilde{\chi}_k^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1\Big)^{-\frac{1}{2}}2^{-\frac{k}{2}},\quad k\in\mathbb{N}.\end{gathered}$$ Note that, since $\mu$ fulfills a Ruelle bound and $\phi$ is bounded from below, $(c_k)_{k\in\mathbb{N}}$ is a sequence of positive real numbers converging to $0$ as $k\to\infty$. For $\gamma_1,\gamma_2\in\ddot{\Gamma}$ we define $$\begin{aligned} \bar{\rho}(\gamma_1,\gamma_2):=\sup_{k\in\mathbb{N}} c_k\,F_k(\gamma_1,\gamma_2).\end{aligned}$$ By [@MaRo00 Theo. 3.6], $\bar{\rho}$ is a bounded, complete metric on $\ddot{\Gamma}$ generating the vague topology.
Furthermore, $$\begin{gathered} S^\Gamma(c_k\,F_k(\cdot,\gamma_0))\le 2^{-k}\Big(1+\frac{1}{2}\int_{\mathbb{R}^{2d}}\!\!\!1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_2)\rho_{\mu}^{\scriptscriptstyle{(2)}}(x_1,x_2)\exp(-\phi)(x_1)\exp(-\phi)(x_2)\,dx_1\,dx_2\\ +\int_{\mathbb{R}^d}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1 +\int_{\mathbb{R}^d}\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1\Big)^{-1}\\ \times\left(N_{B_{k+1}}(\cdot)+1\right)\left\langle\tilde{\chi}_{k}^2,\cdot\right\rangle\\ \le\sup_{k\in\mathbb{N}}\Bigg(2^{-k}\Big(1+\frac{1}{2}\int_{\mathbb{R}^{2d}}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_2)\rho_{\mu}^{\scriptscriptstyle{(2)}}(x_1,x_2)\exp(-\phi)(x_1)\exp(-\phi)(x_2)\,dx_1\,dx_2\\ +\int_{\mathbb{R}^d}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1 +\int_{\mathbb{R}^d}\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1\Big)^{-1}\\ \times\left(N_{B_{k+1}}(\cdot)+1\right)\left\langle\tilde{\chi}_{k}^2,\cdot\right\rangle\Bigg)=:\eta\quad\mu\mbox{-a.e.}\end{gathered}$$ by (\[esti2\]). Thus by [@RS95 Lemm. 3.2] we have for all $n\in\mathbb{N}$ $$\begin{aligned} S^\Gamma\left(\sup_{k\le n}c_k F_k(\cdot,\gamma_0)\right)\le\sup_{k\le n}S^\Gamma\left(c_k F_k(\cdot,\gamma_0)\right)\le\sup_{k\in\mathbb{N}}S^\Gamma\left(c_k F_k(\cdot,\gamma_0)\right)\le\eta\quad\mu\mbox{-a.e.}\end{aligned}$$ But $\sup_{k\le n}c_k\,F_k(\cdot,\gamma_0)\to\bar{\rho}(\cdot,\gamma_0)$ as $n\to\infty$ pointwisely and in $L^2(\ddot{\Gamma},\mu)$. Thus $\bar{\rho}(\cdot,\gamma_0)\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ and $S^\Gamma(\bar{\rho}(\cdot,\gamma_0))\le\eta$, by the Banach-Saks theorem, since $$\begin{gathered} \int_\Gamma\eta\,d\mu \le\sum_{k=1}^\infty 2^{-k}\Big(1+\frac{1}{2}\int_{\mathbb{R}^{2d}}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_2)\rho_{\mu}^{\scriptscriptstyle{(2)}}(x_1,x_2)\exp(-\phi)(x_1)\exp(-\phi)(x_2)\,dx_1\,dx_2\\ +\int_{\mathbb{R}^d}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1 +\int_{\mathbb{R}^d}\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1\Big)^{-1}\\ \times\Big(\frac{1}{2}\int_{\mathbb{R}^{2d}}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_2)\rho_{\mu}^{\scriptscriptstyle{(2)}}(x_1,x_2)\exp(-\phi)(x_1)\exp(-\phi)(x_2)\,dx_1\,dx_2\\ +\int_{\mathbb{R}^d}1_{B_{k+1}}(x_1)\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1 +\int_{\mathbb{R}^d}\tilde{\chi}_{k}^2(x_1)\rho_{\mu}^{\scriptscriptstyle{(1)}}(x_1)\exp(-\phi)(x_1)\,dx_1\Big)<\infty.\end{gathered}$$ \[lemE23\] $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$ is local, i.e., $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(F,G)=0$ provided $F,G\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ with\ $\text{supp}(|F|\cdot\mu)\cap\text{supp}(|G|\cdot\mu)=\varnothing$. The proof is a simple modification of the proof of [@MaRo00 Prop. 4.12], where similar arguments as in the proof of Lemma \[lemE22\] are used. 
Combining Lemmas \[lemE21\], \[lemE22\] and \[lemE23\] we obtain \[thmdirE2\] Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (D$\text{L}^\text{2}$) and (LS) and let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{ibp}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. Then $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$ is a local, quasi-regular, symmetric Dirichlet form which is conservative, i.e., $1\in D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})$ and $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}(1,1)=0$. Its generator, denoted by ${H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$, is the Friedrichs’ extension of $-L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$. \[thmexpro2\] Suppose the assumptions of Theorem \[thmdirE2\]. Then 1. there exists a conservative diffusion process $$\begin{aligned} \mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}=\left(\mathbf{{\Omega}},\mathbf{{F}}^{\scriptscriptstyle{\text{env}}},(\mathbf{{F}}^{\scriptscriptstyle{\text{env}}}_t)_{t\ge 0},(\mathbf{X}^{\scriptscriptstyle{\text{env}}}_t)_{t\ge 0},(\mathbf{{P}}^{\scriptscriptstyle{\text{env}}}_\gamma)_{\gamma\in\ddot{\Gamma}}\right)\end{aligned}$$ on $\ddot{\Gamma}$ which is properly associated with $\left(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$, i.e., for all ($\mu$-versions of) $F\in L^2(\ddot{\Gamma},\mu)$ and all $t>0$ the function $$\begin{aligned} \gamma\mapsto p_t^{\scriptscriptstyle{\text{env}}} F(\gamma):=\int_{{\mathbf{\Omega}}}F({\mathbf{X}}^{\scriptscriptstyle{\text{env}}}_t)\,d{\mathbf{{P}}}^{\scriptscriptstyle{\text{env}}}_\gamma,\quad\gamma\in\ddot{\Gamma},\end{aligned}$$ is an $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$-quasi-continuous version of $\exp(-t {H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})F$. $\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$ is up to $\mu$-equivalence unique (cf. [@MaRo92 Chap. IV, Sect. 6]). In particular, $\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\scriptscriptstyle{\text{env}}}}$ is $\mu$-symmetric, i.e., $$\begin{aligned} \int_{\ddot{\Gamma}} G\,p^{\scriptscriptstyle{\text{env}}}_t F\,d\mu(\gamma)=\int_{\ddot{\Gamma}}F\,p^{\scriptscriptstyle{\text{env}}}_t G\,d\mu(\gamma)\quad\mbox{for all }F,G:\ddot{\Gamma}\to\mathbb{R_+},~\mathcal{B}(\ddot{\Gamma})\mbox{-measurable}\end{aligned}$$ and has $\mu$ as invariant measure. 2. $\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$ from (i) is the (up to $\mu$-equivalence, cf. [@MaRo92 Def. 
6.3]) unique diffusion process having $\mu$ as invariant measure and solving the martingale problem for\ $\left(-{H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}},D({H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\right)$, i.e., for all $G\in D({H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}})\supset\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ $$\begin{aligned} \widetilde{G}(\mathbf{X}^{\scriptscriptstyle{\text{env}}}_t)-\widetilde{G}(\mathbf{X}^{\scriptscriptstyle{\text{env}}}_0)+\int_0^t {H}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}} G(\mathbf{{X}}^{\scriptscriptstyle{\text{env}}}_t)\,ds,\quad t\ge 0,\end{aligned}$$ is an $(\mathbf{{F}}_t)_{t\ge 0}$-martingale under $\mathbf{{P}}^{\scriptscriptstyle{\text{env}}}_\gamma$ (hence starting at $\gamma$) for $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$-q.a. $\gamma\in\ddot{\Gamma}$. (Here $\widetilde{G}$ denotes a quasi-continuous version of $G$, cf. [@MaRo92 Chap. IV, Prop.3.3].) <!-- --> 1. By Theorem \[thmdirE2\] the proof follows directly from [@MaRo92 Chap. V, Theo. 1.11]. 2. This follows immediately by [@MR1335494 Theo. 3.5]. \[remexcept2\] For $d\ge 2$ an argumentation as in the proofs of [@RS98 Prop. 1] and [@RS98 Coro. 1] together with a similar argumentation as in the proof of Lemma \[lemE22\] gives us that under our assumptions the set $\ddot{\Gamma}\setminus\Gamma$ is $\mathcal{E}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$-exceptional. Therefore, the process $\mathbf{M}^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}$ from Theorem \[thmexpro2\] lives on the smaller space $\Gamma$. The coupled process ------------------- Finally we construct the stochastic process $\mathbf{M}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ taking values in $\mathbb{R}^d\times\Gamma$ for $d\ge 2$ (for $d=1$ the process exists only in $\mathbb{R}^d\times\ddot{\Gamma}$), coupling the motion of the tagged particle and the motion of the environment seen from this particle. Therefore, let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{ibp}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. As test functions we consider functions $\mathfrak{F}\in C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$. Here $\otimes$ denotes the algebraic tensor product of $C^\infty_0(\mathbb{R}^d)$ and $\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$. Hence $$\begin{aligned} \label{short} \mathfrak{F}(\xi,\gamma)=\sum_{k=1}^{m_{\scriptscriptstyle{\mathfrak{F}}}}(f_k\otimes F_k)(\xi,\gamma):=\sum_{k=1}^{m_{\scriptscriptstyle{\mathfrak{F}}}}f_k(\xi)F_k(\gamma),\quad(\xi,\gamma)\in\mathbb{R}^d\times\Gamma,\end{aligned}$$ where $f_k\in C^\infty_0(\mathbb{R}^d)$, $F_k\in\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ for $k\in\{1,\ldots,m_{\scriptscriptstyle{\mathfrak{F}}}\}$ and $m_{\scriptscriptstyle{\mathfrak{F}}}\in\mathbb{N}$ depends on $\mathfrak{F}\in C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$. 
As operators on $C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ we consider $$\begin{aligned} (&{\nabla^{\Gamma}_{\gamma}}-\nabla)\mathfrak{F}(\xi,\gamma):=\sum_{k=1}^{m_{\scriptscriptstyle{\mathfrak{F}}}}\Big(f_k(\xi){\nabla^{\Gamma}_{\gamma}}F_k(\gamma)-\nabla f_k(\xi)F(\gamma)\Big)\quad\mbox{and}\label{sh1}\\ &\nabla^\Gamma \mathfrak{F}(\xi,\gamma):=\sum_{k=1}^{m_{\scriptscriptstyle{\mathfrak{F}}}}f_k(\xi)\nabla^\Gamma F_k(\gamma),\quad\mbox{for }(\xi,\gamma)\in\mathbb{R}^d\times\Gamma,\label{sh2}\end{aligned}$$ where $\mathfrak{F}\in C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$. \[simp\] Since the objects we consider are linear or bilinear, respectively, for simplicity we use $$\begin{aligned} &\mathfrak{F}(\xi,\gamma)=f(\xi)\,F(\gamma)\mbox{ instead of (\ref{short})},\\ &({\nabla^{\Gamma}_{\gamma}}-\nabla)\mathfrak{F}(\xi,\gamma)=f(\xi)\,{\nabla^{\Gamma}_{\gamma}}F(\gamma)-\nabla f(\xi)\,F(\gamma)\mbox{ instead of (\ref{sh1}) and}\\ &\nabla^\Gamma \mathfrak{F}(\xi,\gamma)=f(\xi)\,\nabla^\Gamma F(\gamma)\mbox{ instead of (\ref{sh2})}.\end{aligned}$$ Now we define on $C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ the following positive definite, symmetric bilinear form: $$\begin{gathered} \label{equcoup} \mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}(\mathfrak{F},\mathfrak{G})=\int_{\mathbb{R}^d\times\Gamma}\Big(({\nabla^{\Gamma}_{\gamma}}-\nabla)\mathfrak{F}(\xi,\gamma),({\nabla^{\Gamma}_{\gamma}}-\nabla)\mathfrak{G}(\xi,\gamma)\Big)_{\mathbb{R}^d}\\+\Big(\nabla^\Gamma \mathfrak{F}(\xi,\gamma),\nabla^\Gamma \mathfrak{G}(\xi,\gamma)\Big)_{T_\gamma\Gamma}\,d\mu(\gamma)\,d\xi,\\ \quad\mathfrak{F},\mathfrak{G}\in C_0^\infty(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma).\end{gathered}$$ Using Notation \[simp\], (\[equcoup\]) can be rewritten as $$\begin{gathered} \mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}(\mathfrak{F},\mathfrak{G})=\int_{\mathbb{R}^d\times\Gamma}f(\xi)\,g(\xi)\,\bigg(\Big(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\Big)_{T_\gamma\Gamma}+\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),{\nabla^{\Gamma}_{\gamma}}G(\gamma)\Big)_{\mathbb{R}^d}\bigg)\\ -\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla g(\xi)\Big)_{\mathbb{R}^d}f(\xi)\,G(\gamma)-\Big({\nabla^{\Gamma}_{\gamma}}G(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d}g(\xi)\,F(\gamma)\\ +F(\gamma)\,G(\gamma) \big(\nabla f(\xi),\nabla g(\xi)\big)_{\mathbb{R}^d}\,d\mu(\gamma)\,d\xi.\end{gathered}$$ \[thmexcoup\] Suppose that the pair potential $\phi$ satisfies (SS), (I), (LR), (LS) and (D$\text{L}^{\text{q}}$) for some $q>d$. Furthermore, let $\mu\in\mathcal{G}^{\scriptscriptstyle{gc}}_{\scriptscriptstyle{\text{ibp}}}(\Phi_{\scriptscriptstyle{\phi}},z\exp(-\phi)),~0<z<\infty$. 
Then\ $(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty,\mu}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ is closable in $L^2(\mathbb{R}^d\times\Gamma,dx\otimes\mu)$ and its closure\ $(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}))$ is a conservative, local, quasi-regular Dirichlet form on $L^2(\mathbb{R}^d\times \ddot{\Gamma},\mu)$. Moreover, $$\begin{aligned} \mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}(\mathfrak{F},\mathfrak{G})=\int_{\mathbb{R}^d\times\Gamma}-L^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}\mathfrak{F}\,\mathfrak{G}\,d\mu\,d\xi,\end{aligned}$$ where $$\begin{aligned} L^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}\mathfrak{F}(\xi,\gamma)=L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F(\xi,\gamma)\,f(\xi)\!-2\,\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d} \!\!\!+\sum_{x\in\gamma}\Big(\nabla\phi(x),\nabla f(\xi)\Big)_{\mathbb{R}^d}\!\!\!+\Delta f(\xi)\,F(\gamma).\end{aligned}$$ The generator of $(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}))$, denoted by ${H}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$, is the Friedrichs’ extension of $-L^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$. 
Applying Fubini’s theorem then carrying out an integration by parts, we obtain $$\begin{gathered} \mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}(\mathfrak{F},\mathfrak{G})=\int_{\mathbb{R}^d\times\Gamma}f(\xi)\,g(\xi)\,\bigg(\Big(\nabla^\Gamma F(\gamma),\nabla^\Gamma G(\gamma)\Big)_{T_\gamma\Gamma}+\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),{\nabla^{\Gamma}_{\gamma}}G(\gamma)\Big)_{\mathbb{R}^d}\bigg)\\ -\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla g(\xi)\Big)_{\mathbb{R}^d}f(\xi)\,G(\gamma)-\Big({\nabla^{\Gamma}_{\gamma}}G(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d}g(\xi)\,F(\gamma)\\ +F(\gamma)\,G(\gamma)\big(\nabla f(\xi),\nabla g(\xi)\big)_{\mathbb{R}^d}\,d\mu(\gamma)\,d\xi\\ =\int_{\mathbb{R}^d}\int_\Gamma -L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F(\gamma)\,f(\xi)\,G(\gamma)\,g(\xi)\,d\mu(\gamma)\,dx +\int_{\mathbb{R}^d}\int_\Gamma\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d}\,G(\gamma)\,g(\xi)\,d\mu(\gamma)\,d\xi\\ +\int_{\mathbb{R}^d}\int_\Gamma\Bigg(\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d}-\sum_{x\in\gamma}\Big(\nabla\phi(x),\nabla f(\xi)\Big)_{\mathbb{R}^d}F(\gamma)\Bigg)\,G(\gamma)\,g(\xi)\,d\mu(\gamma)\,d\xi \\-\int_{\mathbb{R}^d}\int_\Gamma\Delta f(\xi) F(\gamma)\, G(\gamma)\,g(\xi)\,d\mu(\gamma)\,d\xi\\ =\!\!\!\int_{\mathbb{R}^d}\int_\Gamma\Bigg(-L^{\scriptscriptstyle{\Gamma,\mu}}_{\scriptscriptstyle{\text{env}}}F(\gamma)\,f(\xi)+2\,\Big({\nabla^{\Gamma}_{\gamma}}F(\gamma),\nabla f(\xi)\Big)_{\mathbb{R}^d}\\ -\sum_{x\in\gamma}\Big(\nabla\phi(x),\nabla f(\xi)\Big)_{\mathbb{R}^d}\,F(\gamma) -\Delta f(\xi)\,F(\gamma)\Bigg)G(\gamma)\,g(\xi)\,d\mu(\gamma)\,d\xi.\end{gathered}$$ Thus we have closability. Since the operator $\nabla_\gamma^\Gamma-\nabla$ fulfills a chain rule on $C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$, the Dirichlet property follows. Furthermore, $\nabla_\gamma^\Gamma-\nabla$ satisfies the product rule for bounded functions in $D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}})$ then as shown in [@MaRo92 Chap. V, Exam. 1.12(ii)], $(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\mu}}_{\scriptscriptstyle{\text{coup}}}))$ is local. Quasi-regularity can be shown as follows. We denote by $(\mathcal{E},D(\mathcal{E}))$ the classical gradient Dirichlet form on $L^2(\mathbb{R}^d,dx)$. We know that both $(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}}))$ and $(\mathcal{E},D(\mathcal{E}))$ are quasi-regular. By $(E_k)_{k\in\mathbb{N}}$ we denote the $\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}}$-nest of compact sets in $\ddot{\Gamma}$. An $\mathcal{E}$-nest of compact sets in $\mathbb{R}^d$ is given by $(\overline{\Lambda_k})_{k\in\mathbb{N}}$. Hence $(F_k)_{k\in\mathbb{N}}$, where $F_k:=\overline{\Lambda}_k\times E_k$, is an exhausting sequence of compact sets in $\mathbb{R}^d\times\ddot{\Gamma}$. One easily shows that $C_0^\infty(\mathbb{R}^d)\otimes\bigcup_{k\ge 1}D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}})_{E_k} \subset D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\mu}}_{\scriptscriptstyle{\text{coup}}})$. 
Then using that $\bigcup_{k\ge 1}D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}})_{E_k}$ is dense in $D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}})$ with respect to $\sqrt{\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}}}$ and that $C_0^\infty(\mathbb{R}^d)$ is dense in $D(\mathcal{E})$ with respect to $\sqrt{\mathcal{E}}$ we can easily show that $C_0^\infty(\mathbb{R}^d)\otimes\bigcup_{k\ge 1}D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}})_{E_k}$ is dense, first in $C^\infty_0(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$, hence also in $D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}})$ with respect to $\sqrt{\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}}$. Thus $(F_k)_{k\in\mathbb{N}}$ is an $\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$-nest of compact sets. All further properties necessary to have quasi-regularity are clear due to quasi-regularity of $(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}},D(\mathcal{E}^{\scriptscriptstyle{\Gamma,{\mu}}}_{\scriptscriptstyle{\text{env}}}))$ and $(\mathcal{E},D(\mathcal{E}))$, respectively. Finally, we prove the conservativity of $(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\mu}}_{\scriptscriptstyle{\text{coup}}}))$. I.e., we have to show that $T^{\scriptscriptstyle{\text{coup}}}_t(1\otimes 1)=1$, with $(T^{\scriptscriptstyle{\text{coup}}}_t)_{t\ge 0}$ the $L^\infty$-contraction semigroup corresponding to\ $(\exp(-tH^{\scriptscriptstyle{\text{coup}}}))_{t\ge 0}$. Therefore, we denote by $(G^{\scriptscriptstyle{\text{coup}}}_\alpha)_{\alpha>0}$ the resolvent corresponding to $\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ and prove at first that $$\begin{aligned} _{\scriptscriptstyle{L^1}}(\mathfrak{F},G_1^{\scriptscriptstyle{\text{coup}}}(1\otimes 1))_{\scriptscriptstyle{L^\infty}}={_{\scriptscriptstyle{L^1}}(}\mathfrak{F}, 1)_{\scriptscriptstyle{L^\infty}}\quad\mbox{for all }\mathfrak{F}\in L^1(\mathbb{R}^d\times\Gamma,\hat{\mu})\cap L^{\infty}(\mathbb{R}^d\times\Gamma,\hat{\mu}).\end{aligned}$$ Here $_{\scriptscriptstyle{L^1}}(\cdot,\cdot)_{\scriptscriptstyle{L^\infty}}$ denotes the dual pairing between the spaces $L^1(\mathbb{R}^d\times\Gamma,\hat{\mu})$ and $L^\infty(\mathbb{R}^d\times\Gamma,\hat{\mu})$. In order to show this we choose $f_0:\mathbb{R}^d\to\mathbb{R}$ infinitely often differentiable such that $f_0(x)=1$ for $x\in[-1,1]^d$ and $f_0(x)=0$ for $x\in\mathbb{R}^d\setminus[-3,3]^d$. For $k\in\mathbb{N}$ we define $f_k(x)=f_0(k^{-1}x),~x\in\mathbb{R}^d$. 
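Since the decay of these cutoff norms is what drives the conservativity argument, we record the underlying scaling computation for completeness (a short check added here, stated for the $L^q$ norms with $q>d$ that pair with the $L^p$ factor in the Hölder estimate used below): $$\begin{aligned} \Vert \nabla f_k\Vert_{L^q(\mathbb{R}^d)}^q&=k^{-q}\int_{\mathbb{R}^d}\vert(\nabla f_0)(k^{-1}x)\vert^q\,dx=k^{\,d-q}\,\Vert \nabla f_0\Vert_{L^q(\mathbb{R}^d)}^q\longrightarrow 0,\\ \Vert \Delta f_k\Vert_{L^q(\mathbb{R}^d)}^q&=k^{-2q}\int_{\mathbb{R}^d}\vert(\Delta f_0)(k^{-1}x)\vert^q\,dx=k^{\,d-2q}\,\Vert \Delta f_0\Vert_{L^q(\mathbb{R}^d)}^q\longrightarrow 0,\end{aligned}$$ as $k\to\infty$, because $q>d$ and $f_0$ has compact support.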
Then for any $q>d$ we have $$\begin{aligned} \label{conv} \Vert \nabla f_k\Vert_{\scriptscriptstyle{L^p(\mathbb{R}^d)}}\to 0\quad\mbox{and}\quad\Vert \Delta f_k\Vert_{\scriptscriptstyle{L^p(\mathbb{R}^d)}}\to 0\quad\mbox{as }k\to\infty.\end{aligned}$$ It holds that $f_k\otimes 1\in C_0^\infty(\mathbb{R}^d)\otimes\mathcal{F}C_b^{\infty}(C^{\infty}_0(\mathbb{R}^d),\Gamma)$ and $$\begin{gathered} _{\scriptscriptstyle{L^1}}(\mathfrak{F},G_1^{\scriptscriptstyle{\text{coup}}}(1\otimes 1))_{\scriptscriptstyle{L^\infty}}=\lim_{k\to\infty}(\mathfrak{F},G_1^{\scriptscriptstyle{\text{coup}}}(f_k\otimes 1))_{\scriptscriptstyle{L^2}(\mathbb{R}^d\times\Gamma,\hat{\mu})}\\ =\lim_{k\to\infty}((1-L^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{coup}})G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},G_1^{\scriptscriptstyle{\text{coup}}}(f_k\otimes 1))_{\scriptscriptstyle{L^2}(\mathbb{R}^d\times\Gamma,\hat{\mu})}\\ =\lim_{k\to\infty}(G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},f_k\otimes 1)_{\scriptscriptstyle{L^2}(\mathbb{R}^d\times\Gamma,\hat{\mu})}\\ =\lim_{k\to\infty}((1-L^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{coup}}+L^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{coup}})G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},f_k\otimes 1)_{\scriptscriptstyle{L^2}(\mathbb{R}^d\times\Gamma,\hat{\mu})}\\ ={_{\scriptscriptstyle{L^1}}}(\mathfrak{F},1\otimes 1)_{\scriptscriptstyle{L^\infty}}+\lim_{k\to\infty}(L^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{coup}}G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},f_k\otimes 1)_{\scriptscriptstyle{L^2(\mathbb{R}^d\times\Gamma,\hat{\mu})}}\\ ={_{\scriptscriptstyle{L^1}}}(\mathfrak{F},1)_{\scriptscriptstyle{L^\infty}}+\lim_{k\to\infty}(G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},L^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{coup}}(f_k\otimes 1))_{\scriptscriptstyle{L^2(\mathbb{R}^d\times\Gamma,\hat{\mu})}}\\ ={_{\scriptscriptstyle{L^1}}}(\mathfrak{F},1)_{\scriptscriptstyle{L^\infty}}+\lim_{k\to\infty}(G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},\Delta f_k+\nabla f_k\nabla^\Gamma_\gamma\phi)_{\scriptscriptstyle{L^2(\mathbb{R}^d\times\Gamma,\hat{\mu})}}.\end{gathered}$$ Since $$\begin{aligned} (G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},\Delta f_k+\nabla f_k\nabla^\Gamma_\gamma\phi)_{\scriptscriptstyle{L^2(\mathbb{R}^d\times\Gamma,\hat{\mu})}}\le\Vert 1_{\{\nabla f_k\not=0\}}G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F}\Vert_{\scriptscriptstyle{L^p(\mathbb{R}^d\times\Gamma,\hat{\mu})}}\Vert \Delta f_k+\nabla f_k\nabla^\Gamma_\gamma\phi\Vert_{\scriptscriptstyle{L^q(\mathbb{R}^d\times\Gamma,\hat{\mu})}}<\infty,\end{aligned}$$ by (D$\text{L}^{\text{q}}$), we obtain that $\lim_{k\to\infty}(G_1^{\scriptscriptstyle{\text{coup}}}\mathfrak{F},\Delta f_k+\nabla f_k\nabla^\Gamma_\gamma\phi)_{L^2(\mathbb{R}^d\times\Gamma,\hat{\mu})}=0$ by (\[conv\]). Hence $_{\scriptscriptstyle{L^1}}(\mathfrak{F},G_1^{\scriptscriptstyle{\text{coup}}}1\otimes 1)_{\scriptscriptstyle{L^\infty}}={_{\scriptscriptstyle{L^1}}}(\mathfrak{F}, 1)_{\scriptscriptstyle{L^\infty}}$ for all $\mathfrak{F}\in L^1(\mathbb{R}^d\times\Gamma,\hat{\mu})\cap L^{\infty}(\mathbb{R}^d\times\Gamma,\hat{\mu})$. Now using the relation between resolvents and semigroups via the Laplace transform together with the Hahn-Banach theorem we obtain conservativity. \[thmexprocoup\] Suppose the assumptions of Theorem \[thmexcoup\]. 1. 
there exists a conservative diffusion process $$\begin{aligned} \mathbf{{M}}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}=\left({\mathbf{{\Omega}}}^{\scriptscriptstyle{\text{coup}}},{\mathbf{{F}}}^{\scriptscriptstyle{\text{coup}}},({\mathbf{{F}}}^{\scriptscriptstyle{\text{coup}}}_t)_{t\ge 0},({\mathbf{X}}^{\scriptscriptstyle{\text{coup}}}_t)_{t\ge 0},({\mathbf{{P}}}^{\scriptscriptstyle{\text{coup}}}_{(x,\gamma)})_{(x,\gamma)\in\mathbb{R}^d\times\ddot{\Gamma}}\right)\end{aligned}$$ on $\mathbb{R}^d\times\ddot{\Gamma}$ which is properly associated with $\left(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D(\mathcal{E}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}})\right)$, i.e., for all ($\hat{\mu}$-versions of) $\mathfrak{F}\in L^2(\mathbb{R}^d\times\ddot{\Gamma},\hat{\mu})$ and all $t>0$ the function $$\begin{aligned} (\xi,\gamma)\mapsto p_t^{\scriptscriptstyle{\text{coup}}} \mathfrak{F}(\xi,\gamma):=\int_{{\mathbf{\Omega}}}\mathfrak{F}({\mathbf{X}}^{\scriptscriptstyle{\text{coup}}}_t)\,d{\mathbf{{P}}}^{\scriptscriptstyle{\text{coup}}}_{(\xi,\gamma)},\quad(\xi,\gamma)\in\mathbb{R}^d\times\ddot{\Gamma},\end{aligned}$$ is an $\mathcal{E}^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$-quasi-continuous version of $\exp(-t {H}^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}})\mathfrak{F}$. $\mathbf{{M}}^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ is up to $\hat{\mu}$-equivalence unique (cf. [@MaRo92 Chap. IV, Sect. 6]). In particular, $\mathbf{{M}}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ is $d\xi\otimes\mu$-symmetric, i.e., $$\begin{aligned} \int_{\mathbb{R}^d\times\ddot{\Gamma}} \mathfrak{G}\,p^{\scriptscriptstyle{\text{coup}}}_t \mathfrak{F}\,d\hat{\mu}=\int_{\mathbb{R}^d\times\ddot{\Gamma}}\mathfrak{F}\,p^{\scriptscriptstyle{\text{coup}}}_t \mathfrak{G}\,d\hat{\mu}\end{aligned}$$ for all $\mathfrak{F},\mathfrak{G}:\mathbb{R}^d\times\ddot{\Gamma}\to\mathbb{R_+},~\mathcal{B}(\mathbb{R}^d\times\ddot{\Gamma})$[-measurable]{} and has $\hat{\mu}=d\xi\otimes\mu$ as invariant measure. 2. $\mathbf{{M}}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ from (i) solves the martingale problem for $\left(-{H}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}},D({H}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}})\right)$, i.e., for all $\mathfrak{G}\in D({H}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}})\supset C_0^\infty(\mathbb{R}^d)\otimes\mathcal{F}C_b^\infty(C^\infty_0(\mathbb{R}^d),\Gamma)$ $$\begin{aligned} \widetilde{\mathfrak{G}}(\mathbf{X}^{\scriptscriptstyle{\text{coup}}}_t)-\widetilde{\mathfrak{G}}(\mathbf{X}^{\scriptscriptstyle{\text{coup}}}_0)+\int_0^t {H}^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}} G(\mathbf{{X}}^{\scriptscriptstyle{\text{coup}}}_t)\,ds,\quad t\ge 0,\end{aligned}$$ is an $(\mathbf{{F}}^{\scriptscriptstyle{coup}}_t)_{t\ge 0}$-martingale under $\mathbf{{P}}^{\scriptscriptstyle{\text{coup}}}_{(\xi,\gamma)}$ (hence starting at $(\xi,\gamma)$) for $\mathcal{E}^{\scriptscriptstyle{\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$-q.a. 
$(\xi,\gamma)\in\mathbb{R}^d\times\ddot{\Gamma}$.\ (Here $\widetilde{\mathfrak{G}}$ denotes a quasi-continuous version of $\mathfrak{G}$, cf. [@MaRo92 Chap. IV, Prop.3.3].) <!-- --> 1. By Theorem \[thmexcoup\] the proof follows directly from [@MaRo92 Chap. V, Theo. 1.11]. 2. This follows immediately by [@MR1335494 Theo. 3.5]. \[remextppro\] 1. As before we obtain a diffusion process on $\mathbb{R}^d\times\Gamma$ for $d\ge 2$. 2. To get the tagged particle process we do a projection on the first component of $\mathbf{M}^{\scriptscriptstyle{\mathbb{R}^d\times\Gamma,\hat{\mu}}}_{\scriptscriptstyle{\text{coup}}}$ taking values in $\mathbb{R}^d$. 3. Note that in general the tagged particle process no longer is a Markov process. [DMFGW89]{} S. Albeverio, Yu.G. Kondratiev, and M. R[ö]{}ckner. Analysis and geometry on configuration spaces. , 154:444–500, 1998. S. Albeverio, Yu.G. Kondratiev, and M. R[ö]{}ckner. Analysis and geometry on configuration spaces: The [G]{}ibbsian case. , 157:242–291, 1998. S. Albeverio and M. R[ö]{}ckner. Dirichlet form methods for uniqueness of martingale problems and applications. In [*Stochastic analysis (Ithaca, NY, 1993)*]{}, volume 57 of [ *Proc. Sympos. Pure Math.*]{}, pages 513–528. Amer. Math. Soc., Providence, RI, 1995. F. Conrad and T. Kuna. An integration by parts formula for the generator of uniform translations in the configuration space. , 2009. A. De Masi, P.A. Ferrari, S. Goldstein, and W.D. Wick. , 55(3-4):787–855, 1989. M. Grothaus, Yu.G. Kondratiev, and M. Röckner. /[V]{}-limit for stochastic dynamics in continuous particle systems. , 137(1-2):121–160, 2007. M. Z. Guo and G. Papanicolaou. Self-diffusion of interacting brownian particles. In [*Probabilistic methods in mathematical physics*]{}, pages 113–151. Academic Press, Inc., Boston, Mass., 1985. O. Kallenberg. . Akademie-Verlag, Berlin, third edition, 1983. Yu.G. Kondratiev and T. Kuna. Harmonic analysis on configuration space [I]{}. [G]{}eneral theory. , 5(2):201–233, 2002. Yu.G. Kondratiev and T. Kuna. Correlation functionals for [G]{}ibbs measures and [R]{}uelle bounds. , 9(1):9–58, 2003. J. Kerstan, K. Matthes, and J. Mecke. . Akademie-Verlag, Berlin, 1978. R. Lang. Unendlichdimensionale [W]{}ienerprozesse mit [W]{}echselwirkung [II]{}. , 39:277–299, 1977. A. Lenard. States of classical statistical mechanical systems of infinitely many particles. [I]{}. , 59:219–239, 1975. A. Lenard. States of classical statistical mechanical systems of infinitely many particles. [II]{}. , 59:241–256, 1975. Z.-M. Ma and M. R[ö]{}ckner. . Springer, Berlin, New York, 1992. Z.-M. Ma and M. R[ö]{}ckner. Construction of diffusions on configuration spaces. , 37(2):273–314, 2000. H. Osada. Dirichlet form approach to infinite-dimensional [W]{}iener process with singular interactions. , 176:117–131, 1996. H. Osada. An invariance principle for [M]{}arkov processes and [B]{}rownian particles with singular interactions. , 34(2):217–248, 1998. C.J. Preston. , volume 534 of [*Lecture Notes in Mathematics*]{}. Springer, Berlin, Heidelberg, New York, 1976. C. Preston. Canonical and microcanonical [G]{}ibbs states. , 46:125–158, 1979. M. R[ö]{}ckner and B. Schmuland. Quasi-regular Dirichlet forms: Examples and counterexamples. , 47(1):165–200, 1995. M. R[ö]{}ckner and B. Schmuland. A support property for infinite-dimensional interacting diffusion processes. , 326(3):359–364, 1998. D. Ruelle. Superstable interactions in classical statistical mechanics. , 18:127–159, 1970. M.W. Yoshida. 
Construction of infinite-dimensional interacting diffusion process through [D]{}irichlet forms. , 106:265–297, 1996. [^1]: 2000 [*Mathematics Subject Classification. primary: 60J60, 82C22, secondary: 60K37, 37L55.*]{}\ We thank Yuri Kondratiev, Michael Röckner, Sven Struckmeier and Heinrich v. Weizsäcker for discussions and helpful comments. Special thanks go to Florian Conrad who proposed a proof for conservativity of the coupled process. Financial support by the DFG through the project GR 1809/4-2 is gratefully acknowledged.
--- abstract: 'Over the last decade there has been mounting evidence that the strength of the Sun’s polar magnetic fields during a solar cycle minimum is the best predictor of the amplitude of the next solar cycle. Surface flux transport models can be used to extend these predictions by evolving the Sun’s surface magnetic field to obtain an earlier prediction for the strength of the polar fields, and thus the amplitude of the next cycle. In 2016, our Advective Flux Transport (AFT) model was used to do this, producing an early prediction for Solar Cycle 25. At that time, AFT predicted that Cycle 25 would be similar in strength to Cycle 24, with an uncertainty of about 15%. AFT also predicted that the polar fields in the southern hemisphere would weaken in late 2016 and into 2017 before recovering. That AFT prediction was based on the magnetic field configuration at the end of January 2016. We now have 2 more years of observations. We examine the accuracy of the 2016 AFT prediction and find that the new observations track well with AFT’s predictions for the last two years. We show that the southern relapse did in fact occur, though the timing was off by several months. We propose a possible cause for the southern relapse and discuss the reason for the offset in timing. Finally, we provide an updated AFT prediction for Solar Cycle 25 which includes solar observations through January of 2018.' bibliography: - 'main.bib' title: 'An Updated Solar Cycle 25 Prediction with AFT: The Modern Minimum' --- Cycle 25 will be slightly weaker than Cycle 24, making it the weakest cycle on record in the last hundred years. Weak cycles are preceded by long extended minima – we may not reach the Cycle 24/25 minimum until 2021. We are currently (beginning with Cycle 24) in the midst of the modern Gleissberg cycle minimum. It is too early to determine if this will remain a short Gleissberg minimum (like the Dalton) or if the Sun will produce a longer grand minimum (like the Maunder). Introduction ============ The appearance of solar activity (sunspots, flares, coronal mass ejections, etc.) is cyclic with an average period of about 11 years. Large solar storms, which also vary with the solar activity cycle, produce space weather events that can have devastating impacts on our assets in space, as well as here on Earth (e.g., communications and power grids). Accurate solar cycle predictions are essential for planning future and current space missions and for minimizing disruptions to the nation’s infrastructure. While there are still several different solar cycle prediction techniques [@2008Pesnell; @2015Hathaway], one method is emerging as a definitive leader in the field: the amplitude of the Sun’s polar magnetic fields at solar cycle minimum [e.g., @2005Svalgaard_etal; @2013MunozJaramillo_etal]. Surface Flux Transport (SFT) models [@2005Sheeley; @2014Jiang_etalB], which simulate the evolution of the Sun’s magnetic field, provide a way of estimating the amplitude of the polar fields several years prior to solar minimum, thereby extending the range of solar cycle predictions. The Advective Flux Transport (AFT) model is one such SFT model, designed specifically with the intent of being as realistic as possible without the use of free parameters [@2014UptonHathawayA; @2014UptonHathawayB; @2015UgarteUrra_etal]. The AFT model was recently used to make an ensemble of 32 predictions for the amplitude of Solar Cycle 25 [@2016HathawayUpton] (hereafter referred to as HU16).
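SFT models of this kind advance the Sun’s surface radial field under differential rotation, meridional flow, and (super)granular dispersal, with emerging active regions acting as sources. As a minimal, hedged sketch of the idea only (this is not the AFT code; the one-dimensional, longitude-averaged form, the flow profile $v(\lambda)=v_0\sin 2\lambda$, the diffusivity, and the explicit time stepping are all illustrative assumptions), the transport of a zonally averaged field can be stepped forward as follows:

```python
import numpy as np

# Toy, longitude-averaged surface flux transport step (illustrative only):
# dB/dt = -1/(R cos(lat)) d/dlat[ v(lat) B cos(lat) ]
#         + eta/(R^2 cos(lat)) d/dlat[ cos(lat) dB/dlat ]
R_SUN = 6.96e5      # solar radius [km]
ETA = 500.0         # assumed supergranular diffusivity [km^2/s]
V0 = 0.015          # assumed peak meridional flow speed [km/s] (15 m/s)

lat = np.deg2rad(np.linspace(-89.0, 89.0, 179))   # cell centres, poles excluded
dlat = lat[1] - lat[0]
coslat = np.cos(lat)

def sft_step(B, dt):
    """Advance the zonally averaged radial field B(lat) [G] by dt seconds."""
    v = V0 * np.sin(2.0 * lat)                    # poleward flow in each hemisphere
    # upwind advective flux v*B*cos(lat) at the interior cell interfaces
    vi = 0.5 * (v[:-1] + v[1:])
    up = np.where(vi > 0.0, B[:-1] * coslat[:-1], B[1:] * coslat[1:])
    flux = np.zeros(lat.size + 1)
    flux[1:-1] = vi * up                          # zero flux through the poles
    adv = -(flux[1:] - flux[:-1]) / (R_SUN * coslat * dlat)
    # diffusion in conservative (flux) form
    grad = np.zeros(lat.size + 1)
    grad[1:-1] = (B[1:] - B[:-1]) / dlat
    cosf = np.ones(lat.size + 1)
    cosf[1:-1] = 0.5 * (coslat[:-1] + coslat[1:])
    dif = ETA * (cosf[1:] * grad[1:] - cosf[:-1] * grad[:-1]) / (R_SUN**2 * coslat * dlat)
    return B + dt * (adv + dif)

# Example: evolve the residual of a southern-hemisphere bipolar region for one year.
lat_deg = np.rad2deg(lat)
B = 5.0 * np.exp(-((lat_deg + 20.0) / 5.0) ** 2) - 5.0 * np.exp(-((lat_deg + 30.0) / 5.0) ** 2)
dt = 6 * 3600.0                                   # 6 h keeps this explicit scheme stable
for _ in range(int(365 * 24 * 3600 / dt)):
    B = sft_step(B, dt)
south = lat_deg < -55.0
print("mean field poleward of -55 deg after 1 yr:",
      np.average(B[south], weights=coslat[south]), "G")
```

AFT itself evolves the full two-dimensional surface field and treats the convective motions explicitly (they are among the parameters varied in the study described next), so the simple diffusivity above stands in only crudely for that process.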
In this study, 3 model parameters - the convective motion details, active region tilt, and meridional flow profile - were varied in order to determine the relative uncertainty produced. HU16 found that the polar fields near the end of Cycle 24 would be similar to or slightly smaller than the polar fields near the end of Cycle 23, suggesting Cycle 25 would be similar to or somewhat weaker than Cycle 24. After four years of simulation, the variability across the ensemble produced an accumulated uncertainty of about 15%. Additionally, all realizations in the HU16 ensemble predicted a relapse in the southern polar field in late 2016 and into 2017. One of the biggest sources of uncertainty in making solar cycle predictions comes from the large scatter inherent in the systematic (Joy’s Law) tilt of Active Regions (ARs) [@2014Jiang_etalA]. This tilt angle produces an axial dipole moment in newly emerged ARs, which continues to evolve during the lifetime of the AR. Over the course of the solar cycle, the axial dipole moments of the residual ARs are transported to higher latitudes, where they accumulate, causing the reversal and build-up of the polar fields. The net global axial dipole at the end of the cycle (i.e., solar cycle minimum) forms the seed that determines the amplitude of the next cycle. @2014Cameron_etal [@2017Nagy_etal] showed that large, highly tilted ’rogue’ active regions can have a huge impact on the Sun’s axial dipole moment, particularly if they emerge close to the equator. We are now two years closer to solar minimum than at the time of our last prediction. At this late stage of the solar cycle, fewer ARs emerge, reducing the likelihood that a ’rogue’ active region will emerge. Those that do emerge typically have much weaker flux [@2015MunozJaramillo], emerge closer to the equator (Spörer’s Law), and have smaller tilt angles (Joy’s Law). The net effect of all of these factors, barring the emergence of a large ’rogue’ active region, is that the few ARs that are left to emerge will have very small axial dipole moments and little impact on the polar field strengths. Another effect is that the uncertainty caused by the variability in the tilt is significantly reduced. With the solar cycle minimum only 2-3 years away, this is an optimal time for an updated prediction. In this paper we begin by revisiting the previous Solar Cycle 25 prediction made with the AFT model. We discuss the accuracy of those predictions as compared to the observations that have since occurred. We then provide an updated prediction for Solar Cycle 25. Previous Prediction Fidelity ============================ ![Validating the AFT 2016 Predictions. This figure shows the polar field predictions that were made in [@2016HathawayUpton] (in blue) along with the polar field observations (WSO in black and AFT Baseline in red) that have occurred since the prediction was made. The average polar fields poleward of 55° are shown on the left. The polar field strength as measured from the axial dipole moment is shown on the right.[]{data-label="fig:Jan2016"}](Jan2016Predictions_new.png){width="30pc"} The prediction of HU16 was initiated in January 2016. We now have 2 years of observations of the Sun’s polar fields to compare with, allowing us to investigate the accuracy of those simulations. We begin with the predicted and observed axial dipole moment (Figure \[fig:Jan2016\], right panel) and find that the observations track right in the middle of the ensemble of predictions.
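Both of the diagnostics compared here can be computed directly from a full-surface radial-field map. The sketch below is illustrative only (it assumes a map $B_r$ in gauss on an equally spaced latitude-longitude grid; the actual WSO and AFT products differ in resolution and calibration), but it shows the two quantities plotted in Figure \[fig:Jan2016\]: the area-weighted polar-cap field poleward of $\pm$55° and the axial dipole moment $D=\frac{3}{4\pi}\oint B_r\sin\lambda\,d\Omega$.

```python
import numpy as np

def polar_field(br, lat_deg, cutoff=55.0):
    """Area-weighted mean radial field [G] poleward of +/- `cutoff` degrees.

    br      : array of shape (n_lat, n_lon), radial field on a lat-lon grid
    lat_deg : latitudes (degrees) of the rows of `br`
    """
    w = np.cos(np.deg2rad(lat_deg))         # relative cell area on a lat-lon grid
    zonal = br.mean(axis=1)                 # longitudinal average at each latitude
    north, south = lat_deg > cutoff, lat_deg < -cutoff
    return (np.average(zonal[north], weights=w[north]),
            np.average(zonal[south], weights=w[south]))

def axial_dipole(br, lat_deg):
    """Axial dipole moment D = (3/4*pi) * integral of B_r sin(lat) dOmega [G]."""
    lam = np.deg2rad(lat_deg)
    zonal = br.mean(axis=1)                 # the phi-integral divided by 2*pi
    return 1.5 * np.trapz(zonal * np.sin(lam) * np.cos(lam), lam)

# Check on a purely dipolar test field B_r = B0 sin(lat): D should recover ~B0.
lat = np.linspace(-89.5, 89.5, 180)
br = 2.0 * np.sin(np.deg2rad(lat))[:, None] * np.ones((1, 360))
print(polar_field(br, lat))                 # roughly (+1.8, -1.8) G polar caps
print(axial_dipole(br, lat))                # roughly 2.0 G
```

Tracking these two numbers through time is what produces the curves in the left and right panels, respectively.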
Next, we compare the polar fields as measured above 55° (Figure \[fig:Jan2016\], left panel). In the northern hemisphere, the observations track the predictions fairly well. There is strong agreement for the first year, but the predictions are slightly weaker in the second year. However, when we compare the polar fields in the southern hemisphere, we find that the agreement is not as good. While the southern hemisphere relapse that was predicted in HU16 did in fact occur, it appears to have happened about nine months later than predicted. **Why did the southern relapse occur later in the observations than in the AFT predictions?** Before we can answer this question, we must first look at the reason that the southern relapse occurred in the first place. ![Sequence of AFT Magnetic Maps. This sequence of AFT maps has been supersaturated to enhance the appearance of the weak magnetic field at the poles. The 55° latitude lines have been marked with thin black lines. AR 12192 has been circled in red on the top left panel. The subsequent evolution of AR 12192 can be seen in the first two rows. The red circled regions in the bottom panels show the formation of a positive polarity region right at the 55° latitude. Two additional active regions, 12415 (top right panel) and 12422 (middle right panel), occurred later and also contributed to the formation of the positive polarity region at the 55° latitude line. []{data-label="fig:AFTmaps"}](SouthernRelapseSmall.png){width="33pc"} The updated observations show that the southern polar field started off progressing normally, with the negative polarity growing. But then, in October of 2014, an extraordinary new Active Region, AR 12192, emerged and created a very large positive polarity stream that was transported to the South, as shown in Figure \[fig:AFTmaps\]. AR 12192 had Hale’s polarity and a small tilt angle, consistent with Joy’s Law. It was the largest active region in the last 24 years, and it ranked 33rd largest of the 32,908 active regions recorded since 1874 [@2015Sun_etal]. From the sequence of maps in Figure \[fig:AFTmaps\], we see that both the leading and following polarities are sheared out by the differential rotation. Both polarities are transported to high latitudes, but this shearing effect pushed the leading positive polarity flux to higher latitudes than the negative following polarity. The polar-effectiveness (i.e., the amount of flux transported to the poles) of the leading polarity may have been enhanced because it was surrounded by a weak negative flux region. This minimized cancellation at the ’Bow’-side of the AR, while the following polarity flux was squeezed and canceled by the positive polarity flux that surrounded it. This culminated in a weak band of positive polarity flux just above the 55° line in the South, as seen in the bottom middle panel of Figure \[fig:AFTmaps\]. But AR 12192 only laid the foundation for the relapse. The positive band it created was aided by subsequent Active Regions (most notably NOAA 12415 and NOAA 12422), which helped to enhance the positive polarity band that formed in the South. As this positive polarity band progressed poleward, it degraded the negative southern pole, causing the subsequent relapse. Strong shear in the differential rotation at mid-latitudes stretches the magnetic flux in the East-West direction. When both polarities are transported to high latitudes, this tends to produce alternating bands of flux which form long polarity inversion lines stretching East-West.
Throughout 2016, the neutral line for the positive band was right at 55° latitude. This latitude, coincidentally, was the cutoff used to measure the hemispheric polar fields (Figure \[fig:Jan2016\], left panel). Small differences in the SFT processes (e.g., meridional flow and convection pattern) can shift significant amounts of flux above or below this arbitrary line. This can cause differences in the polar field measurements above that latitude, and a difference in the timing of the flux crossing 55° translates into a difference in the timing of the relapse. The HU16 simulations had slightly more of the positive flux cross the 55° line, causing the relapse to occur sooner in those simulations. This serves as a reminder that while polar field measurements above a given latitude are useful for identifying hemispheric asymmetries, they can be somewhat subjective and lead to offsets in prediction timing [@2014UptonHathawayA]. Despite this offset in the timing, we are reassured by the fact that the axial dipole predictions (Figure \[fig:Jan2016\], right panel) are remarkably well matched, falling within the middle of the prediction ensemble. This provides confidence in the ability of AFT to accurately predict the evolution of the polar fields at least two years in advance during the early part of the declining phase of the solar cycle. Updated Cycle 25 Prediction =========================== ![AFT 2018 Predictions. This figure shows the polar field observations (red) along with the AFT predictions (2016 in the lighter blue and 2018 in the darker blue). The polar field strengths as measured from 55° and above are shown on the left. The polar field strength as measured from the axial dipole moment is shown on the right.[]{data-label="fig:AFT2018"}](Jan2018Predictions_new.png){width="30pc"} We now have two additional years of observations since the predictions of HU16. Here, instead of a start date of January 2016, we start the new prediction in January 2018. At this time, the northern polar field is stronger, and the southern polar field is weaker, than they were in January of 2016. As it is later in the cycle, we expect fewer active regions to emerge. The active regions that do emerge will be smaller, will be at lower latitudes, and will tend to have a small tilt angle. All of these characteristics work together to reduce the axial dipole moment of each active region, thereby reducing its polar effectiveness. At this late stage of the cycle, the active regions that will emerge will have little to no effect on the polar fields that will ultimately produce Solar Cycle 25, significantly reducing the uncertainty in our prediction for the next cycle. Here we ran 10 simulations using the active regions from Solar Cycle 14, varying both Joy’s tilt and the convective pattern (see HU16 for the details). The results of all of these simulations are shown in Figure \[fig:AFT2018\]. The average of all 10 realizations gives an axial dipole strength at the start of 2020 of +1.56 $\pm 0.05$ G. WSO gave an axial dipole strength of -1.61 G at the start of Cycle 24, +3.21 G at the start of Cycle 23, and -4.40 G at the start of Cycle 22. **This suggests that Cycle 25 will be another small cycle, with an amplitude slightly smaller than ($\sim$ 95-97%) the size of Cycle 24.
This would make Solar Cycle 25 the smallest cycle in the last 100 years.** This indicates that the weak Cycle 24 is not an isolated weak cycle, but rather the onset of the modern Gleissberg minimum [@1939Gleissberg], which will include Cycle 25. At present this is akin to the last Gleissberg minimum (SC12, SC13, & SC14), which occurred in the late 1800s and early 1900s. Unfortunately, we will need to wait another 10-15 years before we will know if the Sun will go into a deeper minimum state (e.g., the Dalton or Maunder minima, or somewhere in between) or if it will recover as it did following the last Gleissberg minimum. Weak cycles are preceded by long extended minima [@2015Hathaway] and we expect a similar deep, extended minimum for the Cycle 24/25 minimum in 2020. Based on the latest prediction, **we expect that minimum will be closer to the end of 2020 or beginning of 2021.** Long extended minima such as this are punctuated by a large number of spotless days (e.g., SC12-SC15 and SC24). Similarly, **we expect that the Cycle 24/25 minimum will include extended periods of spotless days throughout 2020 and into 2021.** Fortunately, the strength of the axial dipole doesn’t change much during 2020: +1.56 $\pm 0.05$ G for the start of 2020 and +1.54 $\pm 0.04$ G for the start of 2021. Therefore, this extended minimum should have little impact on the prediction for Cycle 25. Conclusions =========== We have investigated the accuracy of the predictions made by AFT in 2016 (HU16). We found that those predictions are largely in line with the observations that have occurred since that prediction was made. The biggest discrepancy was found to be the timing of a relapse in the strength of the southern polar field: while the amplitude was correct, the relapse actually occurred about 9 months later. We identified a few active regions that produced leading polarity streams that caused this relapse, with the most significant of these ARs being NOAA 12192. We found that the offset in the timing of the relapse was due primarily to the formation of the polarity inversion line right at the 55° latitude cutoff. Slight differences in the surface flux transport can significantly change the amount of flux above or below this line, resulting in offsets in the timing of the evolution of the hemispheric polar fields. Despite this offset, the evolution of the axial dipole for the last 2 years was accurately predicted in HU16. We provided an updated prediction for Solar Cycle 25, which incorporated the observations up to January 2018. The new prediction gave an axial dipole of +1.56 $\pm 0.05$ G for the start of 2020 and +1.54 $\pm 0.04$ G for the start of 2021. This indicated that Cycle 25 will be on the order of 95% of Cycle 24. Of the predictions that use the axial dipole as a predictor, AFT is on the lower end of the spectrum. [@2017Jiang_Cao] expects the axial dipole in 2020 to be 1.76 $\pm$ 0.68 G, or comparable to Cycle 24. [@2017Wang] also expects Cycle 25 to be comparable to Cycle 24. [@2016Cameron_etal] predicts that Cycle 25 will be slightly stronger than Cycle 24, but acknowledges that the reliability of this prediction is limited by the intrinsic uncertainty. Given the consensus of these predictions with our own results, we are confident that Cycle 25 will indeed be another weak cycle. We note that our new prediction (+1.56 $\pm 0.05$ G) falls within the uncertainty given in our HU16 prediction (+1.36 $\pm 0.20$ G).
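As a quick arithmetic check of the quoted fraction (it simply assumes the linear scaling between the axial dipole at minimum and the next cycle’s amplitude used throughout), the ratios of the predicted dipole to the value WSO measured at the start of Cycle 24 are $$\frac{|+1.56|}{|-1.61|}\approx 0.97,\qquad\frac{|+1.54|}{|-1.61|}\approx 0.96,$$ consistent with the quoted $\sim$95-97% of the Cycle 24 amplitude.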
While this demonstrates that AFT can accurately predict the evolution of the axial dipole, within the uncertainty, 4 years in advance of the minimum, the addition of two more years of observations significantly improves the precision of the AFT solar cycle predictions. At this late stage of the cycle, the uncertainty in AFT’s ability to predict the polar fields is very small. We acknowledge that there is additional uncertainty associated with using the axial dipole as a predictor of the amplitude of the next cycle. Compounding this is the fact that, while this trend appears to be linear for cycles stronger than Cycle 24, we do not yet have data to show that this relationship holds for cycles that are weaker than Cycle 24 (see Figure 1 of HU16, which shows that Cycle 24 is the smallest cycle used to determine this relationship). Though we do make this assumption in our prediction for the strength of Cycle 25, Cycle 25 will be a test of this assumption. As the saying goes, *only time will tell*, but we await it with open arms. The data presented in this article are freely and publicly available at the following web address: <http://solarcyclescience.com/Predictions/2018GRLData.zip>. L.A.U. was supported by the National Science Foundation Atmospheric and Geospace Sciences Postdoctoral Research Fellowship Program (Award Number: 1624438) and is hosted by the High Altitude Observatory at the National Center for Atmospheric Research (NCAR). NCAR is sponsored by the National Science Foundation. We would like to thank Robert Cameron for insightful discussions about the southern relapse. Finally, we would like to thank the anonymous referees for their careful reading of our manuscript and their valuable comments and suggestions.
--- abstract: 'In a network, the shortest paths between nodes are of great importance as they allow the fastest and strongest interaction between nodes. However, measuring the shortest paths between all nodes in a large network is computationally expensive. In this paper we propose a method to estimate the shortest path length (SPL) distribution of a network by random walk sampling. To deal with the unequal inclusion probabilities of dyads (pairs of nodes) in the sample, we generalize the use of the Hansen-Hurwitz estimator and the Horvitz-Thompson estimator (and their ratio forms) and apply them to the sampled dyads. Based on the theory of Markov chains we prove that the selection probability of a dyad is proportional to the product of the degrees of the two nodes. To approximate the actual SPL for a dyad, we use the observed SPL in the induced subgraph for networks with large degree variability, i.e., when the standard deviation is at least twice the mean, and estimate the SPL using landmarks for networks with small degree variability. By simulation studies and applications to real networks, we find that 1) for large networks, high estimation accuracy can be achieved by using a single random walk or multiple random walks with total number of steps equal to at least $20\%$ of the nodes in the network; 2) the estimation performance increases as the network size increases but tends to stabilize when the network is large enough; 3) a single random walk performs as well as multiple random walks; 4) the Horvitz-Thompson ratio estimator performs best among the four estimators.' author: - 'Minhui Zheng and Bruce D. Spencer' bibliography: - 'reference.bib' title: Estimating Shortest Path Length Distributions via Random Walk Sampling --- Introduction ============ In a large network, the shortest paths between nodes are of particular importance because they are likely to provide the fastest and strongest interaction between nodes ([@katzav2015analytical]). Although measures such as diameter and mean distance ([@newman2010networks], [@chung2002average], [@cohen2003scale]) have been studied extensively, the entire shortest path length distribution (SPLD) has received little attention. While the shortest path for a pair of nodes is measurable by existing algorithms such as breadth-first search, measuring the shortest paths for all pairs of nodes in a large network is computationally expensive ([@potamias2009fast]).\ In this paper, we study the problem of estimating SPLDs in networks via random walk sampling. In particular, for each possible value of the shortest path length (SPL), we estimate the fraction of dyads with that value of SPL. There are two aspects to the problem. First, if a dyad is observed in the sample, the observed SPL in the sample may exceed the actual SPL in the population. Second, the dyads observed in a random walk sample have unequal chances of being included in the sample. With regard to the former aspect, [@ribeiro2012multiple] have shown that in a network with large degree variability, random walks often uncover the shortest paths. In other words, for two nodes in a network where the variance of the degree distribution is very large, the observed shortest path in the subgraph induced by a random walk sample is usually the true shortest path in the population. This property is present in scale-free networks where the degree distribution follows the power law.
In this paper, we’ve shown that this property extends to networks whose degree distribution has a large coefficient of variation ($c.v.$), i.e., whose ratio of standard deviation to mean is large. On the other hand, [@potamias2009fast] have shown that in large networks, when calculating the actual distance is computationally expensive, one can use precomputed information to obtain fast estimates of the actual distance in very short time. More specifically, one can first choose a small fraction of nodes as landmarks and compute distances from every node to them. When the distance between a pair of nodes is needed, it can be estimated quickly by combining their precomputed distances to the landmarks.\ With regard to dyads’ unequal probabilities of being included in the sample, we draw upon classical sampling theory for estimating totals from samples of elements included with unequal probabilities. The estimators we use are Hansen-Hurwitz estimator ([@hansen1943theory]) and Horvitz-Thompson estimator ([@horvitz1952generalization]). Both estimators will be used in original form and ratio form to estimate the fraction of dyads with a particular value of SPL. The ratio form is defined with the numerator equal to the estimator of the number of dyads with a particular value of SPL and the denominator equal to the estimator of the total number of dyads. To develop the Hansen-Hurwitz estimator, we derive from theory of Markov chains ([@newman2010networks], [@sigman2009], [@anderson1989second]) that the expected number of appearances of a dyad in a random walk sample with a sufficiently large number of steps is approximately proportional to the product of the degrees of the two nodes. This result allows application of the Hansen-Hurwitz estimator to the sample including a duplicate selection of nodes. To develop the Horvitz-Thompson estimator, we approximate the random walk sampling of nodes by an adjusted multinomial sampling model in $t$ draws, with $t$ equal to the number of steps in the random walk. Then we apply the Horvitz-Thompson estimator to the sample excluding duplicate nodes.\ We provide practical solutions to estimate $c.v.$’s and weights used in both estimators when we are only able to crawl part of the network and observe the actual degrees of the sampled nodes. We also provide plots and numerical measures to evaluate the performance of our estimators. By applying the estimators and evaluation techniques to several simulation studies, we have the following findings: - When a network has a $c.v.$ in degree distribution much larger than $2$, random walks have strong ability to discover the actual shortest paths between sampled nodes. Therefore we can use the observed SPL between sampled nodes in the induced subgraph to approximate their actual SPL. - When a network has a $c.v.$ in degree distribution much smaller than $2$, random walks don’t have strong ability to discover the actual shortest paths between sampled nodes. Therefore we need to do breadth-first search in the population graph to get the actual SPL, but only to a fraction, such as $30\%$, of the sampled nodes (known as “landmarks”), and use that information to approximate the SPL between other sampled nodes. - The estimation performance improves as sampling budget increases, with dramatic improvement as the sampling budget reaches $20\%$ and moderate improvement beyond that. - If we fix the total sampling budget, such as $20\%$, using a single random walk performs equally well as using multiple random walks. 
- To a small degree, the Horvitz-Thompson ratio estimator outperforms the generalized Hansen-Hurwitz ratio estimator, and one can use the former with a smaller sampling budget to achieve the same estimation accuracy as with the latter. - The estimation performance improves as the network size increases, but tends to be stable once the network is large enough, such as size $n=5000$ or larger. Finally, we apply our estimators to eight real networks with various sizes, degree distributions, and $c.v.$’s. The results from evaluation measures for estimation from real networks support our findings from the simulation studies. Background ========== Preliminary Definitions ----------------------- Let $G=(V, E)$ be a finite graph (network), where $V$ is the set of nodes with $|V|=n$ and $E$ is the set of edges with $|E|=m$. Let $i\in \{1,...,n\}$ denote a node in the graph, and $r \in \{1, ..., N\}$ denote dyad $(i,j)$, $i, j=1,...,n$, $j\neq i$, in the graph, where $N=\binom{n}{2}$ is the number of dyads in the graph. An *induced subgraph* $G^*=(V^*, E^*)$ of $G$ is a graph formed from a subset of the nodes $V^* \subset V$ and all of the edges $E^* \subset E$ connecting pairs of nodes in $V^*$.\ The *adjacency matrix $\boldsymbol{A}$* ([@newman2010networks], p.111) of a graph is the matrix with element $A_{ij}$ such that $$A_{ij}=\left\{ \begin{array}{l l} 1 \text{ if there is an edge from node $i$ to node $j$,}\\ 0 \text{ otherwise.} \end{array} \right.$$ A graph is *undirected* if $A_{ij}=A_{ji}$ for all $i$ and $j$, i.e., the adjacency matrix $\boldsymbol{A}$ is symmetric. In this paper, we only consider undirected networks without self-edges, so the adjacency matrix is symmetric and the diagonal elements are all zero.\ The *degree* ([@newman2010networks], p.133) of node $i$, denoted as $k_i$, in a graph is the number of edges connected to it. For an undirected graph, the degree can be written in terms of the adjacency matrix as $$k_i = \sum_{j=1}^{n}A_{ij} = \sum_{j=1}^{n}A_{ji}.$$ We define $p_k$ to be the fraction of nodes in the network that have degree $k$, and the *degree distribution* to be the collection of the $p_k$’s for $k=0, 1, ..., n-1$. We denote $<k>$ as the first moment and $<k^2>$ as the second moment of the degree distribution.\ A *path* ([@newman2010networks], p.136) in a network is any sequence of nodes such that every consecutive pair of nodes in the sequence is connected by an edge in the network. A graph is *connected* if and only if there exists a path between any pair of nodes. A graph is *primitive* if $A^k>0$ for some positive integer $k<(n-1)n^n$. In a primitive graph, a path of length $k$ exists between every pair of nodes for some positive integer $k$. The *length* ([@newman2010networks], p.136) of a path in a network is the number of edges traversed along the path. The *shortest path* ([@newman2010networks], p.139), also known as a *geodesic path*, is a path between two nodes such that no shorter path exists. The *diameter $L$* of a graph is the longest shortest path between any two nodes. Note that the diameter is finite for connected graphs.\ Let $l_r=l_{ij} \in \{1, ..., L\}$ denote the true *shortest path length (SPL)*, also known as the *geodesic distance*, of dyad $r$ in the population graph $G$. The *mean distance $M$* of a graph is the average of the shortest path lengths of all dyads in the graph.
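Before turning to sampling, it may help to make these quantities concrete. The short sketch below (our illustration; it uses plain Python adjacency lists rather than any particular graph library) computes all SPLs by breadth-first search and summarizes them as the fraction of dyads at each SPL together with the diameter and mean distance defined above. Doing this exhaustively takes one BFS per node, i.e. $O(n(n+m))$ time, which is exactly the cost that motivates estimation by sampling.

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest path lengths (hop counts) from `source` to every reachable node."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def exact_spl_summary(adj):
    """Exhaustive SPL summary for a connected, undirected graph given as adjacency lists.

    Returns (fraction of dyads at each SPL, diameter, mean distance).
    """
    counts = {}
    for s in adj:
        for v, d in bfs_distances(adj, s).items():
            if d > 0:
                counts[d] = counts.get(d, 0) + 1        # each dyad is counted twice
    n_dyads = sum(counts.values()) / 2
    fractions = {l: c / 2 / n_dyads for l, c in sorted(counts.items())}
    diameter = max(counts)
    mean_distance = sum(l * c / 2 for l, c in counts.items()) / n_dyads
    return fractions, diameter, mean_distance

# Usage on a small undirected toy graph.
adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2, 5], 5: [4]}
print(exact_spl_summary(adj))   # ({1: 0.5, 2: 0.3, 3: 0.2}, 3, 1.7)
```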
We define $f_l$ to be the fraction of dyads in the network that have SPL $l$, and the *Shortest Path Length Distribution (SPLD)* to be the collection of the $f_l$’s for $l=1, 2, ..., L$. Random Walk Sampling -------------------- Random walk sampling is a class of network sampling methods that has arisen recently and has been applied widely in large networks, due to its strong ability to ‘crawl’ the network. In this paper, we define a single random walk $\{X_t\}$ with length $t$ ($t$ steps) in a given graph $G=(V,E)$ as follows: 1\) Select a node $u$ with equal probability $1/n$ from $V$; 2\) If node $u$ has $k_u$ neighbors, i.e., node $u$ has degree $k_u$, include one of its neighbors, say $v$, with equal probability $1/k_u$ into the sample; 3\) In turn, conditionally independent of previous steps, one of $v$’s neighbor nodes is selected with equal probability $1/k_v$ from the set of $v$’s neighbors; 4\) Repeat this process until the desired length $t$ of the random walk is reached.\ In the real world, some random walks are self-avoiding, in which case an edge or a node cannot be visited twice. However, in this paper we only consider random walks that are allowed to go along edges more than once, visit nodes more than once, or retrace their steps along an edge just traversed. In other words, we may have duplicates in our random walk sample. Scale-free Networks -------------------- Many of the research papers in graph theory concern Erdős-Rényi random graphs. An *Erdős-Rényi random graph* $G(n, p)$ is a graph with $n$ nodes in which an edge is assigned independently to each pair of distinct nodes with probability $p\in (0,1)$ ([@Kolaczyk2009:SAN:1593430], p.156). By this definition, the degree distribution for an Erdős-Rényi random graph follows a binomial distribution: $$p_k=\binom{n-1}{k}p^k(1-p)^{n-1-k}.$$ As demonstrated by ([@newman2010networks], p.402), in the limit of large $n$, $G(n,p)$ has a Poisson degree distribution: $$\lim_{n\rightarrow\infty}p_k=e^{-c}\frac{c^k}{k!},$$ where $c=(n-1)p$ is the mean degree of $G(n,p)$. By the properties of the Poisson distribution, the variance of the degree distribution is always equal to its mean.\ The model is widely studied because of its simple structure. However, recent empirical results ([@albert2002statistical]) show that for many real-world networks the degree distribution significantly deviates from a Poisson distribution. In particular, for many real-world networks, the degree distribution has a power-law tail $$p_k \propto k^{-\alpha},$$ where $\alpha$ is the *exponent* of the power law. Such networks are called *scale-free*. Typically, the values of $\alpha$ for real networks are in the range $[2,3]$, although values slightly outside this range are possible and are observed occasionally ([@newman2010networks], p.248).\ Scale-free networks possess some unusual properties as compared to other networks. One of the nicest properties is the existence of hubs. The definition of a hub is vague in the literature. In this paper we define a *hub* in a network to be a node whose degree is in the upper tail of the degree distribution. Intuitively, nodes with small degrees are usually connected through hubs. Therefore hubs in a network play an important role in information exchange and in shortening the shortest paths between nodes. As we will discuss in Section 3.1, scale-free networks have a smaller average geodesic distance than other networks.
The existence of hubs is a significant difference between random networks and scale-free networks. In random networks, the expected degree is comparable for every node, and thus few hubs emerge.\ The emergence of hubs can be explained by the growth algorithm of a scale-free network. A widely used model is the *preferential attachment* model ([@albert2002statistical]):\ The network begins with an initial connected network of $m_0$ nodes. New nodes are added to the network one at a time. Each new node is connected to $m \leq m_0$ existing nodes with a probability that is proportional to the number of edges that the existing nodes already have. Formally, the probability that the new node is connected to node $i$ is $\frac{k_i}{\sum_{j}k_j}$, where $k_i$ is the degree of node $i$ and the sum is taken over all pre-existing nodes $j$. Numerical simulations ([@albert2002statistical]) indicated that this network evolves into a scale-free network with $\alpha=3$.\ In Figure \[tab:SF\_ER\] below, we illustrate the comparison between scale-free networks and Erdős-Rényi random graphs.\ ![**Scale-free network vs. Erdős-Rényi random graphs.** ([@barabasi2014])[]{data-label="tab:SF_ER"}](SF_ER.png) 1. Comparing a Poisson function with a power-law function ($\alpha=2.1$) on a linear plot. Both distributions have $<k>=11$. 2. The same curves as in (a), but shown on a log-log plot, allowing us to inspect the difference between the two functions in the high-$k$ regime. 3. An Erdős-Rényi random network with $<k>=3$ and $n = 50$, illustrating that most nodes have comparable degree around $<k>$. The variation in degrees is very small. 4. A scale-free network with $\alpha=2.1$ and $<k>=3$, illustrating that numerous small-degree nodes coexist with a few highly connected hubs. The size of each node is proportional to its degree, therefore the large ones are hubs in the network. The Horvitz-Thompson Estimator and the Hansen-Hurwitz Estimator --------------------------------------------------------------- Suppose we have a population of elements $\{1, 2, ..., M\}$ and $y_i$ is the characteristic of interest associated with element $i$, $i=1, ..., M$. Let $t_y=\sum_{i=1}^{M}y_i$ denote the total of the $y_i$’s. In order to estimate $t_y$ from samples of elements selected with unequal probabilities, we can use the Horvitz-Thompson estimator for samples drawn without replacement and the Hansen-Hurwitz estimator for samples drawn with replacement.\ Suppose a sample of size $m$ is drawn without replacement from the population, and the inclusion probability for element $y_i$ is $\pi_i>0$. Let $Z_i$ be an indicator variable such that $Z_i=1$ if element $i$ is in the sample and 0 otherwise. The Horvitz-Thompson estimator ([@horvitz1952generalization]) of the population total $t_y$ is $$\hat{t}_y^{HT} = \sum_{i=1}^{M}\frac{Z_i y_i}{\pi_i},$$ with mean $$E(\hat{t}_y^{HT}) = t_y,$$ and variance $$Var(\hat{t}_y^{HT}) = \sum_{i=1}^{M}\sum_{k>i}^{M}(\pi_i\pi_k - \pi_{ik})(\frac{y_i}{\pi_i}-\frac{y_k}{\pi_k})^2,$$ where $\pi_{ik}$ is the joint inclusion probability of elements $i$ and $k$. Next suppose a sample of size $m$ is drawn with replacement in $m$ independent draws from the population, and that on each draw the probability of selecting element $y_i$ is $\beta_i$.
Let $Q_i$ denote the number of times element $y_i$ is selected in the sample, so that $Q_1, ..., Q_M \sim \text{multinomial}(\beta_1, ..., \beta_M;m)$, $E(Q_i) = m\beta_i$, and $\sum_{i=1}^{M}Q_i=m$. The Hansen-Hurwitz estimator ([@hansen1943theory]) of the population total $t_y=\sum_{i=1}^{M}y_i$ is $$\hat{t}_y^{HH} = \frac{1}{m}\sum_{i=1}^{M}\frac{Q_i y_i}{\beta_i},$$ with mean $$E(\hat{t}_y^{HH}) = t_y,$$ and variance $$Var(\hat{t}_y^{HH}) = \frac{1}{m}\sum_{i=1}^{M}\beta_i(y_i/\beta_i-t_y)^2.$$ More generally, we will consider sample selections that could be dependent, with varying selection probabilities across draws. Thus, we define a more general form of $\hat{t}_y^{HH}$ as $$\hat{t}_y^{GHH} = \sum_{i=1}^{M}\frac{Q_i y_i}{E(Q_i)}. \label{GHH}$$ This is always unbiased for $t_y$ as long as $E(Q_i)>0$ for all $i$. The variance of $\hat{t}_y^{GHH}$ can be estimated if the sample is selected with replication.\ Note that we can also estimate the total from a sample obtained by sampling with replacement by a Horvitz-Thompson estimator. If we reduce the sample obtained by sampling with replacement to a subsample by excluding the duplicates, we will get a subsample consisting of distinct elements from the population, which is analogous to a sample obtained by sampling without replacement but with a random sample size. Therefore we can apply the idea of estimating the population total by the Horvitz-Thompson estimator to the subsample, provided we can calculate the $\pi_i$ terms. Related Work ============ The Small World Effect ---------------------- One of the most interesting and widely studied network phenomena is *the small world effect*: in many networks, the distances between nodes are surprisingly small. The first empirical study of this phenomenon goes back to Stanley Milgram’s letter-passing experiment in the 1960s, in which he asked each of the randomly chosen “starter” individuals to try forwarding a letter to a designated “target” person living in the town of Sharon, MA, a suburb of Boston. It turned out that the letters made it to the target in a remarkably small number of steps, around six on average. Therefore, this phenomenon is also called “six degrees of separation”.\ With complete network data and measuring methods available these days, it is possible to measure or estimate the distances between nodes, and the small world effect has been verified explicitly. In mathematical terms, the small-world effect is the condition that the mean distance $M$ is small. In fact, following the mathematical models, the mean distance for Erdős-Rényi random graphs was shown to scale as $\log n$ ([@newman2010networks], p.422).\ Moreover, analytical results have shown that the mean distances for scale-free networks are even smaller. [@chung2002average] showed that for certain families of random graphs with given expected degrees, the average distance is almost surely of order $\log n / \log \tilde{d}$. Here $\tilde{d}$ denotes the second-order average degree defined by $\tilde{d} = \frac{\sum w_i^2}{\sum w_i}$, where $w_i$ denotes the expected degree of the $i^{th}$ node. More specifically, for scale-free networks with $\alpha>3$, they proved that the average distance is almost surely of order $\log n / \log \tilde{d}$.
However, many Internet, social, and citation networks are scale-free networks with exponents in the range $2<\alpha<3$, for which the mean distance is almost surely of order $\log \log n$, while the diameter is of order $\log n$ (subject to some mild constraints on the average distance and maximum degree; see [@chung2002average] for details). This was followed by [@cohen2003scale], who showed, using an analytical argument, that the mean distance satisfies $M \sim \log \log n$ for $2<\alpha<3$, $M \sim \log n /\log \log n$ for $\alpha=3$, and $M \sim \log n$ for $\alpha>3$.\ To summarize, the small world effect on scale-free networks with $2 < \alpha < 3$ yields the useful property that the mean distance and the diameter are of order $\log \log n$ and $\log n$, respectively. For instance, a scale-free network of size $n=10000$ has a diameter of only around $9$. A small diameter leads to a small range of SPL, and thus it is practical to estimate the SPLD, which gives the fraction of dyads at each possible value of SPL. Shortest Path Length Distribution --------------------------------- The shortest paths are of particular importance because they are likely to provide the fastest and strongest interaction between nodes in a network ([@katzav2015analytical]). Up to now, measures such as the diameter and the mean distance have been studied extensively, but the entire shortest path length distribution (SPLD) has apparently attracted little attention. This distribution is of great importance as it is closely related to dynamic properties such as the velocities of network spreading processes ([@bauckhage2013weibull]). More specifically, it plays a key role in the temporal evolution of dynamical processes on networks, such as signal propagation, navigation, and epidemic spreading ([@pastor2001epidemic]).\ [@katzav2015analytical] presented two complementary analytical approaches for calculating the distribution of shortest path lengths in Erdős-Rényi networks, based on recursion equations for the shells around a reference node and for the paths originating from it. However, Erdős-Rényi graphs are not widely observed in real networks and are often only of research interest because of their simple structure. In practice, we are more interested in a wider class of networks.\ Other researchers such as [@bauckhage2013weibull] have characterized shortest path histograms of networks by Weibull distributions. Empirical tests with different graph topologies, including scale-free networks, have confirmed their theoretical prediction. However, each real network has its own parameter values of the Weibull distribution, and it is hard to find those values without full access to the network. Moreover, even though the shortest distance between any pair of nodes in a network can be measured, doing so for all pairs is very time-consuming when the network is large ([@potamias2009fast]). Therefore, in this paper, we consider estimating the SPLD of a population graph from sample data generated by random walks. Ability of Random Walks to Recover Shortest Paths ------------------------------------------------- The strong ability of random walks to discover the shortest paths in networks with large degree variability was shown by [@ribeiro2012multiple].
They found that the ability of random walks to find shortest paths bears no relation to the particular paths they take, but instead relies on the large variance of the degree distribution of the network.\ They proved two important results for networks with large degree variability. First, even with a relatively small number of steps, a single random walk is able to traverse a large fraction of edges. Let $<k^r>$ denote the $r^{th}$ moment of the degree distribution. They showed that for a single random walk with $t$ steps, the number of edges discovered by the random walk is approximately $\frac{<k^2>-<k>}{<k>}t$, which is very large for networks with large variance in degree distribution. Second, two random walks cross with high probability after a small percentage of nodes have been visited. The first result indicates that the observed SPLs in the induced subgraph are very likely to be the true SPLs in the population. With a large fraction of edges visited by the random walk, the true shortest paths are very likely to be observed. The second result implies that a single random walk has the potential to explore a large area of the population network, instead of staying in a small area close to its starting point. This property provides the possibility of using a single random walk to uncover the true SPLs. We will verify this property in section 5.3. These observations provide the possibility of using random walks to uncover shortest paths in networks with large degree variability.\ Their simulation results on some real networks are also very promising. For most real-world networks they tested, more than $65 \%$ of the shortest paths observed in the sampled graph by random walk sampling are the true shortest paths in the parent graph, and more than $90 \%$ of the shortest paths observed in the sampled graph by random walk sampling are within one hop of the true shortest paths in the parent graph. The only exception is a network whose degree variability, measured by $\frac{<k^2>-<k>}{<k>}$, is much smaller than that of the other networks. Estimating Shortest Distances by Landmarks ------------------------------------------ Computing the shortest distance, i.e., the length of the shortest path between arbitrary pairs of nodes, has been a prominent problem in computer science. In an unweighted graph with $n$ nodes and $m$ edges, the shortest distances between one node and all other nodes can be computed by the Breadth First Search (BFS) algorithm in time $O(m+n)$ ([@potamias2009fast]). To measure the distances between all pairs of nodes, one can implement the BFS algorithm $n$ times in time $O(n^2 + mn)$, which is quadratic in the number of nodes. Therefore, in large networks, computing the exact shortest distances between all pairs of nodes is computationally expensive. To improve the efficiency, several fast approximation algorithms have been developed recently.\ Most of the approximation algorithms are landmark-based methods. They start by selecting a small set of nodes called landmarks. Then the actual distances from each landmark to all other nodes in the graph are computed by BFS and stored in memory. By using the precomputed shortest distances from the landmarks, the distance between an arbitrary pair of nodes can be computed in almost constant time. The algorithm proposed by [@potamias2009fast] is one of the landmark-based methods for quickly estimating the length of the point-to-point shortest path.\ Their algorithm is based on the triangle inequalities for the geodesic distance.
That is, given any three nodes $s$, $u$, and $t$, the geodesic distances between them satisfy the following inequalities: $$l_{st} \leq l_{su} + l_{ut}, \label{upper}$$ $$l_{st} \geq |l_{su} - l_{ut}|.$$ Note that if $u$ lies on one of the shortest paths from $s$ to $t$, then inequality (\[upper\]) holds with equality.\ In the pre-computing step, a set of $d$ landmarks $D$ is selected from the graph, and the actual distances between each landmark and all other nodes are computed by BFS. In the estimating step, by the above inequalities, the actual geodesic distance between nodes $s$ and $t$ satisfies: $$L \leq l_{st} \leq U,$$ where $$L = \max_{j \in D}|l_{sj} - l_{jt}|,$$ $$U = \min_{i \in D}\{l_{si} + l_{it}\}.$$ Based on their experiments, [@potamias2009fast] proposed simply using the upper bound $U$ as an estimate of the geodesic distance. That is, $$l_{st} \approx \min_{i \in D}\{l_{si} + l_{it}\}.$$ This algorithm takes $O(d)$ time to approximate the distance between a pair of nodes and requires $O(dm+dn)$ space for the pre-computation data.\ Note that the approximation will be very precise if many shortest paths pass through the landmarks. That is, the best set of landmarks consists of the most “central” nodes in the graph, and more specifically, the nodes with high betweenness centralities. In a graph $G$, let $n_{st}^i$ be the number of shortest paths between nodes $s$ and $t$ passing through node $i$, and let $g_{st}$ be the total number of shortest paths between $s$ and $t$; the *betweenness centrality* of node $i$ is then defined as $\sum_{st}\frac{n_{st}^i}{g_{st}}$. Intuitively, it measures the fraction of shortest paths passing through node $i$. Generally, nodes with high degrees usually have high betweenness centralities, but nodes with high betweenness centralities do not always have high degrees. One example would be a graph consisting of two clusters which are connected through a single node. The connecting node has degree only $2$, but its betweenness centrality is very high.\ Measuring the betweenness centrality of a node requires information about the shortest paths between all pairs of nodes, which cannot be obtained from the sample. As an alternative, [@potamias2009fast] proposed two basic strategies based on other centrality measures for selecting landmarks: (i) high degree nodes and (ii) nodes with high estimated *closeness centrality*, where the closeness centrality is the inverse of the average distance from a node to all other nodes. They defined the estimation error to be the average of $|\hat{l}-l|/l$ across all pairs of sampled nodes, where $l$ is the actual distance and $\hat{l}$ is the approximation. Regarding the size of the set of landmarks, they found from applications to real networks that, with $100$ landmarks, the estimation error is less than $10\%$ in $3$ of the $5$ real networks, and between $10\%$ and $20\%$ in the other $2$. Proposed Method =============== Intuition --------- Recall that in a scale-free network, most nodes with small degrees are connected through hubs. Our approach is based on the following intuition: random walks in scale-free networks usually take steps along the shortest paths between pairs of nodes. This behavior is attributable to the existence of hubs.\ Consider an extreme case of a network with only one hub to which all other nodes are connected.
Then the random walk always returns to the hub before moving to another node, and it therefore follows the shortest path of length 2 between the nodes visited before and after the hub. Next consider a network with multiple hubs, but still, all other nodes are connected only to the hubs. In this case a random walk starting from any node must go back to the hub to which that node is connected in order to reach another node, which again forces the random walk to travel along shortest paths.\ More generally, if there are some but very few connections between nodes which are not hubs, a random walk might traverse a path that is not the shortest path between two nodes, but the chance of this is small. Figure \[RW\] shows how multiple random walks recover shortest paths in a scale-free network. ![**Illustration of a RW sample path.** The green nodes are the starting nodes, the blue nodes are nodes visited by the random walkers, and the purple edges are the edges used by the walkers to explore the graph.([@ribeiro2012multiple])[]{data-label="RW"}](RW.png) Problem Definition ------------------ Consider a connected and undirected network $G=(V,E)$ with $n$ nodes, $m$ edges, and diameter $L$. Then the shortest path length distribution (SPLD) of $G$ is defined as $$f_l = \frac{N_l}{N}, \text{ } l=1,...,L,$$ where $N_l$ is the number of dyads with SPL $l$, and $N=\sum_{l=1}^{L}N_l=\binom{n}{2}$ is the total number of dyads (pairs of nodes) in $G$. Sampling Algorithm ------------------ For a given network $G=(V,E)$, we first take a simple random sample of $H$ distinct nodes $U=\{u_1,..., u_H\}$, and start a random walk from each of them. The $H$ random walks proceed independently of one another after the starting nodes are chosen. We define the sampling budget, denoted by $\beta, \text{ } 0<\beta<1$, to be the ratio of the total number of steps of the $H$ random walks to the network size $n$, and let each random walk take $B=\beta n/H$ steps.\ Let $X(h)=(X_1^{(h)},...,X_B^{(h)}), \text{ } h=1,...,H$, denote the sequence of nodes visited by the $h^{th}$ walker. Let $V(h)$ denote the set of distinct nodes visited by the $h^{th}$ walker, and $|V(h)|$ denote the number of nodes in set $V(h)$. Note that $|V(h)| \leq B$ as a node can be revisited during the random walk. Let $E(h)$ denote the set of edges in $E$ that have both endpoints in $V(h)$.\ Let $V^*=\bigcup_{h=1}^{H}V(h)$ denote the set of distinct nodes visited by any of the $H$ random walks, and $E^*$ denote the set of edges in $E$ that have both of their endpoints in $V^*$. Then $G^*=(V^*, E^*)$ is the induced subgraph obtained by connecting nodes in $V^*$ using edges in $E^*$. The observed shortest path length between any two sampled nodes will be measured from $G^*$. Estimating Method ----------------- In order to estimate the fraction $f_l$ of dyads with SPL $l$, we first need to estimate $N_l$, the number of dyads with SPL $l$ in the population graph. Let $\hat{N}_l$ denote the estimate of $N_l$; then $f_l$ can be estimated by $\hat{f}_l = \frac{\hat{N}_l}{N}$. Note that sometimes we want to use a ratio estimator $\hat{f}_l^r = \frac{\hat{N}_l}{\hat{N}}$, in which case we also estimate $N$, the total number of dyads in the population graph.\ ### The Unweighted Estimator A naive approach to estimating the population SPLD is to simply use the SPLD of the induced subgraph $G^*$ as an estimate.
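As a concrete illustration of the sampling algorithm of section 4.3 and of this naive approach, the sketch below runs $H$ random walks with a total budget of roughly $\beta n$ steps, forms the induced subgraph $G^*$, and tabulates its SPLD. This is a minimal sketch, assuming networkx; the function names are ours, the starting nodes are drawn independently rather than as a simple random sample of distinct nodes, and only dyads connected within $G^*$ are counted.

```python
import random
from collections import Counter
import networkx as nx

def random_walk(G, length, rng):
    """Single random walk: uniform starting node, then a uniform neighbor at each step."""
    walk = [rng.choice(list(G.nodes()))]
    for _ in range(length - 1):
        walk.append(rng.choice(list(G.neighbors(walk[-1]))))
    return walk

def unweighted_spld(G, H=5, beta=0.2, seed=0):
    """SPLD of the induced subgraph G* obtained from H walks of B = beta*n/H steps each."""
    rng = random.Random(seed)
    B = int(beta * G.number_of_nodes() / H)
    visited = set()
    for _ in range(H):
        visited.update(random_walk(G, B, rng))
    G_star = G.subgraph(visited)                      # induced subgraph G*
    counts = Counter()
    for _, dists in nx.all_pairs_shortest_path_length(G_star):
        counts.update(dists.values())                 # each dyad counted twice; ratios unaffected
    counts.pop(0, None)                               # drop zero-length self pairs
    total = sum(counts.values())
    return {l: c / total for l, c in sorted(counts.items())}

G = nx.barabasi_albert_graph(1000, 3, seed=1)         # scale-free test network
print(unweighted_spld(G))
```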
Let $N_l^*$ denote the number of dyads with SPL $l$ in $G^*$, and let $N^*$ denote the total number of dyads in $G^*$. The unweighted estimator for $f_l$ is then $$\hat{f}_l^{uw} = \frac{N_l^*}{N^*}, \text{ } l=1,...,L. \label{UW_Nl}$$ However, this simple estimator may suffer from two sources of bias. First, the dyads are sampled with unequal probabilities due to the nature of random walk sampling. More specifically, dyads with shorter SPLs are more likely to be sampled than those with longer SPLs. Therefore, with the unweighted estimator, $f_l$ for small values of $l$ is likely to be overestimated, and $f_l$ for large values of $l$ is likely to be underestimated. Second, the observed SPL in $G^*$ might be longer than the actual SPL in $G$, and thus $f_l$ for small values of $l$ is likely to be underestimated, and $f_l$ for large values of $l$ is likely to be overestimated.\ As discussed in Sections 3.3 and 4.1, the bias from not observing the actual SPL is negligible in networks with large degree variability. We will discuss this issue in detail in section 4.4.4. In sections 4.4.2 and 4.4.3, we will develop estimators that deal with the unequal selection probabilities of dyads. ### The Hansen-Hurwitz Estimator Let $s = \{X(1), X(2), ..., X(H)\}$ denote the set of sequences of nodes visited by the $H$ random walks, including duplicates, and let $|s|=H\cdot B$ denote the size of $s$. Let $I(X^{(h)}_b=i)$ denote an indicator variable taking the value 1 if node $i$ is visited at the $b^{th}$ step of the $h^{th}$ random walk, and zero otherwise. Let $q_i=\sum_{h=1}^{H}\sum_{b=1}^{B}I(X^{(h)}_b=i), \text{ } i=1,...,n$ denote the number of times node $i$ appears in sample $s$, and define $\phi_i=E(q_i)/|s|$. We assume $0< E(q_i)< |s|$ $\forall i$, and thus $0 < \phi_i < 1$ $\forall i$. Since $\sum_{i=1}^{n}q_i=|s|$, $\sum_{i=1}^{n}\phi_i=1$. Therefore, the $\phi_i$’s form a probability distribution over the $n$ nodes.\ Let $r, \text{ } r=1,...,N$ represent dyad $(i,j), \text{ } i=1,...,n-1, \text{ } j=i+1,...,n$ in the population graph. Let $S = \{(X_{b_1}^{(h_1)}, X_{b_2}^{(h_2)}):h_1, h_2 \in \{1,...,H\}, b_1, b_2 \in \{1,...,B\}, X_{b_1}^{(h_1)}\neq X_{b_2}^{(h_2)}\}$ denote the set of dyads whose members are any two distinct nodes in $s$. That is, $S$ is the sequence of dyads visited by the $H$ random walks, including duplicates. Define $Q_r=q_iq_j, \text{ } i=1,...,n-1, \text{ } j=i+1,...,n$ as the number of times dyad $r$ appears in sample $S$, and let $|S|=\sum_{r=1}^{N}Q_r$ denote the size of $S$. Notice that there may be duplicates in the sample of nodes $s$, but for a dyad we only include pairs consisting of two distinct nodes; therefore $|S|$ is a random variable with $|S| = \binom{|s|}{2}-\sum_{i=1}^{n}\binom{q_i}{2}$. Define $\psi_r=\frac{E(Q_r)}{E(|S|)}$ and assume $0<E(Q_r)<E(|S|)$ $\forall r$, so that $0 < \psi_r < 1$ $\forall r$. Since $\sum_{r=1}^{N}Q_r=|S|$, $\sum_{r=1}^{N}\psi_r=1$. Therefore, the $\psi_r$’s form a probability distribution over the $N$ dyads.\ Let $l_r\in\{1,...,L\}$ denote the true SPL of dyad $r$ in the population graph. Let $y_r^l$, $r \in \{1, ..., N\}$ and $l \in \{1, ..., L\}$, denote an indicator variable taking value $y_r^l=1$ if $l_r=l$ and zero otherwise.
Thus $N_l=\sum_{r=1}^{N}y_r^l$ is the number of dyads with SPL $l$ in the population, and $N=\sum_{l=1}^{L}\sum_{r=1}^{N}y_r^l$ is the total number of dyads in the population.\ According to (\[GHH\]), the generalized Hansen-Hurwitz estimator for $N_l$ is $$\hat{N}_l^{GHH} = \frac{1}{|S|}\sum_{r=1}^{N}\frac{Q_ry_r^l}{\psi_r}, \text{ } l=1,...,L. \label{HH_Nl}$$ The generalized Hansen-Hurwitz estimator for $N$ is $$\hat{N}^{GHH} = \frac{1}{|S|}\sum_{r=1}^{N}\frac{Q_r}{\psi_r}. \label{HH_N}$$ In order to apply (\[HH\_Nl\]) and (\[HH\_N\]) we need to compute or estimate $\psi_r$. We first recall some definitions and results for Markov chains. We call a sequence of random variables $\{X_t: t=1, 2, ...\}$ a *discrete-time Markov chain (DTMC)* if it satisfies $$P(X_{t+1}=i_{t+1}|X_t=i_t, X_{t-1}=i_{t-1}, .., X_1=i_1)=P(X_{t+1}=i_{t+1}|X_t=i_t),$$ for all $t\geq1$ and $i_1, i_2, ..., i_{t+1} \in \Omega$, where $\Omega$ is a finite or countable state space.\ A DTMC is *finite* if $\Omega$ is finite. A DTMC is *homogeneous* if it satisfies $$P(X_{t+1}=j|X_t=i)=P_{i,j} \text{ for all } i, j\in \Omega, \text{ independent of } t.$$ We call the probabilities $P_{i, j}$ the *transition probabilities*. Let $\boldsymbol{P}$ denote the matrix with element $P_{i, j}$ in its $i^{th}$ row and $j^{th}$ column. We call $\boldsymbol{P}$ the *transition matrix* for a homogeneous DTMC. Since we will only consider finite DTMCs in this paper, we denote $\Omega = \{1, 2, ..., n\}$ for simplicity.\ Let $p_i(t)$ denote the probability that $\{X_t\}$ is in state $i$ at time $t$, and let $\boldsymbol{p}(t)=(p_1(t), p_2(t), ..., p_n(t))^T$ denote the vector of probabilities. For a finite homogeneous DTMC we have $$\boldsymbol{p}^T(t+1)= \boldsymbol{p}^T(t)\boldsymbol{P}.$$ A probability vector $\boldsymbol{p}=(p_1, p_2, ..., p_n)^T$ is called a *stationary distribution* for a homogeneous DTMC with transition matrix $\boldsymbol{P}$, if it satisfies $$\boldsymbol{p}^T=\boldsymbol{p}^T\boldsymbol{P}.$$ State $j$ is said to be *accessible* from state $i$ if $P^n_{i,j}>0$ for some $n\geq 0$. If state $i$ is accessible from state $j$ and state $j$ is accessible from state $i$, $i$ and $j$ are said to *communicate*. A DTMC is called *irreducible* if all of its states communicate with each other. A state $i$ is *aperiodic* if the greatest common divisor of $\{n\geq1: P_{i,i}^n>0\}$ is $1$. A DTMC is called *aperiodic* if all of its states are aperiodic.\ - **Proposition 1:** ([@newman2010networks] p.157-159) A single random walk $\{X_t\}$ on a graph $G=(V,E)$ of size $n$ is a finite homogeneous DTMC with a stationary distribution $\boldsymbol{p}=(\frac{k_1}{K}, ..., \frac{k_n}{K})^T$, where $K=\sum_{w}k_w$. - **Proof:** Consider a random walk $\{X_t\}$ that starts at a certain node and takes $t$ steps. Suppose $\{X_t\}$ is at node $i$ at time $t-1$; then, by the definition of random walk sampling in section 2.2, the probability that it will be at node $j\neq i$ at time $t$ is $1/k_i$, provided that $i$ is connected to $j$, i.e., $A_{ij}=1$. That is $$P(X_t=j|X_{t-1}=i)=\frac{A_{ij}}{k_i}.$$ Therefore, $\{X_t\}$ is a homogeneous DTMC with finite state space $\{1, 2, ..., n\}$ and transition probabilities $P_{i, j}=\frac{A_{ij}}{k_i}$.
Let $\boldsymbol{P}$ denote the transition matrix of $\{X_t\}$; then $\boldsymbol{P}=\boldsymbol{D}^{-1}\boldsymbol{A}$, where $\boldsymbol{D}$ is the diagonal matrix with diagonal elements $k_i$, $i=1,...,n$.\ Let $\boldsymbol{p} = (\frac{k_1}{K}, \frac{k_2}{K}, ..., \frac{k_n}{K})^T$, where $K=\sum_{w}k_w$. $$\begin{aligned} \boldsymbol{p}^T\boldsymbol{D}^{-1}\boldsymbol{A} & = \begin{pmatrix} \frac{k_1}{K} & \frac{k_2}{K} & ... & \frac{k_n}{K} \end{pmatrix} \begin{pmatrix} \frac{A_{11}}{k_1} & \frac{A_{12}}{k_1} & ... & \frac{A_{1n}}{k_1} \\ \frac{A_{21}}{k_2} & \frac{A_{22}}{k_2} & ... & \frac{A_{2n}}{k_2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{A_{n1}}{k_n} & \frac{A_{n2}}{k_n} & ... & \frac{A_{nn}}{k_n}\\ \end{pmatrix}\\ & = \begin{pmatrix} \sum_{i=1}^{n}\frac{k_i}{K}\frac{A_{i1}}{k_i} & \sum_{i=1}^{n}\frac{k_i}{K}\frac{A_{i2}}{k_i} & ... & \sum_{i=1}^{n}\frac{k_i}{K}\frac{A_{in}}{k_i} \end{pmatrix}\\ & = \begin{pmatrix} \frac{1}{K}\sum_{i=1}^{n}A_{i1} & \frac{1}{K}\sum_{i=1}^{n}A_{i2} & ... & \frac{1}{K}\sum_{i=1}^{n}A_{in} \end{pmatrix}\\ & = \begin{pmatrix} \frac{k_1}{K} & \frac{k_2}{K} & ... & \frac{k_n}{K} \end{pmatrix} = \boldsymbol{p}^T.\end{aligned}$$ That is, $\boldsymbol{p}^T = \boldsymbol{p}^T\boldsymbol{P}$. Since $p_i>0$ and $\sum_{j}p_j=1$, $\boldsymbol{p}$ is a stationary distribution for $\{X_t\}$.\ - **Proposition 2:** If $G$ is connected and has at least one triangle, the finite homogeneous DTMC $\{X_t\}$ from **Proposition 1** is irreducible and aperiodic. - **Proof:** Since $G$ is connected, any node in the graph is accessible from any other node. That is, all states of $\{X_t\}$ communicate with each other, and thus $\{X_t\}$ is irreducible. Any node in $G$ is either in a triangle or not. Suppose $i$ is any node in a triangle; then, starting from itself, $i$ can be reached in either 2 steps or 3 steps, that is, $P_{i,i}^2>0$ and $P_{i,i}^3>0$. Therefore $i$ is an aperiodic state. Consider any node $j$ which is not in a triangle and suppose that its shortest distance to node $i$ is $l$; then, starting from itself, $j$ can be reached in either $2l+2$ steps or $2l+3$ steps, that is, $P_{j,j}^{2l+2}>0$ and $P_{j,j}^{2l+3}>0$. Therefore $j$ is also an aperiodic state. Since all states in $\{X_t\}$ are aperiodic, $\{X_t\}$ is aperiodic.\ - **Proposition 3:** If a single random walk $\{X_t\}$ initiates from its stationary distribution $\boldsymbol{p}$ on a connected graph $G$ with at least one triangle, then $\phi_i = E(q_i)/t = k_i/K$, and $\lim_{t \rightarrow \infty}\psi_r=\alpha k_ik_j$, where $\alpha=2[(\sum_{w}k_w)^2-\sum_{w}k_w^2]^{-1}$, and $K = \sum_{w=1}^{n}k_w$. - **Proof:** Let $\boldsymbol{q} = (q_1, q_2, ..., q_n)^T$, where $q_i$ is the number of times node $i$ appears in the sample, and $\boldsymbol{p} = (p_1, p_2, ..., p_n)^T$, where $p_i = \frac{k_i}{K}$.
According to Anderson’s (1989) results for irreducible and aperiodic Markov chains, $$E(\boldsymbol{q})=\boldsymbol{p}t, \label{Anderson1}$$ and $$\lim_{t\rightarrow\infty}\frac{Cov(\boldsymbol{q})}{t}= C, \label{Anderson2}$$ where $C$ is a square matrix with constant elements.\ From (\[Anderson1\]), we have $\frac{E(q_i)}{t} = \frac{k_i}{K}$, for $i=1, ..., n$.\ In general $a_n=O(b_n)$ indicates $\lim_{n \rightarrow \infty}a_n/b_n=c$, where $c$ is a constant, and $a_n=o(b_n)$ indicates $\lim_{n \rightarrow \infty}a_n/b_n=0$, so we have $$Cov(q_i, q_j)=o(t^2), \text{ and } Var(q_i)=o(t^2) \text{ } \forall i.$$ The expected number of times dyad $r$ appears in sample $S$ is $$\begin{aligned} E(Q_r) = E(q_iq_j) = E(q_i)E(q_j) + Cov(q_i, q_j) = p_ip_jt^2+o(t^2).\end{aligned}$$ The expected number of dyads (including duplicates) in sample $S$ is $$\begin{aligned} E(|S|) &= \binom{t}{2}-\sum_{i=1}^{n}E(\frac{q_i(q_i-1)}{2})\\ &=\binom{t}{2}-\frac{1}{2}\sum_{i=1}^{n}(E(q_i^2)-E(q_i)) \\ &=\binom{t}{2}-\frac{1}{2}\sum_{i=1}^{n}(E^2(q_i)-E(q_i)+Var(q_i)) \\ &=\binom{t}{2}-\frac{1}{2}\sum_{i=1}^{n}tp_i(tp_i-1)+o(t^2)\\ &= \frac{1}{2}t(t-1) -\frac{1}{2}(t^2\sum_{i=1}^{n}p_i^2-t) + o(t^2)\\ & = \frac{1}{2}(1-\sum_{i=1}^{n}p_i^2)t^2 + o(t^2) \end{aligned}$$ In the long run, the expected fraction of times dyad $r$ appears in sample $S$ is $$\begin{aligned} \lim_{t\rightarrow\infty}\psi_r &= \lim_{t\rightarrow\infty} \frac{E(Q_r)}{E|S|}\\ & = \lim_{t\rightarrow\infty} \frac{2p_ip_jt^2 + o(t^2)}{(1-\sum_{i=1}^{n}p_i^2)t^2+o(t^2)}\\ & = \frac{2p_ip_j}{1-\sum_{i=1}^{n}p_i^2}\\ &=\frac{2\frac{k_ik_j}{(\sum_{w}k_w)^2}}{1-\frac{\sum_{w}k_w^2}{(\sum_{w}k_w)^2}}\\ &=\frac{2k_ik_j}{(\sum_{w}k_w)^2-\sum_{w}k_w^2} \end{aligned}$$ For simplicity we can write $\lim_{t \rightarrow \infty}\psi_r=\alpha k_ik_j$, where $\alpha=2[(\sum_{w}k_w)^2-\sum_{w}k_w^2]^{-1}$.\ Therefore, the generalized Hansen-Hurwitz estimator for $N_l$ is $$\hat{N}_l^{GHH} =\frac{1}{|S|}\sum_{r=1}^{N}\frac{Q_ry_r^l}{\alpha k_i k_j}, \text{ } l=1,...,L,$$ and the generalized Hansen-Hurwitz estimator for $N$ is $$\hat{N}^{GHH} = \frac{1}{|S|}\sum_{r=1}^{N}\frac{Q_r}{\alpha k_i k_j}.$$ The generalized Hansen-Hurwitz estimator for the fraction of dyads with SPL $l$ is $$\hat{f}_l^{GHH} = \frac{\hat{N}_l^{GHH}}{N}=\frac{\sum_{r=1}^{N}\frac{Q_ry_r^l}{\alpha k_ik_j}}{|S|N}, \text{ } l=1,...,L,$$ and the generalized Hansen-Hurwitz ratio estimator for the fraction of dyads with SPL $l$ is $$\hat{f}_l^{GHH.r} = \frac{\hat{N}_l^{GHH}}{\hat{N}^{GHH}}=\frac{\sum_{r=1}^{N}\frac{Q_ry_r^l}{k_ik_j}}{\sum_{r=1}^{N}\frac{Q_r}{k_ik_j}}, \text{ } l=1,...,L. \label{HH.ratio}$$ ### The Horvitz-Thompson Estimator In the Hansen-Hurwitz estimator illustrated above, we take the average over all observed dyads, including duplicates, to estimate $N_l$ and $N$. Alternatively, we can consider applying the Horvitz-Thompson estimator to the subsample obtained by excluding duplicate observations.\ Let $s^* = V^*$ denote the set of distinct nodes visited by the $H$ random walks, and let $|s^*| = |V^*|$ denote the sample size of $s^*$. Since $s^*$ is derived from $s$ by excluding the duplicates, $|s^*|$ is a random variable depending on $s$. Let $z_i, \text{ } i=1,...,n$ denote the number of times node $i$ appears in sample $s^*$. In our case $z_i$ is an indicator variable such that $z_i=1$ if $i\in s^*$ and zero otherwise.
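As a brief aside before completing the Horvitz-Thompson construction: the generalized Hansen-Hurwitz ratio estimator (\[HH.ratio\]) derived above can be computed from the walk sample and the observed degrees alone, since the constant $\alpha$ cancels. The sketch below is a minimal illustration (assuming networkx; the function name is ours) that, as in section 4.4.4, uses observed SPLs in the induced subgraph in place of the true SPLs and skips dyads that are not connected within $G^*$.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

def ghh_ratio_spld(G, walks):
    """Generalized Hansen-Hurwitz ratio estimate of the SPLD from a list of random walks."""
    q = Counter(node for walk in walks for node in walk)      # visit counts q_i
    G_star = G.subgraph(q)                                    # induced subgraph G*
    spl = dict(nx.all_pairs_shortest_path_length(G_star))     # observed SPLs in G*
    num, den = Counter(), 0.0
    for i, j in combinations(q, 2):                           # dyads of distinct sampled nodes
        l = spl.get(i, {}).get(j)
        if l is None:                                         # dyad not connected within G*; skip
            continue
        w = q[i] * q[j] / (G.degree(i) * G.degree(j))         # Q_r / (k_i k_j); alpha cancels
        num[l] += w
        den += w
    return {l: v / den for l, v in sorted(num.items())}
```

Paired with walks generated as in the sampling sketch of section 4.3, this returns $\hat{f}_l^{GHH.r}$ directly.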
Let $\tau_i=E(z_i)$ denote the inclusion probability of node $i$ in the subsample $s^*$, which is indeed the probability that node $i$ ever appears in sample $s$. Since $\sum_{i=1}^{n}z_i=|s^*|$, we have $\sum_{i=1}^{n}\tau_i=E(|s^*|)$.\ Let $S^*$ denote the set of all pairs of nodes in $s^*$, and let $|S^*|$ denote the size of $S^*$. Let $Z_r, \text{ } i=1,...,n-1, \text{ } j=i+1,...,n$ denote the number of times dyad $r=(i,j)$ appears in sample $S^*$. In our case $Z_r$ is an indicator variable such that $Z_r=1$ if $r\in S^*$ and zero otherwise. Let $\pi_r=E(Z_r)$ denote the inclusion probability of dyad $r$ in the subsample $S^*$, which is indeed the probability that dyad $r$ ever appears in sample $S$. Since $\sum_{r=1}^{N}Z_r=|S^*|$, we have $\sum_{r=1}^{N}\pi_r=E(|S^*|)$.\ Due to the lack of knowledge about the full network $G=(V,E)$ as well as computational considerations, we will use an approximation for estimating $\pi_r$, $r \in S^*$. If a single random walk $\{X_t\}$ initiates from its stationary distribution $\boldsymbol{p}$ on a connected graph $G$ with at least one triangle, in the long run, $$\pi_r \approx \tau_i\tau_j, \text{ for } r=1, 2, ..., N,$$ where $$\begin{aligned} \tau_i = \frac{|s^*|}{\sum_{i=1}^{n}\theta_i}\theta_i \text{ for } i=1, 2, ...,n, \label{tau}\end{aligned}$$ and $$\theta_i=1-(1-\frac{k_i}{\sum_{w}k_w})^t \text{ for } i=1, 2, ...,n.$$ - **Heuristic proof:** To derive the expected number of appearances of dyads in $S$, we used (\[Anderson2\]) but did not need to use the form of the matrix $C$. A simple sampling model that satisfies (\[Anderson1\]) and (\[Anderson2\]) is multinomial sampling with $t$ draws and probability $p_i=\frac{k_i}{\sum_{w}k_w}$ for node $i$ to be sampled at each draw. For multinomial sampling, $$\begin{aligned} E(q_i) = tp_i, \end{aligned}$$ and $$\begin{aligned} Cov(q_i, q_j)=\left\{ \begin{array}{ll} -tp_ip_j \text{, } i \neq j,\\ tp_i(1-p_i) \text{, } i=j, \end{array} \right.\end{aligned}$$ and hence (\[Anderson1\]) and (\[Anderson2\]) are satisfied. Under multinomial sampling, the probability that node $i$ is ever included in the sample by step $t$ is $$\theta_i = 1-(1-p_i)^t.$$ The joint probability that $i$ and $j$ are both included in the sample is $$\theta_r = \theta_{ij} = \sum_{x=1}^{t-1}P(z_i=1|q_j=x)P(q_j=x).$$ Note that $$P(q_j=x) = \binom{t}{x}p_j^x(1-p_j)^{t-x},$$ and $$P(z_i=1|q_j=x) = 1-(1-\frac{p_i}{1-p_j})^{t-x},$$ so $$\begin{aligned} \theta_r& = \sum_{x=1}^{t-1}\binom{t}{x}p_j^x(1-p_j)^{t-x} [1-(1-\frac{p_i}{1-p_j})^{t-x}]\\ &= \sum_{x=1}^{t-1}\binom{t}{x}p_j^x(1-p_j)^{t-x} -\sum_{x=1}^{t-1}\binom{t}{x}p_j^x(1-p_i-p_j)^{t-x}\\ &= 1-(1-p_j)^t-p_j^t-[(1-p_i)^t-(1-p_i-p_j)^t-p_j^t]\\ &= 1-(1-p_i)^t-(1-p_j)^t+(1-p_i-p_j)^t.\end{aligned}$$ Since $$\begin{aligned} \theta_i\theta_j&=[1-(1-p_i)^t][1-(1-p_j)^t]\\ & = 1-(1-p_i)^t-(1-p_j)^t+(1-p_i-p_j+p_ip_j)^t\\ & \approx 1-(1-p_i)^t-(1-p_j)^t+(1-p_i-p_j)^t \text{ if } p_ip_j \text{ is negligible},\end{aligned}$$ and as $p_ip_j$ is verified to be negligible by simulations in this case, we can estimate $\theta_r$ by $$\theta_r \approx \theta_i\theta_j.$$ The only problem with the multinomial approximation is that it assumes the draws are independent, which is not the case in random walk sampling, since a node cannot be sampled at two consecutive steps. Therefore, $\theta_i$ under the multinomial sampling model overestimates $\tau_i$, the inclusion probability of node $i$ in random walk sampling.
To adjust for the overestimation, we can use one of the following two approaches to estimate $\tau_i$, and then estimate $\pi_r$ by $$\pi_r \approx \tau_i \tau_j.$$ **Approach 1:** Using the fact $\sum_{i=1}^{n}\tau_i=E(|s^*|)$ as a constraint for $\tau_i$, we can estimate $\tau_i$ by $$\tau_i = \frac{|s^*|}{\sum_{i=1}^{n}\theta_i}\theta_i, \label{HT1}$$ **Approach 2:** Using the fact $\sum_{i \in s^*}\tau_i^{-1}=n$, we can choose the exponent $t^*<t$ for the random walk sampling such that $$(\sum_{i \in s^*}\frac{1}{1-(1-\phi_i)^{t^*}}-n)^2$$ is minimized, and estimate $\tau_i$ by $$\tau_i = 1-(1-\phi_i)^{t^*}. \label{HT2}$$ Simulation results have shown that both (\[HT1\]) and (\[HT2\]) can provide a good estimation for $\tau_i$. The Horvitz-Thompson estimator for $N_l$ is $$\hat{N}_l^{HT} = \sum_{r=1}^{N}\frac{Z_ry_r^l}{\pi_r}, \text{ } l=1,...,L,$$ and the Horvitz-Thompson estimator for $N$ is $$\hat{N}^{HT} = \sum_{r=1}^{N}\frac{Z_r}{\pi_r}.$$ The Horvitz-Thompson estimator for the fraction of dyads with SPL $l$ is $$\hat{f}_l^{HT} = \frac{\hat{N}_l^{HT}}{N}=\frac{\sum_{r=1}^{N}\frac{Z_ry_r^l}{\pi_r}}{N}, \text{ } l=1,...,L,$$ and the Horvitz-Thompson ratio estimator for the fraction of dyads with SPL $l$ is $$\hat{f}_l^{HT.r} = \frac{\hat{N}_l^{HT}}{\hat{N}^{HT}}=\frac{\sum_{r=1}^{N}\frac{Z_ry_r^l}{\pi_r}}{\sum_{r=1}^{N}\frac{Z_r}{\pi_r}}, \text{ } l=1,...,L.$$ ### Approximating actual SPLs between sampled nodes As discussed in section 3.4, in a network with $n$ nodes and $m$ edges, the time complexity to measure the actual distances between all pairs of nodes is $O(mn+n^2)$. This is computationally expensive for large networks. With our proposed estimators discussed above, we only need to measure the distances between sampled nodes to estimate the SPLD of the population graph. Let $\beta^*$ denote the fraction of nodes in the induced subgraph, where $0<\beta^* \leq \beta$ and $\beta$ is the sampling budget. The computation time for the actual distances between all sampled nodes is $O(\beta^*mn+\beta^*n^2)$. For $\beta^*=20\%$, measuring only the actual distances between sampled nodes already brings an $80\%$ reduction in computation time.\ However, using the approximation methods for SPLs discussed in sections 3.3 and 3.4, we can approximate the actual SPLs between sampled nodes instead of measuring them, and by doing so achieve a further reduction in computation time. In the following section we revisit the approximation methods from [@ribeiro2012multiple] and [@potamias2009fast] and apply them to our random walk samples.\ 1) For networks with large $c.v.$, approximate actual SPLs by observed SPLs in the induced subgraph.\ Based on theoretical and simulation results from [@ribeiro2012multiple], in scale-free networks, random walks have a strong ability to uncover the true shortest paths, so the actual SPLs between sampled nodes can be approximated by their observed SPLs in the subgraph induced by the random walk sample. More specifically, for a pair of sampled nodes $(i, j)$, the actual SPL $l_{ij}$ between them in the population graph $G$ can be approximated by the observed SPL in the induced subgraph $G^*$. More generally, it is the existence of hubs in scale-free networks that makes random walks able to find the shortest paths, as discussed in section 4.1. Therefore in this paper, we generalize the condition for random walks to uncover shortest paths to networks with relatively large variance in degree distribution, compared to the mean degree $<k>$.
Let $c.v.=\frac{\sqrt{Var(k)}}{<k>}=\frac{\sqrt{<k^2>-<k>^2}}{<k>}$ denote the *coefficient of variation* of the degree distribution as a measure of the relative variance. A large $c.v.$ is needed in order for the random walks to uncover the shortest paths, and we will discuss in section 5.1 how large the $c.v.$ needs to be.\ In an induced subgraph with $\beta^*n$ nodes, the computing time for single-source shortest paths is reduced to $O(\beta^*m+\beta^*n)$ by running BFS within the induced subgraph. Applying BFS to the $\beta^*n$ sampled nodes in the induced subgraph, the time complexity for computing SPLs between all sampled nodes is $O(\beta^{*2}mn+\beta^{*2}n^2)$. Compared to measuring the actual distances between sampled nodes, i.e., applying BFS to the sampled nodes in the population graph, doing BFS only in the induced subgraph saves $(1-\beta^*)\times 100 \%$ of the computation time.\ 2) For networks with small $c.v.$, approximate actual SPLs using landmarks.\ For networks with small $c.v.$ in degree distribution, since random walks cannot find the shortest paths in the induced subgraph, we need to implement breadth-first search (BFS) on sampled nodes in the population graph to find the shortest paths. However, based on findings by [@potamias2009fast], the BFS does not have to be applied to all sampled nodes. Instead, one can apply BFS to only a fraction of the sampled nodes to find their shortest distances to all other nodes, and use that information to estimate the shortest distances between other sampled nodes. More specifically, one can first select a set of nodes as landmarks, denoted as $D$, pre-compute the SPLs from landmarks to all other nodes by BFS in the population graph, and estimate the SPL between any arbitrary pair of nodes $s$ and $t$ by $\min_{j \in D}\{l_{sj}+l_{jt}\}$. The estimation will be very precise if many shortest paths contain the selected landmarks. From their experiments, using $100$ nodes with the highest degrees from the population seems a fairly good strategy for choosing landmarks.\ In this paper, we propose selecting landmarks from the sample. This is because we are only interested in the SPLs between nodes in the sample, and landmarks from the sample will be more likely to be on the shortest paths between nodes in the sample. Also it is costly to select landmarks from the population since we need to observe the degrees of all nodes. Let $\gamma$ denote the ratio of the number of landmarks to the number of nodes in the induced subgraph $G^*$. From the sample we will choose the top $\gamma \beta^* n$ nodes by actual degree as landmarks. We will discuss the size of the landmark set, i.e., the value of $\gamma$, in section 5.2.\ In an induced subgraph with $\beta^*n$ nodes and $\gamma \beta^*n$ landmarks, the computing time for SPLs between a single landmark and all other nodes in the sample is still $O(m+n)$, since the BFS needs to be implemented in the population graph to compute the actual distances. Invoking the BFS $\gamma \beta^*n$ times, the computing time for SPLs between all landmarks and all other nodes in the sample is $O(\gamma \beta^* mn+\gamma \beta^*n^2)$. Compared to measuring the actual distances between sampled nodes, i.e., applying BFS to all sampled nodes in the population graph, doing BFS only from the landmarks saves $(1-\gamma)\times 100 \%$ of the computation time.
This is for the pre-computing stage.\ For the estimation stage, for any arbitrary pair of nodes, it takes only $O(\gamma \beta ^*n)$ time to go through the distances from these two nodes to each landmark and choose the minimum sum as the estimated SPL. Note that with BFS applied to the landmarks, the distances between the $\gamma \beta ^*n$ landmarks and all other nodes in the sample have already been identified, therefore we just need to estimate the distances between the $(1-\gamma)\beta ^*n$ nodes that are not used as landmarks. Applying this search over the $\gamma \beta ^*n$ landmarks to $\binom{(1-\gamma)\beta^*n}{2} \approx \frac{1}{2}(1-\gamma)^2\beta^{*2}n^2$ pairs of nodes in the sample, the computing time for estimating distances between sampled nodes that are not landmarks is about $O(\frac{1}{2}\gamma (1-\gamma)^2\beta^{*3}n^3)$ once we have the pre-computation data. Application of Estimating Methods --------------------------------- In practice, sometimes we are only able to crawl part of the network, so we are restricted to observing the degrees of the sampled nodes. To apply the estimators in section 4.4 to estimating the SPLD of a network, we need to estimate the $\psi_r$’s and $\pi_r$’s of the sampled nodes and the $c.v.$ of the degree distribution from the degrees of the nodes in the sample.\ Following the mathematical expressions of $c.v.$, $\psi_r$, and $\pi_r$, we can estimate them by the estimated first moment $<k>$ and second moment $<k^2>$ of the degree distribution. The estimation of $<k>$ and $<k^2>$ can be achieved by the Hansen-Hurwitz ratio estimator. Suppose a single random walk $\{X_t\}$ initiates from its stationary distribution $\boldsymbol{p} = (\frac{k_1}{K}, \frac{k_2}{K}, ..., \frac{k_n}{K})^T$ on a connected graph $G$ with at least one triangle such that $$\phi_i=\frac{k_i}{K}=\frac{k_i}{n<k>}.$$ Then we can estimate the first moment $<k>$ by $$\hat{k}_1 = \frac{\hat{K}}{\hat{n}} = \frac{\frac{1}{|s|}\sum_{i \in s}\frac{k_i}{\phi_i}}{\frac{1}{|s|}\sum_{i \in s}\frac{1}{\phi_i}} = \frac{\frac{1}{|s|}\sum_{i \in s}\frac{k_i}{\frac{k_i}{K}}}{\frac{1}{|s|}\sum_{i \in s}\frac{1}{\frac{k_i}{K}}}=\frac{|s|}{\sum_{i \in s}k_i^{-1}}.$$ Similarly, we can estimate the second moment $<k^2>$ by $$\hat{k}_2= \frac{\frac{1}{|s|}\sum_{i \in s}\frac{k_i^2}{\phi_i}}{\frac{1}{|s|}\sum_{i \in s}\frac{1}{\phi_i}} = \frac{\frac{1}{|s|}\sum_{i \in s}\frac{k_i^2}{\frac{k_i}{K}}}{\frac{1}{|s|}\sum_{i \in s}\frac{1}{\frac{k_i}{K}}}=\frac{\sum_{i \in s}k_i}{\sum_{i \in s}k_i^{-1}}.$$ ### Estimation of $c.v.$ We can estimate $c.v.$ by $$\hat{c.v.} = \frac{\sqrt{\hat{k}_2-(\hat{k}_1)^2}}{\hat{k}_1}.$$ ### Estimation of $\psi_r$ For the Hansen-Hurwitz estimator, we can estimate $\alpha$ in $\psi_r=\alpha k_ik_j$ by $$\hat{\alpha} = \frac{2}{(n\hat{k}_1)^2-n\hat{k}_2},$$ and can therefore estimate $\psi_r$ by $$\hat{\psi}_r = \frac{2}{(n\hat{k}_1)^2-n\hat{k}_2}k_ik_j.$$ Note that for the Hansen-Hurwitz ratio estimator (\[HH.ratio\]), we can simply plug in the observed degrees $k_i$ and $k_j$ of the sampled nodes and do not need to estimate any selection probabilities.\ ### Estimation of $\pi_r$ For the Horvitz-Thompson estimator, we can estimate $\tau_i$ identified in (\[tau\]) by $$\hat{\tau}_i=\frac{|s^*|}{n\hat{\bar{\theta}}}\hat{\theta}_i,$$ where $$\hat{\theta}_i=1-(1-\frac{k_i}{n\hat{k}_1})^t$$ and $$\hat{\bar{\theta}}=\frac{\frac{1}{|s|}\sum_{i \in s}\frac{\hat{\theta}_i}{\phi_i}}{\frac{1}{|s|}\sum_{i \in s}\frac{1}{\phi_i}}=\frac{\frac{1}{|s|}\sum_{i \in s}\frac{\hat{\theta}_i}{k_i/K}}{\frac{1}{|s|}\sum_{i \in s}\frac{1}{k_i/K}}=\frac{\sum_{i \in s}\frac{\hat{\theta}_i}{k_i}}{\sum_{i \in s}\frac{1}{k_i}}.$$ Consequently, we can estimate $\pi_r$ by $$\hat{\pi}_r=\hat{\tau}_i\hat{\tau}_j.$$ Evaluation Techniques --------------------- To evaluate the performance of an estimator, we take $K$ random walk samples from the population graph $G$, compute the estimate from each sample, and then apply the following four evaluation techniques to get an overall assessment of the estimator. ### Box plots We first plot the histogram of the population SPLD. For each value of the population SPL, we place a box plot of sample estimates on the corresponding position of the histogram. Figure \[boxplot\] is an example of box plots of Hansen-Hurwitz ratio estimates based on 100 samples taken from a scale-free network of size 1000. For each sample, a single random walk of 200 steps is used to produce the induced subgraph from which the sample SPLs are observed. ![Box plots of estimated SPLDs on the histogram of population SPLD.[]{data-label="boxplot"}](boxplot.png) ### Mean Absolute Difference (MAD) For each value of population SPL $l$, the Mean Absolute Difference (MAD) for the estimated fraction $\hat{P}(l)$ is $$mad(l) = E(|\hat{P}(l)-P(l)|).$$ The empirical MAD for SPL $l$ from $K$ samples is $$MAD(l) = \frac{1}{K}\sum_{k}|\hat{P}_k(l)-P(l)|,$$ with estimated variance $$\hat{Var}(MAD(l)) = \frac{1}{K}\frac{\sum_{k}(|\hat{P}_k(l)-P(l)|-MAD(l))^2}{K-1}.$$ Averaging over all possible values of population SPL, the MAD for the estimated SPLD $\hat{P}$ is $$MAD = \frac{1}{L}\sum_{l}MAD(l),$$ with estimated standard error $$\hat{se}(MAD) = \frac{1}{L}\sqrt{\sum_{l}\hat{Var}(MAD(l))}.$$ ### Root Mean Square Error (RMSE) For each value of population SPL $l$, the Root Mean Square Error (RMSE) for the estimated fraction $\hat{P}(l)$ is $$rmse(l) = \sqrt{E[(\hat{P}(l)-P(l))^2]}.$$ The empirical RMSE for SPL $l$ from $K$ samples is $$RMSE(l) = \sqrt{\frac{1}{K}\sum_{k}(\hat{P}_k(l)-P(l))^2},$$ with estimated variance $$\hat{Var}(RMSE(l)) = \frac{1}{K}\frac{\sum_{k}(\sqrt{(\hat{P}_k(l)-P(l))^2}-RMSE(l))^2}{K-1}.$$ Averaging over all possible values of population SPL, the RMSE for the estimated SPLD $\hat{P}$ is $$RMSE = \frac{1}{L}\sum_{l}RMSE(l),$$ with estimated standard error $$\hat{se}(RMSE) = \frac{1}{L}\sqrt{\sum_{l}\hat{Var}(RMSE(l))}.$$ ### Kullback-Leibler Divergence To measure the difference between two discrete distributions, the estimated SPLD $\hat{P}_k$ from the $k^{th}$ sample and the population SPLD $P$, we can use the symmetrised Kullback-Leibler divergence: $$KL(k)= \sum_{l}\hat{P}_k(l) \log \frac{\hat{P}_k(l)}{P(l)} + \sum_{l}P(l) \log \frac{P(l)}{\hat{P}_k(l)}.$$ The average Kullback-Leibler divergence over all $K$ samples is $$KL =\frac{1}{K}\sum_{k}KL(k),$$ with estimated standard error $$\hat{se}(KL) = \sqrt{\frac{1}{K}\frac{\sum_{k}(KL(k)-KL)^2}{K-1}}.$$ In practice, since the values of $KL$ are almost ten times as large as the values of $MAD$ and $RMSE$, we will use $KL/10$ to keep the three numerical measures on the same scale. Simulation Study ================ In this section, we present several simulation studies to assess the performance of the methods we proposed in Section 4.
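For reference in what follows, the sketch below computes the three numerical measures of section 4.6 (with $KL$ scaled by $1/10$) from $K$ estimated SPLDs and the population SPLD. It is an illustrative sketch only, assuming numpy; the small constant added before taking logarithms is our own guard against empty SPL classes and is not part of the definitions above.

```python
import numpy as np

def evaluate(est_list, pop, eps=1e-12):
    """MAD, RMSE and KL/10 of K estimated SPLDs (dicts {l: fraction}) against the population SPLD."""
    ls = sorted(pop)                                            # SPL values 1, ..., L
    P = np.array([pop[l] for l in ls])
    P_hat = np.array([[est.get(l, 0.0) for l in ls] for est in est_list])   # K x L estimates
    mad = np.mean(np.abs(P_hat - P), axis=0).mean()             # average over samples, then over l
    rmse = np.sqrt(np.mean((P_hat - P) ** 2, axis=0)).mean()    # per-l RMSE, then average over l
    Ph, Pp = P_hat + eps, P + eps                               # guard against log(0)
    kl = np.mean(np.sum(Ph * np.log(Ph / Pp) + Pp * np.log(Pp / Ph), axis=1))
    return mad, rmse, kl / 10                                   # KL/10 to match the scale of MAD and RMSE
```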
More specifically, by using the evaluation techniques discussed in section 4.6, we 1) test different values of the $c.v.$ of the degree distribution to explore the conditions for random walks to uncover shortest paths; 2) test various lengths and numbers of random walks and different estimators to find the best sampling design; 3) compare our estimates based on approximated SPLs to the unweighted sample SPLDs and to estimates based on actual SPLs to evaluate the estimation performance. Conditions for Random Walks to Uncover Shortest Paths ----------------------------------------------------- In Section 4.4.4, we generalized the condition for random walks to uncover shortest paths to having a large $c.v.$ of the degree distribution. In this section, we will first verify the strong ability of random walks in scale-free networks to uncover shortest paths. Based on that, we will explore the range of $c.v.$ which allows random walks to perform well in uncovering shortest paths in general networks. To assess the performance, we will look at the proportion of shortest paths uncovered by the random walk sample. We will use networks with gamma degree distributions as an example of general networks.\ In addition, as discussed by [@ribeiro2012multiple], in networks with large degree variability, the fraction of edges with at least one of their endpoints visited by the random walk is large. In this paper, we are more concerned with the fraction of edges in the induced subgraph, i.e., edges with both endpoints visited by the random walk, because they are what we use to measure sample SPLs. If more edges are included in the induced subgraph, it is more likely to observe the true shortest paths from the sample. Let $E.f$ denote the fraction of edges with both of their endpoints visited by the random walk, that is, the fraction of edges in the induced subgraph. One should expect large values of $E.f$ for networks with large values of $c.v.$\ For each network of size 1000, a single random walk of 200 steps is implemented to produce the induced subgraph. For each dyad in the subgraph, we take the difference between its sample SPL (the SPL observed in the induced subgraph) and its population SPL (the SPL observed in the population graph, i.e., the true SPL). Note that the sample SPL is always at least as large as the population SPL, as a node may take more steps in the subgraph to reach another node than it would in the population graph. Therefore the value of this difference has range $\{0 ,1, 2, ...\}$. For each value of population SPL, we plot the distribution of the difference between sample SPL and population SPL. The proportion of shortest paths uncovered by the random walk sample is equal to the proportion of zero difference between sample SPL and population SPL. Therefore, a large proportion of zero differences indicates that the random walk sample is performing well in uncovering the true SPLs.\ 1) Scale-free networks vs. Erdős-Rényi networks\ We first compare an Erdős-Rényi network and a scale-free network, both of which have average degree around 6. In Figure \[ER\_scale\_free\], we observe a large proportion of zero difference for each value of SPL in the scale-free network, which indicates that random walks have a strong ability to uncover the true shortest paths. However, in the Erdős-Rényi network, we do not see a large proportion of zero difference for any value of SPL greater than 1. Therefore the ability of random walks to uncover the true shortest paths in the Erdős-Rényi network is very weak.
This is to be expected, since the $c.v.$ of the degree distribution of the scale-free network is much larger than that of the Erdős-Rényi network. Moreover, we notice that $E.f$ in the scale-free network is larger than that in the Erdős-Rényi network, which also helps explain why random walks do a better job of uncovering shortest paths in the scale-free network.\ ![Erdős-Rényi network vs. scale-free network: distribution of difference between sample SPL and population SPL.[]{data-label="ER_scale_free"}](cond/ER.png "fig:") ![Erdős-Rényi network vs. scale-free network: distribution of difference between sample SPL and population SPL.[]{data-label="ER_scale_free"}](cond/scale_free.png "fig:") \(a) Erdős-Rényi, $n=1000$, $\beta=0.2$; \(b) scale-free, $n=1000$, $\beta=0.2$ 2\) General Networks\ A more general condition for random walks to uncover shortest paths is that the degree distribution has a large coefficient of variation ($c.v.$). To explore how large the $c.v.$ needs to be in order for the random walk to perform well in uncovering the shortest paths, we compare four networks with gamma degree distributions.\ As one would expect, as the $c.v.$ increases from $0.8$ in network $(c)$ to $2.4$ in network $(f)$, $E.f$ increases, which means more edges are observed in the induced subgraph, and therefore the proportion of zero difference between sample SPL and population SPL increases. When the $c.v.$ reaches $1.8$ in network $(e)$, the distribution of the difference between sample SPL and population SPL looks very close to that for the scale-free network in Figure \[ER\_scale\_free\]. When the $c.v.$ increases from $1.8$ in network $(e)$ to $2.4$ in network $(f)$, there is still an increase in the proportion of zero difference between sample SPL and population SPL, but the increase is not substantial. One should also notice that the $c.v.$ for the scale-free network in Figure \[ER\_scale\_free\] is $2.4$. Combining these observations with the empirical results from the real networks in section 6, we get some insight into the value of $c.v.$ needed for the random walk to perform well in uncovering shortest paths:\ 1) If the $c.v.$ is much smaller than 2, the random walk is not able to uncover the shortest paths;\ 2) If the $c.v.$ is around 2, the random walk has the ability to uncover the shortest paths, but the performance may vary from case to case;\ 3) If the $c.v.$ is much larger than 2, the random walk has a strong ability to uncover most of the shortest paths between the sampled nodes.\ As network $(f)$ has the same value of $c.v.$ as the scale-free network $(b)$, we will use a degree sequence generated from $Gamma(0.125,40)+1$ to generate networks as an example of networks with large $c.v.$ in the rest of this simulation section.
In order to evaluate the estimation performance for a given network, a specific sampling design, and a specific estimator, a total of $K=100$ random walk samples is drawn from the network. An estimate is computed from each sample, and the $100$ estimates are then used to construct the box plots and to calculate the three numerical measures discussed in section 4.6.\

------------------------------------------------------------ ------------------------------------------------------------
![Networks with Gamma degree distribution: distribution of difference between sample SPL and population SPL.[]{data-label="gamma"}](cond/c.png "fig:")
![Networks with Gamma degree distribution: distribution of difference between sample SPL and population SPL.[]{data-label="gamma"}](cond/d.png "fig:")
\(c) Gamma(1,5)+1, $n=1000$, $\beta=0.2$ \(d) Gamma(0.5,10)+1, $n=1000$, $\beta=0.2$ \[6pt\]
![Networks with Gamma degree distribution: distribution of difference between sample SPL and population SPL.[]{data-label="gamma"}](cond/e.png "fig:")
![Networks with Gamma degree distribution: distribution of difference between sample SPL and population SPL.[]{data-label="gamma"}](cond/f.png "fig:")
\(e) Gamma(0.25,20)+1, $n=1000$, $\beta=0.2$ \(f) Gamma(0.125,40)+1, $n=1000$, $\beta=0.2$
------------------------------------------------------------ ------------------------------------------------------------

Sampling designs for Random Walks
---------------------------------

In this section, we explore random walk sampling designs for estimating the population SPLD and compare the performance of different estimators. Specifically, we answer the following four questions:\
1) For networks with large $c.v.$, how many steps do we need in a single random walk in order to obtain a good estimate?\
2) For networks with small $c.v.$, how many nodes do we need to use as landmarks, and how many steps do we need in a single random walk, in order to obtain a good estimate?\
3) Will multiple random walks outperform a single random walk, given a fixed sampling budget?\
4) For a fixed sampling design, how does the performance differ across estimators?

### Length of Random Walks for Networks with Large $c.v.$

For networks with a large $c.v.$ of the degree distribution, we use the observed SPLs in the induced subgraph to approximate the actual SPLs between sampled nodes. To see the effect of the length of a single random walk on the estimation performance, we implement single random walks with sampling budget $\beta = 0.05(0.05)0.5$, where the notation $x = a(r)b$ means that $x$ increases from $a$ to $b$ in increments of $r$. This process is applied to networks with $c.v.=2.4$ and sizes $n=1,000$, $n=5,000$, and $n=10,000$. The estimator used here is the generalized Hansen-Hurwitz ratio estimator, denoted HH.ra.\
In Figure \[cv>2\_length\], the values of the three numerical accuracy measures keep decreasing as we increase the sampling budget from $0.05$ to $0.5$.
This means that the estimation performance improves as the single random walk gets longer, which is to be expected. The improvement is dramatic until the sampling budget reaches $0.2$, and becomes moderate beyond that. Therefore, it is appropriate to set the minimum sampling budget $\beta$ to around $0.2$ for the estimation to perform well. If we now assume $\beta^*=\beta=0.2$, the computing time for approximating the SPLs between all sampled nodes is $O(0.04mn+0.04n^2)$. Compared to the computing time for the actual distances between all sampled nodes, $O(0.2mn +0.2n^2)$, approximating the SPLs leads to about an $80\%$ reduction in computation.\
We can also notice from Figure \[cv>2\_length\] that the estimation performance is better in larger networks: as we increase the network size, the estimates stay unbiased and their variance gets smaller. One possible reason for this is the small-world effect. For a fixed sampling budget, the sample size increases linearly with the network size, while the shortest path lengths increase only on a $\log$ scale. Therefore, even with the same sampling budget, a random walk in a large network is relatively “longer" than one in a small network, and thus has a stronger ability to uncover the shortest paths. However, plots $(d)$, $(e)$, and $(f)$ in Figure \[cv>2\_length\_inverse\] show that the estimation performance for networks of size $n=5000$ and $n=10000$ is very similar. One can therefore expect the relationship between estimation performance and sampling budget to look like plot $(c)$ in Figure \[cv>2\_length\] for networks with large $c.v.$ of size $n=5000$ or larger.\
Moreover, as plots $(a)$, $(b)$, and $(c)$ in Figure \[cv>2\_length\_inverse\] show, the inverses of the three estimation measures appear to have an approximately linear relationship with the sampling budget. If the coefficients of this linear relationship can be found for large networks, the estimation accuracy can be predicted in advance from the sampling budget.
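For concreteness, the following is a rough sketch of how a generalized Hansen-Hurwitz-style ratio estimate of the SPLD can be computed from a random-walk sample. It assumes, as an approximation, that a dyad's sampling probability is proportional to the product of its endpoints' degrees (the joint stationary visiting probabilities of the walk), so dyads are reweighted by the inverse of that product; the formal definition of HH.ra, and the exact weights, are those given in section 4.4.

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

def hh_ratio_spld(G, visited):
    """Hansen-Hurwitz-style ratio estimate of the SPLD from a random-walk
    sample.  Dyads between visited nodes are weighted by 1/(deg(i)*deg(j)),
    a proxy for the inverse of their sampling probability, and the ratio
    form normalizes the weighted counts so the estimated fractions sum to 1.
    SPLs are approximated by distances in the induced subgraph."""
    H = G.subgraph(visited)
    weights = defaultdict(float)
    total = 0.0
    for u, v in combinations(H.nodes(), 2):
        if not nx.has_path(H, u, v):
            continue
        spl = nx.shortest_path_length(H, u, v)
        w = 1.0 / (G.degree(u) * G.degree(v))   # inverse-degree-product weight
        weights[spl] += w
        total += w
    return {spl: w / total for spl, w in weights.items()}
```

Looping over all pairs of visited nodes is written for clarity; in practice one would run one breadth-first search per sampled node inside the induced subgraph.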
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length"}](cv>2/n1000_length_numerical.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length"}](cv>2/n1000_length_boxplot.png "fig:") \(a) $n=1000$, $c.v.=2.4$ \(d) $n=1000$, $c.v.=2.4$, $\beta=0.2$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length"}](cv>2/n5000_length_numerical.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length"}](cv>2/n5000_length_boxplot.png "fig:") \(b) $n=5000$, $c.v.=2.4$ \(e) $n=5000$, $c.v.=2.4$, $\beta=0.2$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length"}](cv>2/n10000_length_numerical.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length"}](cv>2/n10000_length_boxplot.png "fig:") \(c) $n=10000$, $c.v.=2.4$ \(f) $n=10000$, $c.v.=2.4$, $\beta=0.2$ ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length_inverse"}](cv>2/n1000_length_inverse.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length_inverse"}](cv>2/mad.png "fig:") \(a) $n=1000$, $c.v.=2.4$ \(d) $MAD$, $c.v.=2.4$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length_inverse"}](cv>2/n5000_length_inverse.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length_inverse"}](cv>2/rmse.png "fig:") \(b) $n=5000$, $c.v.=2.4$ \(e) $RMSE$, $c.v.=2.4$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length_inverse"}](cv>2/n10000_length_inverse.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with large $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). 
Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv>2_length_inverse"}](cv>2/kl.png "fig:")
\(c) $n=10000$, $c.v.=2.4$ \(f) $KL$, $c.v.=2.4$
------------------------------------------------------------ ------------------------------------------------------------

### Size of Landmarks and Length of Random Walks for Networks with Small $c.v.$

For networks with a small $c.v.$ of the degree distribution, random walks lack a strong ability to uncover shortest paths because of the lack of powerful hubs. As discussed in section 4.4.4, an alternative is to use landmarks to estimate the SPLs between sampled nodes. We proposed using the nodes in the sample with the highest degrees as landmarks (a sketch of the resulting SPL approximation is given below), and the remaining question is the size of the landmark set.\
In order to see the effect of the landmark size and the length of a single random walk on the estimation performance, we will:\
1) Fix the sampling budget at $\beta=0.2$ and let $\gamma = 0.05(0.05)0.5$ to find the minimum fraction $\gamma_0$ for good estimation;\
2) Fix the fraction of landmarks at $\gamma=\gamma_0$, implement single random walks with sampling budget $\beta = 0.05(0.05)0.5$, and check whether a random walk with $\beta<0.2$ is also acceptable.\
The above process is applied to networks with $c.v.=0.8$ and sizes $n=1,000$, $n=5,000$, and $n=10,000$, as shown in Figures \[cv<2\_gamma\_length\] and \[cv<2\_beta\_length\]. The estimator used here is again the generalized Hansen-Hurwitz ratio estimator, HH.ra.\
In Figure \[cv<2\_gamma\_length\], the values of the three numerical measures decrease as $\gamma$ increases from $0.05$ to $0.2$, and stay almost stable beyond $0.3$. Thus we can use $\gamma_0=0.3$ as the minimum fraction of landmarks. In Figure \[cv<2\_beta\_length\], for large networks with $n=5,000$ or $n=10,000$, the estimation performance is very good if we use a sampling budget as large as $\beta=0.2$. We can also use a smaller sampling budget, such as $0.15$ or even $0.1$, for large networks, since the estimation error will not increase much. If we assume $\beta^*=\beta=0.2$ and use $\gamma=0.3$, the pre-computing time for approximating the SPLs between all sampled nodes is $O(0.06mn+0.06n^2)$. Compared to the computing time for the actual distances between all sampled nodes, $O(0.2mn +0.2n^2)$, approximating the SPLs leads to about a $70\%$ reduction in computation.\
As with networks with large $c.v.$, we notice that for networks with small $c.v.$ the estimation performance is better in larger networks. A possible reason is that, as we increase the network size while fixing the sampling budget and the landmark fraction, the number of landmarks grows, and with more landmarks the SPLs between sampled nodes are more likely to be estimated precisely.\
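The sketch below illustrates the landmark approximation referred to above, assuming the standard upper-bound construction $\hat{d}(u,v)=\min_{l}\, d(u,l)+d(l,v)$ over the landmark set and taking the top $\gamma$ fraction of sampled nodes by degree as landmarks; the helper is ours, and the exact scheme of section 4.4.4 may differ in detail.

```python
import networkx as nx

def landmark_spl(G, sampled, gamma):
    """Approximate SPLs between sampled nodes via high-degree landmarks.

    Landmarks are the top gamma-fraction of sampled nodes by degree; one BFS
    per landmark over the population graph gives distances to every node, and
    d(u, v) is approximated by min over landmarks l of d(u, l) + d(l, v)."""
    sampled = list(sampled)
    k = max(1, int(gamma * len(sampled)))
    landmarks = sorted(sampled, key=G.degree, reverse=True)[:k]
    dist = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}

    def approx(u, v):
        if u == v:
            return 0
        return min((dist[l][u] + dist[l][v] for l in landmarks
                    if u in dist[l] and v in dist[l]),
                   default=float("inf"))
    return approx
```

With $\beta\gamma n$ landmarks and one $O(m+n)$ breadth-first search per landmark, the pre-computation cost is $O(\beta\gamma(mn+n^2))$, matching the figure quoted above.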
On the other hand, Figure \[gamma\_beta\] shows the change in RMSE as we increase the landmark size $\gamma$ for different values of the random walk length $\beta$. As expected, the lines for larger $\beta$ lie below the lines for smaller $\beta$: if the random walk is longer, fewer landmarks are needed. To save computation time on the breadth-first searches, we want the value of $\beta \gamma$ to be as small as possible. The question remains whether to use a large $\beta$ and a small $\gamma$, or a small $\beta$ and a large $\gamma$. Ideally the latter is better, because it also saves sampling cost. Suppose we want the RMSE to be as small as $0.01$; Table \[comb\] lists four combinations of $\beta$ and $\gamma$ that achieve this accuracy. Among them, $\beta=0.1$ and $\gamma=0.5$ is the best because it achieves both the smallest sampling budget and the shortest BFS computation time.

------------------------------------------------------------ ------------------------------------------------------------
![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$, measured by $MAD$, $RMSE$ and $KL$ (low values are better).[]{data-label="cv<2_gamma_length"}](cv<2/n1000_gamma_length_numerical.png "fig:")
![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$, measured by $MAD$, $RMSE$ and $KL$ (low values are better).[]{data-label="cv<2_gamma_length"}](cv<2/n1000_gamma_length_boxplot.png "fig:")
\(a) $n=1000$, $c.v.=0.8$, $\beta=0.2$ \(d) $n=1000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$, measured by $MAD$, $RMSE$ and $KL$ (low values are better).[]{data-label="cv<2_gamma_length"}](cv<2/n5000_gamma_length_numerical.png "fig:")
![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$, measured by $MAD$, $RMSE$ and $KL$ (low values are better).[]{data-label="cv<2_gamma_length"}](cv<2/n5000_gamma_length_boxplot.png "fig:")
\(b) $n=5000$, $c.v.=0.8$, $\beta=0.2$ \(e) $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$, measured by $MAD$, $RMSE$ and $KL$ (low values are better).[]{data-label="cv<2_gamma_length"}](cv<2/n10000_gamma_length_numerical.png "fig:")
![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$, measured by $MAD$, $RMSE$ and $KL$ (low values are better).[]{data-label="cv<2_gamma_length"}](cv<2/n10000_gamma_length_boxplot.png "fig:")
\(c) $n=10000$, $c.v.=0.8$, $\beta=0.2$ \(f) $n=10000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_gamma_length_inverse"}](cv<2/n1000_gamma_length_inverse.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_gamma_length_inverse"}](cv<2/gamma_mad.png "fig:") \(a) $n=1000$, $c.v.=0.8$, $\beta=0.2$ \(d) $MAD$, $c.v.=0.8$, $\beta=0.2$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_gamma_length_inverse"}](cv<2/n5000_gamma_length_inverse.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). 
Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_gamma_length_inverse"}](cv<2/gamma_rmse.png "fig:") \(b) $n=5000$, $c.v.=0.8$, $\beta=0.2$ \(e) $RMSE$, $c.v.=0.8$, $\beta=0.2$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_gamma_length_inverse"}](cv<2/n10000_gamma_length_inverse.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks in networks with small $c.v.$ Left: comparison of performance measures under each network size, measured by the inverse of $MAD$, $RMSE$, and $KL$ (high values are better). Right: comparison of network sizes under each performance measure, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_gamma_length_inverse"}](cv<2/gamma_kl.png "fig:") \(c) $n=10000$, $c.v.=0.8$, $\beta=0.2$ \(f) $KL$, $c.v.=0.8$, $\beta=0.2$ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with small $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_beta_length"}](cv<2/n1000_beta_length_numerical.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with small $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_beta_length"}](cv<2/n1000_beta_length_boxplot.png "fig:") \(a) $n=1000$, $c.v.=0.8$, $\gamma=0.3$ \(d) $n=1000$, $c.v.=0.8$, $\gamma=0.3$, $\beta=0.2$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with 
small $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_beta_length"}](cv<2/n5000_beta_length_numerical.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with small $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_beta_length"}](cv<2/n5000_beta_length_boxplot.png "fig:") \(b) $n=5000$, $c.v.=0.8$, $\gamma=0.3$ \(e) $n=5000$, $c.v.=0.8$, $\gamma=0.3$, $\beta=0.2$ ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with small $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_beta_length"}](cv<2/n10000_beta_length_numerical.png "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus length ($\beta$) of random walks in networks with small $c.v.$, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="cv<2_beta_length"}](cv<2/n10000_beta_length_boxplot.png "fig:") \(c) $n=10000$, $c.v.=0.8$, $\gamma=0.3$ \(f) $n=10000$, $c.v.=0.8$, $\gamma=0.3$, $\beta=0.2$ --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks for different lengths ($\beta$) of random walks in a network with $n=5000$ and $c.v.=0.8$ (small).[]{data-label="gamma_beta"}](cv<2/beta_gamma "fig:") ![Performance of generalized Hansen-Hurwitz ratio estimator versus size ($\gamma$) of landmarks for different lengths ($\beta$) of random walks in a network with $n=5000$ and $c.v.=0.8$ (small).[]{data-label="gamma_beta"}](cv<2/beta_gamma_inverse "fig:") \(a) $RMSE$ \(b) $1/RMSE$ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- $\beta$ $\gamma$ $\beta\gamma$ --------- ---------- --------------- 0.4 0.25 0.1 0.3 0.3 0.09 0.2 0.375 0.075 0.1 0.5 0.05 : Comparison of combinations of random walk length ($\beta$) and landmark size ($\gamma$) to achieve $RMSE \approx 0.01$ in a network with $n=5000$ and $c.v.=0.8$ (small).[]{data-label="comb"} 
### Number of Random Walks

To compare the estimation performance of a single random walk with that of multiple random walks, we fix the total sampling budget and take $H$ independent random walk samples, with $H$ ranging from $1$ to $6$. For networks with large $c.v.$, we fix the total sampling budget at $\beta_0=0.2$. For networks with small $c.v.$, we fix the total sampling budget at $\beta_0=0.2$ and use $\gamma_0=0.3$ as the landmark fraction.\
As we can observe in Figure \[number\], for both networks the three numerical measures are stable as we increase the number of random walks from $1$ to $6$. Therefore, when the total sampling budget is kept fixed, using multiple random walks does not improve the estimation performance. In the case of networks with large $c.v.$, the reason for this is explained by [@ribeiro2012multiple]: as they show, if the network has a large variance in its degree distribution, two random walks intersect with high probability, and thus the subgraph induced by multiple random walks is very similar to that induced by a single random walk. In the case of networks with small $c.v.$, where we use landmarks to estimate the SPLs between sampled nodes, the landmarks found by a single random walk and those found by multiple random walks are not necessarily the same, but our simulations showed that they have similarly high betweenness centralities. We can therefore infer that they play similar roles in estimating the distances between other nodes.\

------------------------------------------------------------ ------------------------------------------------------------
![Performance of generalized Hansen-Hurwitz ratio estimator versus number ($H$) of random walks, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="number"}](cv>2/n5000_number_numerical.png "fig:")
![Performance of generalized Hansen-Hurwitz ratio estimator versus number ($H$) of random walks, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="number"}](cv<2/n5000_number_numerical.png "fig:")
\(a) $n=5000$, $c.v.=2.4$, $\beta=0.2$ \(b) $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
------------------------------------------------------------ ------------------------------------------------------------

### Comparison of Estimators

In this section, we compare the performance of the four estimators proposed in section 4.4. For the generalized Hansen-Hurwitz estimator, the Horvitz-Thompson estimator, and the Horvitz-Thompson ratio estimator, the $\psi_r$’s and $\pi_r$’s are estimated by the expressions discussed in section 4.5, so the corresponding estimates are denoted HH.or.s, HT.or.s, and HT.ra.s, respectively. For the generalized Hansen-Hurwitz ratio estimator, only the actual degrees of the sampled nodes are needed to compute the estimates, so these estimates are denoted HH.ra.
The comparison based on the numerical evaluation measures and the comparison based on box plots are shown in Figure \[HH\_HT\].\
From the numerical comparison, one can observe that the Horvitz-Thompson ratio estimator does a slightly better job than the other three estimators. From the comparison of box plots, one can observe that the Horvitz-Thompson ratio estimator exhibits a smaller variance than the Hansen-Hurwitz ratio estimator. There are two reasons for this. First, according to the Rao-Blackwell theorem ([@casella2002statistical], p.342), if $\hat{\theta}$ is an unbiased estimator of $\theta$ and $\theta^* = E(\hat{\theta}|T)$, where $T$ is a sufficient statistic for $\theta$, then $\theta^*$ is also an unbiased estimator of $\theta$ and $Var(\theta^*) \leq Var(\hat{\theta})$, with strict inequality unless $\hat{\theta}$ is a function of $T$. That is, for any unbiased estimator that is not a function of the sufficient statistic, one can always obtain a better unbiased estimator, in the sense of smaller variance, that depends on the data only through the sufficient statistic. For the finite population sampling situation, the *minimal sufficient statistic $T$* is the unordered set of distinct, labeled observations ([@basu1969role]). The Hansen-Hurwitz estimator $\hat{t}^{HH}$ is not a function of the minimal sufficient statistic, while the Horvitz-Thompson estimator $\hat{t}^{HT}$ is; note that both $\hat{t}^{HH}$ and $\hat{t}^{HT}$ are unbiased estimators of $t$. By the Rao-Blackwell theorem, we can always find another unbiased estimator $W=E(\hat{t}^{HH}|T)$ with a smaller variance than $\hat{t}^{HH}$, while no such improvement exists for $\hat{t}^{HT}$ since $\hat{t}^{HT} = E(\hat{t}^{HT}|T)$. Therefore $\hat{t}^{HT}$ is expected to have a smaller variance than $\hat{t}^{HH}$. Second, since the ratio form ensures that the estimated fractions over all values of SPL sum to 1, it stabilizes the estimator and therefore has a smaller variance than the original form. These two reasons make it unsurprising that the Horvitz-Thompson ratio estimator performs best among the four estimators.\
In Figure \[HH\_HT\_length\], we compare the performance of the Horvitz-Thompson ratio estimator and the generalized Hansen-Hurwitz ratio estimator by plotting their RMSE versus the sampling budget $\beta$. As one can observe, the Horvitz-Thompson ratio estimator achieves the same estimation precision as the generalized Hansen-Hurwitz ratio estimator with a smaller sampling budget. For example, in network $(a)$, the precision attained by the generalized Hansen-Hurwitz ratio estimator with a $20\%$ sampling budget can be achieved by the Horvitz-Thompson ratio estimator with only about a $12.5\%$ sampling budget. In practice, one would therefore prefer the Horvitz-Thompson ratio estimator when sampling is expensive and the sampling budget needs to be kept small.
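To make the variance comparison in the Rao-Blackwell argument explicit, the usual conditional-variance decomposition gives

$$\begin{aligned}
Var(\hat{t}^{HH}) &= Var\big(E(\hat{t}^{HH}|T)\big) + E\big(Var(\hat{t}^{HH}|T)\big) \\
&= Var(W) + E\big(Var(\hat{t}^{HH}|T)\big) \;\geq\; Var(W),\end{aligned}$$

with strict inequality whenever $\hat{t}^{HH}$ is not a function of $T$; since $\hat{t}^{HT}=E(\hat{t}^{HT}|T)$, the same construction cannot reduce the variance of the Horvitz-Thompson estimator.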
------------------------------------------------------------ ------------------------------------------------------------
![Estimation performance versus estimators, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="HH_HT"}](cv>2/n5000_HH_HT.png "fig:")
![Estimation performance versus estimators, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="HH_HT"}](cv>2/n5000_HH_HT_boxplot.png "fig:")
\(a) $n=5000$, $c.v.=2.4$, $\beta=0.2$ \(b) $n=5000$, $c.v.=2.4$, $\beta=0.2$
![Estimation performance versus estimators, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="HH_HT"}](cv<2/n5000_HH_HT.png "fig:")
![Estimation performance versus estimators, measured by $MAD$, $RMSE$, and $KL$ (low values are better).[]{data-label="HH_HT"}](cv<2/n5000_HH_HT_boxplot.png "fig:")
\(c) $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$ \(d) $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
------------------------------------------------------------ ------------------------------------------------------------

------------------------------------------------------------ ------------------------------------------------------------
![RMSE of generalized Hansen-Hurwitz ratio estimator and Horvitz-Thompson ratio estimator versus length ($\beta$) of random walks (low values are better).[]{data-label="HH_HT_length"}](cv>2/n5000_HH_HT_length.png "fig:")
![RMSE of generalized Hansen-Hurwitz ratio estimator and Horvitz-Thompson ratio estimator versus length ($\beta$) of random walks (low values are better).[]{data-label="HH_HT_length"}](cv<2/n5000_HH_HT_length.png "fig:")
\(a) $n=5000$, $c.v.=2.4$, $\beta=0.2$ \(b) $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma = 0.3$
------------------------------------------------------------ ------------------------------------------------------------

Evaluation of Estimation
------------------------

In order to evaluate how well our estimates from section 4.4 perform in estimating the population SPLD, we first compare the generalized Hansen-Hurwitz ratio
estimates, denoted by HH.ra, to the unweighted sample SPLDs observed from the induced subgraphs, denoted by UW. Note that by using HH.ra we are correcting bias present in UW, but the bias to be corrected differs between networks with large $c.v.$ and networks with small $c.v.$ For networks with large $c.v.$, we only correct the bias from unequal sampling probabilities, because we still use the observed SPLs between sampled nodes from the induced subgraph. For networks with small $c.v.$, we correct bias both from unequal sampling probabilities and from not observing the true SPLs between sampled nodes, as we use landmarks to estimate those SPLs.\
From the numerical comparison of UW and HH.ra in Table \[evaluation\_numerical\], one can observe that for both networks about $90\%$ of the estimation error in UW is removed by using HH.ra. In Figure \[evaluation\_boxplots\], one can observe that for networks with large $c.v.$, as shown in $(a)$, the box plots for UW are shifted to the left of the population SPLD. This is because dyads with shorter SPLs are more likely to be sampled than dyads with longer SPLs, so the fractions of dyads with shorter SPLs are over-estimated while the fractions of dyads with longer SPLs are under-estimated. Therefore, for networks with large $c.v.$, bias from unequal sampling probabilities dominates the estimation error of UW. For networks with small $c.v.$, as shown in $(b)$, the box plots for UW are shifted to the right of the population SPLD. This is because many of the observed SPLs are longer than the true SPLs. Therefore, in networks with small $c.v.$, bias from not observing the true SPLs between sampled nodes dominates the estimation error of UW, and correcting it is necessary. For both networks, after applying HH.ra, the box plots sit at the right positions on the histogram with short whiskers, which means the estimates are unbiased and have small variance.\
On the other hand, in order to see how much we could improve if we actually observed the true SPLs between sampled nodes, we compare our HH.ra based on approximated SPLs to the generalized Hansen-Hurwitz ratio estimates based on the true SPLs between sampled nodes, denoted by HH.ra.l. There is still some improvement from using the latter, but it is not large: in Table \[evaluation\_numerical\], the improvement from HH.ra to HH.ra.l is only about $10\%$, and in Figure \[evaluation\_boxplots\] the box plots for HH.ra and those for HH.ra.l are very close. In practice, we therefore prefer to base our estimates on the approximated SPLs, saving computation time without losing much estimation accuracy.
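For reference, the three accuracy measures reported below can be computed from an estimated and a true SPLD as in the sketch that follows, which assumes the standard definitions of mean absolute deviation, root mean squared error, and Kullback-Leibler divergence over the support of the two distributions (the exact definitions used are those of section 4.6).

```python
import math

def spld_accuracy(est, true):
    """MAD, RMSE, and KL divergence between an estimated and a true SPLD,
    both given as dicts mapping an SPL value to a fraction of dyads."""
    support = sorted(set(true) | set(est))
    errs = [est.get(l, 0.0) - true.get(l, 0.0) for l in support]
    mad = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    eps = 1e-12                                 # guard against log(0)
    kl = sum(p * math.log(p / max(est.get(l, 0.0), eps))
             for l, p in true.items() if p > 0)
    return mad, rmse, kl
```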
  --------------------------- ------- ------- -------- ------- ------- --------
                              $n=5000$, $c.v.=2.4$, $\beta=0.2$         $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
                              MAD     RMSE    KL       MAD     RMSE    KL
  Unweighted sample SPLD      .114    .115    0.161    .101    .103    0.213
  HH.ra by approximated SPL   .010    .012    .0025    .011    .014    .0036
  HH.ra by real SPL           .009    .011    .0023    .010    .013    .0034
  --------------------------- ------- ------- -------- ------- ------- --------

  : Numerical comparison of generalized Hansen-Hurwitz ratio estimates based on approximated SPL (HH.ra), unweighted sample SPLD observed from the induced subgraphs (UW), and generalized Hansen-Hurwitz ratio estimates based on actual SPL (HH.ra.l).[]{data-label="evaluation_numerical"}

------------------------------------------------------------ ------------------------------------------------------------
![Box plot comparison of generalized Hansen-Hurwitz ratio estimates based on approximated SPL (HH.ra), unweighted sample SPLD observed from the induced subgraphs (UW), and generalized Hansen-Hurwitz ratio estimates based on actual SPL (HH.ra.l).[]{data-label="evaluation_boxplots"}](cv>2/n5000_est.png "fig:")
![Box plot comparison of generalized Hansen-Hurwitz ratio estimates based on approximated SPL (HH.ra), unweighted sample SPLD observed from the induced subgraphs (UW), and generalized Hansen-Hurwitz ratio estimates based on actual SPL (HH.ra.l).[]{data-label="evaluation_boxplots"}](cv<2/n5000_est.png "fig:")
\(a) $n=5000$, $c.v.=2.4$, $\beta=0.2$ \(b) $n=5000$, $c.v.=0.8$, $\beta=0.2$, $\gamma=0.3$
------------------------------------------------------------ ------------------------------------------------------------

Applications
============

In this section, we test our SPLD estimation methods on data from eight real-world networks. These data are available on the SNAP (Stanford Network Analysis Project) website. To simplify the analysis, we only consider nodes in the largest connected component. Table \[summary\] summarizes the basic information for each network used in our test. These networks vary in size, number of edges, average degree, and, most importantly, coefficient of variation. We compare the HH.ra estimates based on observed SPLs from the induced subgraph (obs SPL) to those based on SPLs estimated via landmarks (est SPL). For estimates based on observed SPLs from the induced subgraph, we use a single random walk with a $20\%$ sampling budget.
For estimates based on SPLs estimated via landmarks, we use a single random walk with a $20\%$ sampling budget and $30\%$ of the sampled nodes as landmarks. The results are shown in Table \[real\_numerical\] and Figure \[real\_boxplots\].\
As shown in panels $(a)$, $(b)$, and $(c)$ of Figure \[real\_boxplots\], the estimates based on observed SPLs for the first three real networks, Oregon, AS-733, and Email-Enron, are very good. This is not surprising, as the $c.v.$’s for those networks are all much larger than 2, which indicates the existence of hubs. In addition, the performance on the Email-Enron network is the best among these three, as measured by the small values of MAD, RMSE, and KL in Table \[real\_numerical\]. This is also to be expected, since the Email-Enron network is the largest of the three; according to our discussion in section 5.3, our estimation method tends to perform better on larger networks.\
When the $c.v.$ gets closer to 2, the performance of estimation based on observed SPLs varies from case to case. For example, panels $(d)$ and $(e)$ of Figure \[real\_boxplots\] show that the SPLD of the CA-HepPh network is slightly over-estimated, while the SPLD of the Wiki-Vote network is estimated very well. The average distance in the CA-HepPh network is longer than that in the Wiki-Vote network, so random walks in the CA-HepPh network have a harder time finding the true shortest paths. The estimation performance deteriorates further as the $c.v.$ decreases below 1.5, and even below 1. For the CA-HepTh, CA-GrQc, and P2P networks, the SPLDs are highly over-estimated. The worst case is the P2P network, which has only $c.v.=0.9$. Since there is no powerful hub in networks $(f)$, $(g)$, and $(h)$, it is very hard for random walks to find the shortest paths.\
Alternatively, we can base the estimates on the SPLs estimated via landmarks. For networks whose estimates based on observed SPLs are already good, such as $(a)$, $(b)$, $(c)$, and $(e)$, there is little improvement from using the estimated SPLs. However, for networks with a small $c.v.$, whose estimates based on observed SPLs are far from the true SPLDs, such as $(f)$, $(g)$, and $(h)$, using estimated SPLs corrects the bias from not observing the true SPLs in the induced subgraph and therefore results in much better estimation performance.

  Network       nodes   edges    $<k>$   $cv$   $E.f$
  ------------- ------- -------- ------- ------ -------
  Oregon        10.7K   22K      4.1     7.6    0.162
  AS-733        6.4K    13.2K    4.3     5.8    0.140
  Email-Enron   33.7K   361.7K   21.5    3.5    0.298
  CA-HepPh      11.2K   235.2K   42      2.29   0.361
  Wiki-Vote     7.1K    103.7K   29.3    2.06   0.254
  CA-HepTh      8.6K    49.6K    11.5    1.12   0.107
  CA-GrQc       4.2K    26.8K    12.9    1.34   0.129
  P2P           10.9K   40K      7.4     0.9    0.093

  : Summary of Networks[]{data-label="summary"}

  Network       HH.ra by   MAD     RMSE   KL
  ------------- ---------- ------- ------ -------
  Oregon        obs SPL    .012    .014   .0032
                est SPL    .011    .014   .0029
  AS-733        obs SPL    .016    .021   .0055
                est SPL    .016    .020   .0051
  Email-Enron   obs SPL    .0069   .009   .0023
                est SPL    .0085   .010   .0032
  CA-HepPh      obs SPL    .026    .032   .026
                est SPL    .016    .022   .011
  Wiki-Vote     obs SPL    .014    .018   .0028
                est SPL    .015    .018   .0029
  CA-HepTh      obs SPL    .028    .034   .054
                est SPL    .010    .015   .012
  CA-GrQc       obs SPL    .031    .038   .062
                est SPL    .015    .024   .0225
  P2P           obs SPL    .086    .087   .13
                est SPL    .009    .010   .0012

  : Numerical evaluation measures of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s.
HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_numerical"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/Oregon.png "fig:") ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/AS-733.png "fig:") \(a) Oregon ($c.v. = 7.6$) \(b) AS-733 ($c.v. = 5.8$) ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/Email-Enron.png "fig:") ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/CA-HepPh.png "fig:") \(c) Email-Enron ($c.v. = 3.5$) \(d) CA-HepPh ($c.v. = 2.29$) ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/Wiki-Vote.png "fig:") ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/CA-HepTh.png "fig:") \(e) Wiki-Vote ($c.v. = 2.06$) \(f) CA-HepTh ($c.v. = 1.12$) ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/CA-GrQc.png "fig:") ![Box plots of estimated SPLDs of real networks: HH.ra by observed SPL ($\beta=0.2$) v.s. HH.ra by estimated SPL by landmarks ($\beta=0.2$, $\gamma=0.3$).[]{data-label="real_boxplots"}](real/p2p.png "fig:") \(g) CA-GrQc ($c.v. = 1.34$) \(h) P2P ($c.v. = 0.9$) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---
address:
- 'Department of Political Science, Michigan State University, East Lansing, MI 48824, USA'
- 'Department of Political Science, Duke University, Durham, NC 27701, USA'
- 'Department of Statistical Science, Duke University, Durham, NC 27701, USA'
author:
- Shahryar Minhas
- 'Peter D. Hoff'
- 'Michael D. Ward'
bibliography:
- '/Users/s7m/whistle/master.bib'
title: |
    Inferential Approaches for Network Analysis:\
    AMEN for Latent Factor Models\
    *Forthcoming Political Analysis*
---

Data structures that define relations between pairs of actors are ubiquitous in political science – examples include the study of events such as legislation cosponsorship, trade, interstate conflict, and the formation of international agreements. The dominant paradigm for dealing with such data, however, is not a network approach but rather a dyadic design, in which an interaction between a pair of actors is considered independent of interactions between any other pair in the system. To highlight the ubiquity of this approach, the following represent just a sampling of the articles published from the 1980s to the present in the American Journal of Political Science (AJPS) and American Political Science Review (APSR) that assume dyadic independence: @dixon:1983 [@mansfield:etal:2000; @lemke:reed:2001a; @mitchell:2002; @dafoe:2011a; @fuhrmann:sechser:2014; @carnegie:2014]. The implication of this assumption is that when, for example, Vietnam and the United States decide to form a trade agreement, they make this decision independently of what they have done with other countries and what other countries in the international system have done among themselves.[^1] An even stronger assumption is that Japan declaring war against the United States is independent of the decision of the United States to go to war against Japan.[^2] A common refrain from those who favor the dyadic approach is that many events are only bilateral (@diehl:wright:2016), thus alleviating the need for an approach that incorporates interdependencies between observations. However, even bilateral events and processes take place within a broader system, and occurrences in one part of the system may be dependent upon events in another. At a minimum, we do not know whether independence of events and processes characterizes what we observe. In this article, we introduce the additive and multiplicative effects (AME) model and compare it to two popular alternatives: the latent space model (LSM) and the exponential random graph model (ERGM). The AME approach to network modeling is flexible and can be used to estimate many different types of cross-sectional and longitudinal networks with binary, ordinal, or continuous edges within a generalized linear model framework. Our approach addresses ways in which observations can be interdependent while still allowing scholars to focus on examining theories that may only be relevant at the monadic or dyadic level. Further, at the network level it accounts for nodal and dyadic dependence patterns, and provides a descriptive visualization of higher-order dependencies such as homophily and stochastic equivalence. The article is organized as follows. We begin by briefly discussing the difficulties in studying dyadic data through approaches that assume observational independence. Then we introduce the AME framework in two steps. We first discuss nodal and dyadic dependencies that may lead to non-iid observations and show how the additive effects portion of AME can be used to model these dependencies.
Similarly, in the second step, we discuss how the multiplicative effects portion of the AME framework can be used to effectively model third-order effects while still enabling researchers to study exogenous covariates of interest. We then briefly contrast these latent variable models with the ERGM and conclude with an application to a cross-sectional network measuring collaborations during the policy design of the Swiss CO$_{2}$ act. We show that AME provides a superior goodness of fit to the data in terms of its ability to predict linkages and capture network dependencies.\

**Addressing Dependencies in Dyadic Data** {#addressing-dependencies-in-dyadic-data .unnumbered}
==========================================

Relational, or dyadic, data provide measurements of how pairs of actors relate to one another. The easiest way to organize such data is the directed dyadic design, in which the unit of analysis is some set of $n$ actors that have been paired together to form a dataset of $z$ directed dyads. A tabular design such as this for a set of $n$ actors, $\{i, j, k, l \}$, results in $n \times (n-1)$ observations, as shown in Table \[tab:canDesign\]. When modeling these types of data, scholars typically employ a generalized linear model (GLM) estimated via maximum likelihood. The stochastic component of this model reflects our assumptions about the probability distribution from which the data are generated: $y_{ij} \sim P(Y | \theta_{ij})$, with a probability density or mass function such as the normal, binomial, or Poisson, and we assume that each dyad in the sample is independently drawn from that particular distribution, given $\theta_{ij}$. The systematic component characterizes the model for the parameters of that distribution and describes how $\theta_{ij}$ varies as a function of a set of nodal and dyadic covariates, $\mathbf{X}_{ij}$: $\theta_{ij} = \bm\beta^{T} \mathbf{X}_{ij}$. The key assumption we make when applying this modeling technique is that, given $\mathbf{X}_{ij}$ and the parameters of the distribution, each of the dyadic observations is conditionally independent. Specifically, we construct the joint density function over all dyads, using the observations from Table 1 as an example.

$$\begin{aligned}
\begin{aligned}
P(y_{ij}, y_{ik}, \ldots, y_{lk} | \theta_{ij}, \theta_{ik}, \ldots, \theta_{lk}) &= P(y_{ij} | \theta_{ij}) \times P(y_{ik} | \theta_{ik}) \times \ldots \times P(y_{lk} | \theta_{lk}) \\
P(\mathbf{Y} \; | \; \bm{\theta}) &= \prod_{\alpha=1}^{n \times (n-1)} P(y_{\alpha} | \theta_{\alpha}) \\
\end{aligned}\end{aligned}$$

We next convert the joint probability into a likelihood: $\displaystyle \mathcal{L} (\bm{\theta} | \mathbf{Y}) = \prod_{\alpha=1}^{n \times (n-1)} P(y_{\alpha} | \theta_{\alpha})$. The likelihood as defined above is only valid if we are able to assume that, for example, $y_{ij}$ is independent of $y_{ji}$ and $y_{ik}$ given the set of covariates we specified.[^3] Assuming that the dyad $y_{ij}$ is conditionally independent of the dyad $y_{ji}$ asserts that there is no reciprocity in the dataset, an assumption that in many cases would seem quite untenable. A harder problem is the assumption that $y_{ij}$ is conditionally independent of $y_{ik}$; the difficulty here follows from the possibility that $i$’s relationship with $k$ depends on how $i$ relates to $j$ and how $j$ relates to $k$, or, more simply put, “the enemy of my enemy \[may be\] my friend”.
Accordingly, inferences drawn from misspecified models that ignore potential interdependencies between dyadic observations are likely to have a number of issues, including biased estimates of the effect of independent variables, uncalibrated confidence intervals, and poor predictive performance.

**Additive Part of AME** {#additive-part-of-ame .unnumbered}
========================

The dependencies that tend to develop in relational data can be more easily understood when we move away from stacking dyads on top of one another and turn instead to a matrix design as illustrated in Table \[tab:netDesign\]. Operationally, this type of data structure is represented as an $n \times n$ matrix, $\mathbf{Y}$, where the diagonals are typically undefined. The $ij^{th}$ entry defines the relationship sent from $i$ to $j$ and can be continuous or discrete. Relations between actors in a network setting at times do not involve senders and receivers. Networks such as these are referred to as undirected, and all the relations between actors are symmetric, meaning $y_{ij}=y_{ji}$.

The most common types of dependencies that arise in networks are first-order, or nodal, dependencies, and these point to the fact that we typically find significant heterogeneity in activity levels across nodes. The implication of this across-node heterogeneity is within-node homogeneity of ties, meaning that values across a row, say $\{y_{ij},y_{ik},y_{il}\}$, will be more similar to each other than other values in the adjacency matrix because each of these values has a common sender $i$. This type of dependency manifests in cases where sender $i$ tends to be more active or less active in the network than other senders. Similarly, while some actors may be more active in sending ties to others in the network, we might also observe that others are more popular targets; this would manifest in observations down a column, $\{y_{ji},y_{ki},y_{li}\}$, being more similar. Last, we might also find that actors who are more likely to send ties in a network are also more likely to receive them, meaning that the row and column means of an adjacency matrix may be correlated.

Another ubiquitous type of structural interdependency is reciprocity. This is a second-order, or dyadic, dependency relevant only to directed datasets, and asserts that values of $y_{ij}$ and $y_{ji}$ may be statistically dependent. The prevalence of these types of potential interactions within directed dyadic data also complicates the basic assumption of observational independence. We model first- and second-order dependencies in AME using a set of additive effects that are motivated by the social relations model (SRM) developed by [@warner:etal:1979; @li:loken:2002].
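Before writing the model down, it may help to see these first- and second-order dependencies as simple summary statistics of the adjacency matrix. The sketch below is our own illustration in base R (simulated continuous ties, made-up parameter values, not the authors' code or data); it builds an adjacency matrix with sender effects, correlated receiver effects, and within-dyad correlation, and then computes the four quantities that the SRM decomposition presented next is designed to capture.

```r
## Illustrative sketch: the four quantities the SRM targets, computed from
## a simulated directed adjacency matrix (all numbers are made up).
set.seed(5)
n <- 50
a <- rnorm(n, 0, 1.0)               # sender ("activity") effects
b <- 0.5 * a + rnorm(n, 0, 0.8)     # receiver effects, correlated with a
Y <- matrix(NA, n, n)
for (i in 1:(n - 1)) {
  for (j in (i + 1):n) {
    e1 <- rnorm(1)
    e2 <- 0.6 * e1 + sqrt(1 - 0.6^2) * rnorm(1)   # correlated dyadic errors
    Y[i, j] <- a[i] + b[j] + e1
    Y[j, i] <- a[j] + b[i] + e2
  }
}

sd(rowMeans(Y, na.rm = TRUE))    # heterogeneity in out-degree (row means)
sd(colMeans(Y, na.rm = TRUE))    # heterogeneity in in-degree (column means)
cor(rowMeans(Y, na.rm = TRUE),
    colMeans(Y, na.rm = TRUE))   # sender-receiver correlation
ut <- upper.tri(Y)
cor(Y[ut], t(Y)[ut])             # reciprocity: within-dyad correlation
```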
Specifically, we decompose the variance of observations in an adjacency matrix in terms of heterogeneity across row means (out-degree), heterogeneity along column means (in-degree), correlation between row and column means, and correlations within dyads:

$$\begin{aligned}
\begin{aligned}
 y_{ij} &= \mu + e_{ij} \\
 e_{ij} &= a_{i} + b_{j} + \epsilon_{ij} \\
 \{ (a_{1}, b_{1}), \ldots, (a_{n}, b_{n}) \} &{\stackrel{\mathclap{\normalfont\mbox{\tiny{iid}}}}{\sim}}N(0,\Sigma_{ab}) \\
 \{ (\epsilon_{ij}, \epsilon_{ji}) : \; i \neq j\} &{\stackrel{\mathclap{\normalfont\mbox{\tiny{iid}}}}{\sim}}N(0,\Sigma_{\epsilon}), \text{ where } \\
 \Sigma_{ab} = \begin{pmatrix} \sigma_{a}^{2} & \sigma_{ab} \\ \sigma_{ab} & \sigma_{b}^2 \end{pmatrix} \;\;\;\;\; &\Sigma_{\epsilon} = \sigma_{\epsilon}^{2} \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} . \label{eqn:srmCov}
 \end{aligned}\end{aligned}$$

$\mu$ here provides a baseline measure of the overall density of a network, and $e_{ij}$ represents residual variation. The residual variation decomposes into parts: a row/sender effect ($a_{i}$), a column/receiver effect ($b_{j}$), and a within-dyad effect ($\epsilon_{ij}$). The row and column effects are modeled jointly to account for correlation in how active an actor is in sending and receiving ties. Heterogeneity in the row and column means is captured by $\sigma_{a}^{2}$ and $\sigma_{b}^{2}$, respectively, and $\sigma_{ab}$ describes the linear relationship between these two effects (i.e., whether actors who send a lot of ties also receive a lot of ties). Beyond these first-order dependencies, second-order dependencies are described by $\sigma_{\epsilon}^{2}$ and a within-dyad correlation, or reciprocity, parameter $\rho$.

We incorporate the covariance structure described in Equation \[eqn:srmCov\] into the systematic component of a GLM framework: $\bm\beta^{\top} \mathbf{X}_{ij} + a_{i} + b_{j} + \epsilon_{ij}$, where $\bm\beta^{\top} \mathbf{X}_{ij}$ accommodates the inclusion of dyadic, sender, and receiver covariates. This approach incorporates row, column, and within-dyad dependence in a way that is widely used and understood by applied researchers: a regression framework and additive random effects to accommodate variances and covariances often seen in relational data. Furthermore, this handles a diversity of outcome distributions.

Multiplicative Part of AME {#multiplicative-part-of-ame .unnumbered}
==========================

Missing from the additive effects portion of the model is an accounting of third-order dependence patterns that can arise in relational data. A third-order dependency is a dependency that involves triads of actors rather than just dyads. Third-order effects are ubiquitous in relational datasets, and they can arise from the presence of some set of shared attributes between nodes that affects their probability of interacting with one another.[^4] For example, a common finding in the political economy literature is that democracies are more likely to form trade agreements with one another, and the shared attribute here is a country’s political system. A binary network where actors tend to form ties with others based on some set of shared characteristics often leads to a network graph with a high number of “transitive triads” in which sets of actors $\{i,j,k\}$ are each linked to one another. The left-most plot in Figure \[fig:homphStochEquivNet\] provides a representation of a network that exhibits this type of pattern.
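A quick way to see how a single shared attribute generates this kind of triadic structure is to simulate it directly. The sketch below (base R, with made-up group labels and tie probabilities rather than any of the networks discussed in this paper) generates an undirected binary network in which ties are more likely within a shared group, and then counts transitive triads against an otherwise identical network without homophily.

```r
## Illustrative sketch: homophily on one attribute inflates transitive triads.
## Group labels and tie probabilities are hypothetical.
set.seed(2)
n     <- 60
group <- sample(1:3, n, replace = TRUE)          # a shared nodal attribute

sim_net <- function(p_in, p_out) {
  P <- ifelse(outer(group, group, "=="), p_in, p_out)
  Y <- matrix(rbinom(n * n, 1, P), n, n)
  Y[lower.tri(Y)] <- t(Y)[lower.tri(Y)]          # symmetrize (undirected)
  diag(Y) <- 0
  Y
}

count_triangles <- function(Y) sum(diag(Y %*% Y %*% Y)) / 6

Y_homoph <- sim_net(p_in = 0.35, p_out = 0.05)   # ties cluster within groups
Y_null   <- sim_net(p_in = 0.15, p_out = 0.15)   # same expected density, no homophily

c(homophily = count_triangles(Y_homoph), null = count_triangles(Y_null))
```

Even though the two simulated networks have the same expected density, the homophilous one typically contains several times as many transitive triads.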
The relevant implication of this when it comes to conducting statistical inference is that–unless we are able to specify the list of exogenous variable that may explain this prevalence of triads–the probability of $j$ and $k$ forming a tie is not independent of the ties that already exist between those actors and $i$. --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Graph on the left is a representation of an undirected network that exhibits a high degree of homophily (linkages forming because of shared attributes), while on the right we show an undirected network that exhibits stochastic equivalence.](homophNet "fig:"){width=".33\textwidth"} ![Graph on the left is a representation of an undirected network that exhibits a high degree of homophily (linkages forming because of shared attributes), while on the right we show an undirected network that exhibits stochastic equivalence.](stochEquivNet "fig:"){width=".33\textwidth"} --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \[fig:homphStochEquivNet\] Another third-order dependence pattern that cannot be accounted for in the additive effects framework is stochastic equivalence. A pair of actors $ij$ are stochastically equivalent if the probability of $i$ relating to, and being related to, by every other actor is the same as the probability for $j$. This refers to the idea that there will be groups of nodes in a network with similar relational patterns. The occurrence of a dependence pattern such as this is not uncommon in the social science applications. Recent work estimates a stochastic equivalence structure to explain the formation of preferential trade agreements (PTAs) between countries [@manger:etal:2012]. Specifically, they suggest that PTA formation is related to differences in per capita income levels between countries. Countries falling into high, middle, and low income per capita levels will have patterns of PTA formation that are determined by the groups into which they fall. Such a structure is represented in the right-most panel of Figure \[fig:homphStochEquivNet\], here the lightly shaded group of nodes at the top can represent high-income countries, nodes on the bottom-left middle-income, and the darkest shade of nodes low-income countries. The behavior of actors in a network can at times be governed by group level dynamics, and failing to account for such dynamics leaves potentially important parts of the data generating process ignored. 
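A stochastic-equivalence pattern like the PTA example can be mimicked with a simple block structure. The sketch below (again base R with hypothetical block labels and probabilities, not the PTA data) simulates a directed network in which tie probabilities depend only on the income-group memberships of the sender and receiver, so that any two countries in the same group are statistically interchangeable.

```r
## Illustrative sketch: stochastic equivalence via a block structure.
## Block labels and probabilities are hypothetical.
set.seed(3)
n      <- 45
grps   <- c("high", "middle", "low")
income <- sample(grps, n, replace = TRUE)

## Tie probability depends only on the (sender group, receiver group) pair.
B <- matrix(c(0.40, 0.20, 0.05,
              0.20, 0.25, 0.10,
              0.05, 0.10, 0.15),
            3, 3, dimnames = list(grps, grps))

P <- B[income, income]                     # n x n matrix of tie probabilities
Y <- matrix(rbinom(n * n, 1, P), n, n)
diag(Y) <- NA

## Nodes in the same block have statistically identical sending and receiving
## patterns, even though individual nodes differ by chance:
round(tapply(rowMeans(Y, na.rm = TRUE), income, mean), 2)
round(tapply(colMeans(Y, na.rm = TRUE), income, mean), 2)
```

When group memberships like these are unobserved, they have to be recovered from the network itself, which is what the latent variable approaches described next are designed to do.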
We account for third-order dependence patterns using a latent variable framework, and our goal in doing so is twofold: 1) to adequately represent third-order dependence patterns, and 2) to improve our ability to conduct inference on exogenous covariates. Latent variable models assume that relationships between nodes are mediated by a small number ($K$) of node-specific unobserved latent variables. We contrast the approach that we utilize within AME, the latent factor model (LFM), to the latent space model, which is among the most widely used in the networks literature.[^5] For the sake of exposition, we consider the case where relations are symmetric to describe the differences between these approaches. These approaches can be incorporated into the framework that we have been constructing through the inclusion of an additional term, $\alpha(\textbf{u}_{i}, \textbf{u}_{j})$, that captures latent third-order characteristics of a network. General definitions of $\alpha(\textbf{u}_{i}, \textbf{u}_{j})$ for these latent variable models are shown in Equation \[eqn:latAlpha\]:

$$\begin{aligned}
\begin{aligned}
 \text{Latent space model} \\
 &\alpha(\textbf{u}_{i}, \textbf{u}_{j}) = -|\textbf{u}_{i} - \textbf{u}_{j}| \\
 &\textbf{u}_{i} \in \mathbb{R}^{K}, \; i \in \{1, \ldots, n \} \\
 \text{Latent factor model} \\
 &\alpha(\textbf{u}_{i}, \textbf{u}_{j}) = \textbf{u}_{i}^{\top} \Lambda \textbf{u}_{j} \\
 &\textbf{u}_{i} \in \mathbb{R}^{K}, \; i \in \{1, \ldots, n \} \\
 &\Lambda \text{ a } K \times K \text{ diagonal matrix}
 \label{eqn:latAlpha}
\end{aligned}\end{aligned}$$

In the LSM approach, each node $i$ has some unknown latent position in $K$-dimensional space, $\textbf{u}_{i} \in \mathbb{R}^{K}$, and the probability of a tie between a pair $ij$ is a function of the negative Euclidean distance between them: $-|\textbf{u}_{i} - \textbf{u}_{j}|$. Because latent distances for a triple of actors obey the triangle inequality, this formulation models the tendencies toward homophily commonly found in social networks. This approach is implemented in the [[latentnet]{}]{} package, which is part of the [[statnet]{}]{} suite of $\sf{R}$ packages [@krivitsky:handcock:2015]. However, this approach also comes with an important shortcoming: it confounds stochastic equivalence and homophily. Consider two nodes $i$ and $j$ that are proximate to one another in $K$-dimensional Euclidean space; this suggests not only that $|\textbf{u}_{i} - \textbf{u}_{j}|$ is small but also that $|\textbf{u}_{i} - \textbf{u}_{l}| \approx |\textbf{u}_{j} - \textbf{u}_{l}|$, the result being that nodes $i$ and $j$ will by construction be assumed to possess the same relational patterns with other actors such as $l$ (i.e., that they are stochastically equivalent). Thus LSMs confound strong ties with stochastic equivalence. This approach cannot adequately model data with many ties between nodes that have different network roles. This is problematic as real-world networks exhibit varying degrees of stochastic equivalence and homophily. In these situations, using the LSM would end up representing only a part of the network structure.

In the latent factor model, each actor has an unobserved vector of characteristics, $\textbf{u}_{i} = \{u_{i,1}, \ldots, u_{i,K} \}$, which describe their behavior as an actor in the network. The probability of a tie from $i$ to $j$ depends on the extent to which $\textbf{u}_{i}$ and $\textbf{u}_{j}$ are “similar” (i.e., point in the same direction) and on whether the entries of $\Lambda$ are greater or less than zero.
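The contrast between the two formulations can be seen with a few lines of arithmetic. In the sketch below (illustrative numbers only, with $K=1$), nodes $i$ and $j$ share the same latent value while $l$ sits on the opposite side of the origin; with a negative entry in $\Lambda$, the bilinear form expresses a disassortative pattern in which the two equivalent nodes avoid each other but both tie with $l$, something the distance-based formulation cannot represent.

```r
## Toy contrast between the two alpha() formulations (illustrative numbers only).
alpha_lsm <- function(ui, uj)    -sqrt(sum((ui - uj)^2))  # negative Euclidean distance
alpha_lfm <- function(ui, uj, L) c(t(ui) %*% L %*% uj)    # bilinear (factor) form

L  <- matrix(-1, 1, 1)   # a negative "eigenvalue" (K = 1)
ui <- 2; uj <- 2; ul <- -2

## LFM: i and j are stochastically equivalent (identical u) yet unlikely to tie,
## while both are likely to tie with l.
c(ij = alpha_lfm(ui, uj, L), il = alpha_lfm(ui, ul, L), jl = alpha_lfm(uj, ul, L))
#>  ij = -4,  il = 4,  jl = 4

## LSM: because distances obey the triangle inequality, i and j end up with the
## highest tie propensity of the three pairs, the opposite of the pattern above.
c(ij = alpha_lsm(ui, uj), il = alpha_lsm(ui, ul), jl = alpha_lsm(uj, ul))
#>  ij = 0,  il = -4,  jl = -4
```

This is the confounding of closeness (strong ties) with equivalence described above: under the LSM, nodes with similar relational patterns are forced to also have a high propensity to tie with one another.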
More specifically, the similarity in the latent factors, $\textbf{u}_{i} \approx \textbf{u}_{j}$, corresponds to how stochastically equivalent a pair of actors are and the eigenvalue determines whether the network exhibits positive or negative homophily. For example, say that we estimate a rank-one latent factor model (i.e., $K=1$), in this case $\textbf{u}_{i}$ is represented by a scalar $u_{i,1}$, similarly, $\textbf{u}_{j}=u_{j,1}$, and $\Lambda$ will have just one diagonal element $\lambda$. The average effect this will have on $y_{ij}$ is simply $\lambda \times u_{i} \times u_{j}$, where a positive value of $\lambda>0$ indicates homophily and $\lambda<0$ heterophily. This approach can represent both varying degrees of homophily and stochastic equivalence.[^6] In addition to summarizing dependence patterns in networks, scholars are often concerned with accounting for interdependencies so that they can better estimate the effects of exogenous covariates. Both the latent space and factor models attempt to do this as they are “conditional independence models” – in that they assume that ties are conditionally independent given all of the observed predictors and unknown node-specific parameters: $p( Y | X , U ) = \prod_{i<j} p( y_{i,j} | x_{i,j} , u_i , u_j)$. Typical parametric models of this form relate $y_{i,j}$ to $(x_{i,j},u_i,u_j)$ via a link function: $$\begin{aligned} p(y_{i,j} | x_{i,j}, u_i , u_j ) & = f( y_{i,j} : \eta_{i,j} ) \\ \eta_{i,j} &= \beta^\top x_{i,j} + \alpha(\textbf{u}_{i}, \textbf{u}_{j}).\end{aligned}$$ However, the structure of $\alpha(\textbf{u}_{i}, \textbf{u}_{j})$ can result in very different interpretations for any estimates of the regression coefficients $\beta$. For example, suppose the latent effects $\{ u_1,\ldots, u_n\}$ are near zero on average (if not, their mean can be absorbed into an intercept parameter and row and column additive effects). Under the LFM, the average value of $\alpha(\textbf{u}_{i}, \textbf{u}_{j}) = \textbf{u}_{i}^{\top} \Lambda \textbf{u}_{j}$ will be near zero and so we have $$\begin{aligned} \eta_{i,j} & = \beta^\top x_{i,j} + \textbf{u}_{i}^{\top} \Lambda \textbf{u}_{j} \\ \bar \eta & \approx \beta^\top \bar x.\end{aligned}$$ The implication of this is that the values of $\beta$ can be interpreted as yielding the “average” value of $\eta_{i,j}$. On the other hand, under the LSM $$\begin{aligned} \eta_{i,j} & = \beta^\top x_{i,j} - |\textbf{u}_{i} - \textbf{u}_{j}| \\ \bar \eta & \approx \beta^\top \bar x - \overline{ |\textbf{u}_{i} - \textbf{u}_{j}| } < \beta^\top \bar x .\end{aligned}$$ In this case, $\beta^\top \bar x$ does not represent an “average” value of the predictor $\eta_{i,j}$, it represents a maximal value as if all actors were zero distance from each other in the latent social space. For example, consider the simplest case of a normally distributed network outcome with an identity link: $$\begin{aligned} y_{i,j} & = \beta^\top x_{i,j} + \alpha(\textbf{u}_{i}, \textbf{u}_{j}) + \epsilon_{i,j} \\ \bar y & \approx \beta^\top \bar x + \overline{ \alpha(\textbf{u}_{i}, \textbf{u}_{j}) } .\end{aligned}$$ Under the LSM, $\bar y \approx \beta^\top \bar x - \overline{ |\textbf{u}_{i} - \textbf{u}_{j}| } < \beta^\top \bar x$, and so we no longer can interpret $\beta$ as representing the linear relationship between $y$ and $x$. Instead, it may be thought of as describing some sort of average hypothetical “maximal” relationship between $y_{i,j}$ and $x_{i,j}$. Thus the LFM provides two important benefits. 
First, we are able to capture a wider assortment of dependence patterns that arise in relational data, and, second, parameter interpretation is more straightforward. The AME approach considers the regression model shown in Equation \[eqn:ame\]:

$$\begin{aligned}
\begin{aligned}
y_{ij} &= g(\theta_{ij}) \\
&\theta_{ij} = \bm\beta^{\top} \mathbf{X}_{ij} + e_{ij} \\
&e_{ij} = a_{i} + b_{j} + \epsilon_{ij} + \alpha(\textbf{u}_{i}, \textbf{v}_{j}) \text{ , where } \\
&\qquad \alpha(\textbf{u}_{i}, \textbf{v}_{j}) = \textbf{u}_{i}^{\top} \textbf{D} \textbf{v}_{j} = \sum_{k \in K} d_{k} u_{ik} v_{jk}. \\
\label{eqn:ame}
\end{aligned}\end{aligned}$$

Using this framework, we are able to model the dyadic observations as conditionally independent given $\bm\theta$, where $\bm\theta$ depends on the unobserved random effects, $\mathbf{e}$. $\mathbf{e}$ is then modeled to account for the potential first-, second-, and third-order dependencies that we have discussed. As described in Equation \[eqn:srmCov\], $a_{i} + b_{j} + \epsilon_{ij}$ are the additive random effects in this framework and account for sender, receiver, and within-dyad dependence. The multiplicative effects, $\textbf{u}_{i}^{\top} \textbf{D} \textbf{v}_{j}$, are used to capture higher-order dependence patterns that are left over in $\bm\theta$ after accounting for any known covariate information.[^7]

### **ERGMs** {#ergms .unnumbered}

An alternative approach to accounting for third-order dependence patterns is provided by ERGMs. Whereas AME seeks to estimate interdependencies in a network through a set of latent variables, ERGM approaches are useful when researchers are interested in the role that specific network statistics have in giving rise to an observed network. These network statistics could include the number of transitive triads in a network, balanced triads, reciprocal pairs, and so on.[^8] In the ERGM framework, a set of statistics, $S(\mathbf{Y})$, defines a model. Given the chosen set of statistics, the probability of observing a particular network dataset $\mathbf{Y}$ can be expressed as:

$$\begin{aligned}
\Pr(Y = y) = \frac{ \exp( \bm\beta^{T} S(y) ) }{ \sum_{z \in \mathcal{Y}} \exp( \bm\beta^{T} S(z) ) } \text{ , } y \in \mathcal{Y}
\label{eqn:ergm}\end{aligned}$$

$\bm\beta$ represents a vector of model coefficients for the specified network statistics, $\mathcal{Y}$ denotes the set of all obtainable networks, and the denominator is used as a normalizing factor [@hunter:etal:2008]. This approach provides a way to state that the probability of observing a given network depends on the patterns that it exhibits, which are operationalized in the list of network statistics specified by the researcher. Within this approach one can test the role that a variety of network statistics play in giving rise to a particular network. Additionally, researchers can easily accommodate nodal and dyadic covariates. Further, because of the Hammersley-Clifford theorem, any probability distribution over networks can be represented by the form shown in Equation \[eqn:ergm\]. A notable issue when estimating ERGMs, however, is that the estimated model can become degenerate.
Degeneracy here means that the model places a large amount of probability on a small subset of networks that fall in the set of obtainable networks, $\mathcal{Y}$, but bear little resemblance to the observed network, $\mathbf{Y}$ [@schweinberger:2011].[^9] Some have argued that model degeneracy is simply a result of model misspecification [@handcock:2003a; @goodreau:etal:2008; @handcock:etal:2008]. However, this points to an important caveat in interpreting the implications of the Hammersley-Clifford theorem. Though this theorem ensures that any network can be represented through an ERGM, it says nothing about the complexity of the sufficient statistics ($S(y)$) required to do so. Failure to properly account for higher-order dependence structures through an appropriate specification can at best lead to model degeneracy, which provides an obvious indication that the specification needs to be altered, and at worst deliver a result that converges but does not appropriately capture the interdependencies in the network. The consequence of the latter case is a set of inferences that will continue to be biased as a result of unmeasured heterogeneity, thus defeating the major motivation for pursuing an inferential network model in the first place.

In the following section we undertake a comparison of the latent distance model, ERGM, and the AME model using an application presented in @cranmer:etal:2016.[^10] In doing so, we are able to compare and contrast these various approaches.

**Empirical Comparison** {#empirical-comparison .unnumbered}
========================

We utilize a cross-sectional network measuring whether an actor indicated that they collaborated with another actor during the policy design of the Swiss CO$_{2}$ act (@ingold:2008). This is a directed relational matrix, as an actor $i$ can indicate that they collaborated with $j$ but $j$ may not have stated that they collaborated with $i$. The Swiss government proposed this act in 1995 with the goal of undertaking a 10% reduction in CO$_{2}$ emissions by 2012. The act was accepted in the Swiss Parliament in 2000 and implemented in 2008. @ingold:2008, and subsequent work by @ingold:fischer:2014, sought to determine what drives collaboration among actors trying to affect climate change policy. The actors included in this network are those who were identified by experts as holding an important position in Swiss climate policy. In total, Ingold identifies 34 relevant actors: five state actors, eleven industry and business representatives, seven environmental NGOs and civil society organizations, five political parties, and six scientific institutions and consultants.

We follow Ingold & Fischer and @cranmer:etal:2016 in developing a model specification to understand and predict link formation in this network.[^11] The LSM we fit on this network includes a two-dimensional Euclidean distance metric. The ERGM specification for this network includes the same exogenous variables as the LSM, but also includes a number of endogenous characteristics of the network. The AME model we fit includes the same exogenous covariates and accounts for nodal and dyadic heterogeneity using the SRM.[^12] Third-order effects are represented by the latent factor model with $K=2$. Last, we also include a logistic model, as that is still the standard approach in most of the field. Parameter estimates for these approaches are shown in Table \[tab:regTable\].
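For readers who want to reproduce this kind of comparison on their own data, the sketch below indicates roughly how the four models can be fit in R. The data objects (`Y`, a 34 × 34 binary matrix, and the covariate arrays `dyadCovs`, `senderCovs`, `receiverCovs`, along with the covariate name `"allianceOpp"`) are placeholders, and the argument names reflect our reading of the `ergm`, `latentnet`, and `amen` documentation, so they should be checked against the package versions you have installed.

```r
## Schematic only: Y and the covariate objects are hypothetical placeholders.
library(network)
library(ergm)
library(latentnet)
library(amen)

net <- network::network(Y, directed = TRUE)

## Pooled logit on stacked dyads (ignores dependencies)
logitFit <- glm(c(Y) ~ c(dyadCovs[, , "allianceOpp"]), family = binomial)

## Latent space model with a two-dimensional Euclidean latent space
lsmFit <- ergmm(net ~ euclidean(d = 2) + edgecov(dyadCovs[, , "allianceOpp"]))

## ERGM with endogenous terms alongside the exogenous covariates
ergmFit <- ergm(net ~ edges + mutual + gwesp(decay = 1, fixed = TRUE) +
                  edgecov(dyadCovs[, , "allianceOpp"]))

## AME: additive (SRM) effects plus a rank-2 multiplicative term, probit link
ameFit <- ame(Y, Xdyad = dyadCovs, Xrow = senderCovs, Xcol = receiverCovs,
              R = 2, model = "bin")
```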
The first point to note is that, in general, the parameter estimates returned by the AME, while similar to those of the ERGM, are quite different from those of the LSM. For example, while the LSM returns a result for the `Opposition/alliance` variable that diverges from the ERGM, the AME returns a result that is similar to Ingold & Fischer. Similar discrepancies appear for other parameters, such as `Influence attribution` and `Alter's influence indegree`. Each of these discrepancies is eliminated when using AME. As described previously, this is because the LSM approach complicates the interpretation of the effects of exogenous variables due to the construction of the latent variable term.[^13]

|                                    | Logit        | LSM                     | ERGM          | AME                     |
|------------------------------------|--------------|-------------------------|---------------|-------------------------|
| Intercept/Edges                    | -4.44 (0.34) | 0.95 \[0.09; 1.85\]     | -12.17 (1.40) | -3.40 \[-4.40; -2.51\]  |
| **Conflicting policy preferences** |              |                         |               |                         |
| Business vs. NGO                   | -0.86 (0.46) | -1.37 \[-2.39; -0.40\]  | -1.11 (0.51)  | -1.38 \[-2.47; -0.49\]  |
| Opposition/alliance                | 1.21 (0.20)  | 0.00 \[-0.40; 0.40\]    | 1.22 (0.20)   | 1.08 \[0.72; 1.49\]     |
| Preference dissimilarity           | -0.07 (0.37) | -1.77 \[-2.64; -0.91\]  | -0.44 (0.39)  | -0.79 \[-1.55; -0.07\]  |
| **Transaction costs**              |              |                         |               |                         |
| Joint forum participation          | 0.88 (0.27)  | 1.51 \[0.85; 2.17\]     | 0.90 (0.28)   | 0.92 \[0.40; 1.46\]     |
| **Influence**                      |              |                         |               |                         |
| Influence attribution              | 1.20 (0.22)  | 0.08 \[-0.40; 0.54\]    | 1.00 (0.21)   | 1.10 \[0.70; 1.55\]     |
| Alter’s influence indegree         | 0.10 (0.02)  | 0.01 \[-0.03; 0.04\]    | 0.21 (0.04)   | 0.11 \[0.07; 0.15\]     |
| Influence absolute diff.           | -0.03 (0.02) | 0.04 \[-0.01; 0.09\]    | -0.05 (0.01)  | -0.07 \[-0.11; -0.03\]  |
| Alter = Government actor           | 0.63 (0.25)  | -0.46 \[-1.08; 0.14\]   | 1.04 (0.34)   | 0.56 \[-0.06; 1.16\]    |
| **Functional requirements**        |              |                         |               |                         |
| Ego = Environmental NGO            | 0.88 (0.26)  | -0.60 \[-1.30; 0.08\]   | 0.79 (0.17)   | 0.68 \[-0.36; 1.73\]    |
| Same actor type                    | 0.74 (0.22)  | 1.17 \[0.62; 1.72\]     | 0.99 (0.23)   | 1.03 \[0.62; 1.48\]     |
| **Endogenous dependencies**        |              |                         |               |                         |
| Mutuality                          | 1.22 (0.21)  |                         | 0.81 (0.25)   |                         |
| Outdegree popularity               |              |                         | 0.95 (0.09)   |                         |
| Twopaths                           |              |                         | -0.04 (0.02)  |                         |
| GWIdegree (2.0)                    |              |                         | 3.42 (1.47)   |                         |
| GWESP (1.0)                        |              |                         | 0.58 (0.16)   |                         |
| GWOdegree (0.5)                    |              |                         | 8.42 (2.11)   |                         |

: Logit and ERGM results are shown with standard errors in parentheses. LSM and AME are shown with 95% posterior credible intervals provided in brackets. \[tab:regTable\]

There are also a few differences between the parameter estimates that result from the ERGM and AME. Using the AME, we find evidence that `Preference dissimilarity` is associated with a reduced probability of collaboration between a pair of actors, which is in line with the theoretical expectations of Ingold & Fischer.[^14] Additionally, the AME results differ from the ERGM for the nodal effects related to whether the receiver of a collaboration is a government actor, `Alter=Government actor`, and whether the sender is an environmental NGO, `Ego=Environmental NGO`.

Tie Formation Prediction {#tie-formation-prediction .unnumbered}
------------------------

To test which model more accurately captures the data generating process for this network, we utilize a cross-validation procedure to assess the out-of-sample performance of each of the models presented in Table \[tab:regTable\]. Our cross-validation approach proceeds as follows:

- Randomly divide the $n \times (n-1)$ data points into $S$ sets of roughly equal size, letting $s_{ij}$ be the set to which pair $\{ij\}$ is assigned.
- For each $s \in \{1, \ldots, S\}$: - Obtain estimates of the model parameters conditional on $\{y_{ij} : s_{ij} \neq s\}$, the data on pairs not in set $s$. - For pairs $\{kl\}$ in set $s$, let $\hat y_{kl} = E[y_{kl} | \{y_{ij} : s_{ij} \neq s\}]$, the predicted value of $y_{kl}$ obtained using data not in set $s$. The procedure summarized in the steps above generates a sociomatrix of out-of-sample predictions of the observed data. Each entry $\hat y_{ij}$ is a predicted value obtained from using a subset of the data that does not include $y_{ij}$. In this application we set $S$ to 45 which corresponds to randomly excluding approximately 2% of the data from the estimation.[^15] Using the set of out-of-sample predictions we generate from the cross-validation procedure, we provide a series of tests to assess model fit. The left-most plot in Figure \[fig:roc\] compares the four approaches in terms of their ability to predict the out-of-sample occurrence of collaboration based on Receiver Operating Characteristic (ROC) curves. ROC curves provide a comparison of the trade-off between the True Positive Rate (TPR), sensitivity, False Positive Rate (FPR), 1-specificity, for each model. Models that have a better fit according to this test should have curves that follow the left-hand border and then the top border of the ROC space. On this diagnostic, the AME model performs best closely followed by ERGM. The Logit and LSM approach lag notably behind the other specifications. A more intuitive visualization of the differences between these modeling approaches can be gleaned through examining the separation plots included on the right-bottom edge of the ROC plot. This visualization tool plots each of the observations, in this case actor pairs, in the dataset according to their predicted value from left (low values) to right (high values). Models with a good fit should have all network links, here these are colored by the modeling approach, towards the right of the plot. Using this type of visualization emphasizes that the AME and ERGM models perform better than the alternatives. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![Assessments of out-of-sample predictive performance using ROC curves, separation plots, and PR curves. AUC statistics are also provided.](Figure2a_color "fig:"){width=".5\textwidth"} ![Assessments of out-of-sample predictive performance using ROC curves, separation plots, and PR curves. AUC statistics are also provided.](Figure2b_color "fig:"){width=".5\textwidth"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ \[fig:roc\] The last diagnostic we highlight to assess predictive performance are precision-recall (PR) curves. In both ROC and PR space we utilize the TPR, also referred to as recall–though in the former it is plotted on the y-axis and the latter the x-axis. 
The difference, however, is that in ROC space we utilize the FPR, while in PR space we use precision. FPR measures the fraction of negative examples that are misclassified as positive, while precision measures the fraction of examples classified as positive that are truly positive. PR curves are useful in situations where correctly predicting events is more interesting than simply predicting non-events (@davis:goadrich:2006). This is especially relevant in the context of studying many relational datasets in political science such as conflict, because events in such data are extremely sparse and it is relatively easy to correctly predict non-events. In the case of our application dataset, the vast majority of dyads, 80%, do not have a network linkage, which points to the relevance of assessing performance using the PR curves as we do in the right-most plot of Figure \[fig:roc\]. We can see that the relative-ordering of the models remains similar but the differences in how well they perform become much more stark. Here we find that the AME approach performs notably better in actually predicting network linkages than each of the alternatives. Area under the curve (AUC) statistics are provided in Figure \[fig:roc\] and these also highlight AME’s superior out-of-sample performance.[^16] Capturing Network Attributes {#capturing-network-attributes .unnumbered} ---------------------------- We also assess which of these models best captures the network features of the dependent variable.[^17] To do this, we compare the observed network with a set of networks simulated from the estimated models.[^18] We simulate 1,000 networks from the three models and compare how well they align with the observed network in terms of four network statistics: (1) the empirical standard deviation of the row means (i.e., heterogeneity of nodes in terms of the ties they send); (2) the empirical standard deviation of the column means (i.e., heterogeneity of nodes in terms of the ties they receive); (3) the empirical within-dyad correlation (i.e., measure of reciprocity in the network); and (4) a normalized measure of triadic dependence. A comparison of the LSM, ERGM, and AME models among these four statistics is shown in Figure \[fig:ergmAmePerf\]. ![Network goodness of fit summary using [[amen]{}]{}.](Figure3_color "fig:"){width="100.00000%"} \[fig:ergmAmePerf\] Here it becomes quickly apparent that the LSM model fails to capture how active and popular actors are in the Swiss climate change mitigation network.[^19] The AME and ERGM specifications again both tend to do equally well. If when running this diagnostic, we found that the AME model did not adequately represent the observed network this would indicate that we might want to increase $K$ to better account for network interdependencies. No changes to the model specification as described by the exogenous covariates a researcher has chosen would be necessary. If the ERGM results did not align with the diagnostic presented in Figure \[fig:ergmAmePerf\], then this would indicate that an incorrect set of endogenous dependencies have been specified. **Conclusion** {#conclusion .unnumbered} ============== The AME approach to estimation and inference in network data provides a number of benefits over extant alternatives in political science. 
Specifically, it provides a modeling framework for dyadic data that is based on familiar statistical tools such as linear regression, GLM, random effects, and factor models.[^20] Further, we have shown that alternatives such as the LSM complicate parameter interpretation due to the construction of the latent variable term. The benefit of AME is that its focus intersects with the interest of most IR scholars, which is primarily on the effects of exogenous covariates. For researchers in the social sciences, this is of primary interest, as many studies that employ relational data still have conceptualizations that are monadic or dyadic in nature. ERGMs are best suited for cases in which scholars are interested in studying the role that particular types of node- and dyad-based network configurations play in generating the network. Though valuable, this is often orthogonal to the interest of most researchers, who are focused on studying the effect of a particular exogenous variable, such as democracy, on a dyadic variable like conflict while simply accounting for network dependencies. Additionally, through the application dataset utilized herein we show that the AME approach outperforms both ERGM and LSM in out-of-sample prediction, and is also better able to capture network dependencies than the LSM.

More broadly, relational data structures are composed of actors that are part of a system.[^21] It is unlikely that this system can be viewed simply as a collection of isolated actors or pairs of actors. Whether dependencies between observations occur can at the very least be examined. Failure to take into account interdependencies leads to biased parameter estimates and poorly fitting models. By using standard diagnostics such as shown in Figure \[fig:ergmAmePerf\], one can easily assess whether an assumption of independence is reasonable. We stress this point because a common misunderstanding that seems to have emerged within the social science literature relying on dyadic data is that a network-based approach is only necessary if one has theoretical explanations that extend beyond the dyadic. This is not at all the case, and findings that continue to employ a dyadic design may misrepresent the effects of the very variables that they are interested in. The AME approach that we have detailed here provides a statistically familiar way for scholars to account for unobserved network structures in relational data.

Appendix {#appendix .unnumbered}
========

Additive and Multiplicative Effects Gibbs Sampler {#additive-and-multiplicative-effects-gibbs-sampler .unnumbered}
-------------------------------------------------

To estimate the effects of our exogenous variables and latent attributes, we utilize a Bayesian probit model in which we iteratively sample from the full conditional distributions of the parameters until convergence. Specifically, given observed data $\textbf{Y}$ and $\textbf{X}$ – where $\textbf{X}$ is a design array that includes our sender, receiver, and dyadic covariates – we estimate our network of binary ties using a probit framework where: $y_{ij,t} = 1(\theta_{ij,t}>0)$ and $\theta_{ij,t} = \bm\beta^{\top}\mathbf{X}_{ij,t} + a_{i} + b_{j} + \textbf{u}_{i}^{\top} \textbf{D} \textbf{v}_{j} + \epsilon_{ij}$. The derivation of the full conditionals is described in detail in @hoff:2005 and @hoff:2008, thus here we only outline the Markov chain Monte Carlo (MCMC) algorithm for the AME model that we utilize in this paper.
- Given initial values of $\{\bm\beta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}\}$, the algorithm proceeds as follows: - sample $\bm\theta \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Normal) - sample $\bm\beta \; | \; \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Normal) - sample $\textbf{a}, \textbf{b} \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{U}, \textbf{V}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Normal) - sample $\Sigma_{ab} \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Inverse-Wishart) - update $\rho$ using a Metropolis-Hastings step with proposal $p^{*} | p \sim$ truncated normal$_{[-1,1]}(\rho, \sigma_{\epsilon}^{2})$ - sample $\sigma_{\epsilon}^{2} \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}, \Sigma_{ab}, \text{ and } \rho$ (Inverse-Gamma) - For each $k \in K$: - Sample $\textbf{U}_{[,k]} \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}_{[,-k]}, \textbf{V}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Normal) - Sample $\textbf{V}_{[,k]} \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}_{[,-k]}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Normal) - Sample $\textbf{D}_{[k,k]} \; | \; \bm\beta, \textbf{X}, \bm\theta, \textbf{a}, \textbf{b}, \textbf{U}, \textbf{V}, \Sigma_{ab}, \rho, \text{ and } \sigma_{\epsilon}^{2}$ (Normal)[^22] Ingold & Fischer Model Specification and Expected Effects {#ingold-fischer-model-specification-and-expected-effects .unnumbered} --------------------------------------------------------- AME Model Convergence {#sec:ameConvAppendix .unnumbered} --------------------- Trace plot for AME model presented in paper. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![Trace plot for AME model presented in paper. In this model, we utilize the SRM to account for first and second-order dependence. To account for third order dependencies we use the latent factor approach with $K=2$.[]{data-label="fig:ameConv"}](FigureA1a "fig:"){width=".45\textwidth"} ![Trace plot for AME model presented in paper. In this model, we utilize the SRM to account for first and second-order dependence. 
To account for third order dependencies we use the latent factor approach with $K=2$.[]{data-label="fig:ameConv"}](FigureA1b "fig:"){width=".45\textwidth"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Multiplicative Effects Visualization {#multiplicative-effects-visualization .unnumbered} ------------------------------------ When it comes to estimating higher-order effects, ERGM is able to provide explicit estimates of a variety of higher-order parameters, however, this comes with the caveat that these are the “right” set of endogenous dependencies. The AME approach, as shown in Equation 4 of the manuscript, estimates network dependencies by examining patterns left over after taking into account the observed covariates. For the sake of space, we focus on examining the third-order dependencies left over after accounting for the observed covariates and network covariance structure modeled by the SRM. A visualization of remaining third-order dependencies is shown in Figure \[fig:uv\]. ![Circle plot of estimated latent factors.[]{data-label="fig:uv"}](FigureA2){width=".5\textwidth"} In Figure \[fig:uv\], the directions of $\hat{u}_{i}$’s and $\hat{v}_{i}$’s are noted in lighter and darker shades, respectively, of an actor’s type.[^23] The size of actors is a function of the magnitude of the vectors, and dashed lines between actors indicate greater than expected levels of collaboration based on the regression term and additive effects. In the case of the application dataset that we are using here organization names have been anonymized and no additional covariate information is available. However, if we were to observe nodes sharing certain attributes clustering together in this circle plot that would mean such an attribute could be an important factor in helping us to understand collaborations among actors in this network. Given how actors of different types are distributed in almost a random fashion in this plot, we can at least be sure that it is unlikely other third-order patterns can be picked up by that factor. Other Network Goodness of Fit Tests {#sec:otherNetGof .unnumbered} ----------------------------------- Below we show a standard set of statistics upon which comparisons are usually conducted:[^24] We simulate 1,000 networks from the LSM, ERGM, and AME model and compare how well they align with the observed network in terms of the statistics described in Table \[tab:netStat\]. The results are shown in Figure \[fig:gofAll\]. Values for the observed network are indicated by a gray bar and average values from the simulated networks for the AME, ERGM, and LSM are represented by a diamond, triangle, and square, respectively. The densely shaded interval around each point represents the 95% interval from the simulations and the taller, less dense the 90% interval.[^25] Looking across the panels in Figure \[fig:gofAll\] it is clear that there is little difference between the ERGM and AME models in terms of how well they capture network dependencies. 
The LSM model, however, does perform somewhat worse in comparison here as well. Particularly, when it comes to assessing the number of edge-wise shared partners and in terms of capturing the indegree and outdegree distributions of the collaboration network. ![Goodness of fit statistics to assess how well the LSM, ERGM, and AME approaches account for network dependencies. Grey bars indicate true values.[]{data-label="fig:gofAll"}](FigureA3){width="100.00000%"} Comparison with other AME Parameterizations {#sec:ameVsAmeAppendix .unnumbered} ------------------------------------------- Here we provide a comparison of the AME model we present in the paper that uses $K=2$ for multiplicative effects and show how results change when we use $K=\{1,3,4\}$. Trace plots for $K=\{1,3,4\}$ are available upon request. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![Assessments of out-of-sample predictive performance using ROC curves, separation plots, and PR curves. AUC statistics are provided as well for both curves.[]{data-label="fig:roc_ame"}](FigureA4a "fig:"){width=".5\textwidth"} ![Assessments of out-of-sample predictive performance using ROC curves, separation plots, and PR curves. AUC statistics are provided as well for both curves.[]{data-label="fig:roc_ame"}](FigureA4b "fig:"){width=".5\textwidth"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![Network goodness of fit summary using [[amen]{}]{}.[]{data-label="fig:netPerfCoef_ameSR"}](FigureA5){width="100.00000%"} Comparison of [[amen]{}]{} & [[latentnet]{}]{} $\sf{R}$ Packages {#sec:ameVsLatentnetAppendix .unnumbered} ---------------------------------------------------------------- Here we provide a comparison of the AME model we present in the paper with a variety of parameterizations from the [[latentnet]{}]{} package. The number of dimensions in the latent space in each of these cases is set to 2. LSM (SR) represents a model in which random sender and receiver effects are included. -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Assessments of out-of-sample predictive performance using ROC curves, separation plots, and PR curves. 
AUC statistics are provided as well for both curves.[]{data-label="fig:roc_latentSpace"}](FigureA6a "fig:"){width=".5\textwidth"} ![Assessments of out-of-sample predictive performance using ROC curves, separation plots, and PR curves. AUC statistics are provided as well for both curves.[]{data-label="fig:roc_latentSpace"}](FigureA6b "fig:"){width=".5\textwidth"} -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Network goodness of fit summary using [[amen]{}]{}.[]{data-label="fig:netPerfCoef_latSpace"}](FigureA7){width="100.00000%"}

Simulation Based Comparison of [[amen]{}]{} & [[latentnet]{}]{} {#simulation-based-comparison-of-amen-latentnet .unnumbered}
---------------------------------------------------------------

We construct a simulation study to examine differences in the ability of LSM and LFM to capture network dependencies under varying scenarios of “egalitarianism”. By egalitarianism here we refer to how equally balanced the nodes are in terms of their number of ties. We construct six simulation scenarios representing varying degrees of egalitarianism. Note that to provide as fair a test as possible to the LSM, we focus on comparing to just the LFM, the multiplicative effects portion of AME (see Equation 3 in the manuscript). This means that we exclude the additive effects described by the SRM portion of the model (see Equation 2 in the manuscript). For each scenario, we simulate fifty binary, directed networks with 100 nodes each and then evaluate the ability of the LFM and LSM to predict this network structure.

The results are shown in Figure \[fig:sim\_egal\] below. Each panel here represents one scenario in which we vary the degree of egalitarianism. The left-most panel represents the situation in which the structure of the network is most egalitarian. The numbers at the top of each panel indicate the standard deviation of the degree distribution averaged across fifty simulations. Across the diagonal of the visualization, we also provide an example of the type of network that was simulated. The size of nodes in each example network corresponds to the number of ties that node has. As we go from left to right, we can see much greater variance in the size of nodes within the network, which indicates that the level of egalitarianism is changing. We run an LFM and an LSM on each of the simulated networks from each scenario, and compare the predictive performance based on AUC (ROC) and AUC (PR) statistics. We set $K=2$ for both the LFM and LSM and estimate each model without any covariates.

The results of this analysis indicate that under these varying scenarios of egalitarianism the LFM consistently outperforms the LSM. However, the performance of both models tends to decline as the structure of the simulated networks becomes less egalitarian (i.e., the extent of tie formation among just a few nodes becomes much higher than the typical node in the network). If covariate information were provided to the model about which nodes were more likely to form ties, then the predictive performance of both models would obviously improve. Additionally, if we were to estimate the full AME model (SRM + LFM), then the additive effects would be able to capture the degree heterogeneity. In most applied scenarios, one would include both the additive and multiplicative effects portions when using AME.[^26]

![Predictive performance of LFM vs LSM for networks under five scenarios (the panels) that vary the extent to which the distribution of ties is egalitarian. We use a box plot to represent the performance of LFM and LSM across fifty simulations for each scenario. The set of network visualizations across the diagonal of the plot illustrates a representative network from one simulation under that scenario, and the size of nodes corresponds to their number of ties. The labels at the top of each panel indicate the standard deviation of the number of ties, which is averaged across the fifty simulations for that scenario.[]{data-label="fig:sim_egal"}](FigureA8){width="100.00000%"}

Next, we construct a second simulation study to compare the predictive performance of LSM and LFM under varying levels of reciprocity. Here again we simulate a set of scenarios, and for each scenario we simulate fifty binary, directed networks with 100 nodes. The results are shown in Figure \[fig:sim\_recip\]. Each panel here represents one scenario with a certain degree of reciprocity. The left-most panel highlights the case where there is little to no reciprocity in the network and the right-most where the level of reciprocity is quite high. The average level of reciprocity across the fifty simulated networks is given at the top of each panel.

![Predictive performance of LFM vs LSM for networks with varying levels of reciprocity. We use a box plot to represent the performance of LFM and LSM across fifty simulations for each scenario. The labels at the top of each panel indicate the average level of reciprocity across the fifty simulated networks in that scenario.[]{data-label="fig:sim_recip"}](FigureA9){width="100.00000%"}

To compare LFM and LSM, we again utilize AUC (ROC) and AUC (PR) statistics. $K$ is set to 2 for both models and no covariate information is provided. Here again we find that the LFM consistently outperforms the LSM, though at higher levels of reciprocity the performance difference between the two approaches does narrow. If we were to estimate the full AME model, then we would be better able to capture reciprocity in the network, as dyadic reciprocity is estimated within the additive effects portion of AME.
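For readers curious about what such a reciprocity manipulation looks like in code, the following base-R sketch is our own illustration (not the authors' simulation code): it generates a directed binary network whose dyads share a chosen latent correlation `rho` and reports the realized reciprocity; sweeping `rho` over a grid produces scenarios of the kind described above, and each simulated matrix can then be handed to the LFM and LSM estimation routines.

```r
## Minimal sketch of a reciprocity-varying network simulation (illustrative only).
sim_recip_net <- function(n = 100, rho = 0.5, density = 0.10) {
  Y <- matrix(0, n, n)
  thresh <- qnorm(1 - density)                     # tie if latent value exceeds this
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      z1 <- rnorm(1)
      z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(1)  # correlated latent pair
      Y[i, j] <- as.numeric(z1 > thresh)
      Y[j, i] <- as.numeric(z2 > thresh)
    }
  }
  diag(Y) <- NA
  Y
}

## Realized reciprocity: correlation between y_ij and y_ji across dyads
recip <- function(Y) {
  ut <- upper.tri(Y)
  cor(Y[ut], t(Y)[ut])
}

set.seed(4)
sapply(c(0, 0.25, 0.5, 0.75), function(r) recip(sim_recip_net(rho = r)))
```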
[^5]: An alternative approach with a similar latent variable formulation is known as the stochastic block model [@nowicki:snijders:2001], however, this approach is typically only used to model community structure in networks and not used to conduct inference on exogenous covariates. [^6]: In the directed version of this approach, we use the singular value decomposition, here actors in the network have a vector of latent characteristics to describe their behavior as a sender, denoted by $\textbf{u}$, and as a receiver, $\textbf{v}$: $\textbf{u}_{i}, \textbf{v}_{j} \in \mathbb{R}^{K}$. This can alter the probability of an interaction between $ij$ additively: $\textbf{u}_{i}^{\top} \textbf{D} \textbf{v}_{j}$, where $\textbf{D}$ is a $K \times K$ diagonal matrix. [^7]: The MCMC algorithm describing the estimation procedure is available in the Appendix. [^8]: @morris:etal:2008 and @snijders:etal:2006 provide a detailed list of network statistics that can be included in an ERGM model specification. [^9]: For example, most of the probability may be placed on empty graphs, no edges between nodes, or nearly complete graphs, almost every node is connected by an edge. [^10]: The reason we use the same dataset is because of the model specification issue that arises when using ERGMs. As @cranmer:etal:2016 [p. 8] note, when using ERGMs scholars must model third-order effects and “must also specify them in a complete and correct manner” or the model will be misspecified. Thus to avoid providing an incorrect specification when comparing ERGM with the AME we use the specification that they constructed. [^11]: We do not review the specification in detail here, instead we just provide a summary of the variables to be included and the theoretical expectations of their effects in the Appendix. [^12]: Convergence diagnostics for AME are provided in the Appendix. [^13]: In the Appendix, we show that these differences persist even when incorporating sender and receiver random effects into the LSM. [^14]: See the Appendix for details. [^15]: Such a low number of observations were excluded in every sample (denoted a fold) because excluding any more observations would cause the ERGM specification to result in a degenerate model that empirically can not be fit. This is an example of the computational difficulties associated with ERGMs. [^16]: We also test AME against the multiple regression quadratic assignment procedure (MRQAP). This approach also perform notably worse than AME in terms of predicting tie formation. Results are available upon request. [^17]: We restrict our focus to the three approaches–LSM, ERGM, and AME–that explicitly seek to model network interdependencies. [^18]: In the Appendix, we compare the ability of these models to capture network attribute across a wider array of statistics (e.g., dyad-wise shared partners, incoming k-star, etc.), and the results are consistent with what we present below. [^19]: Further even after incorporating random sender and receiver effects into the LSM framework this problem is not completely resolved, see the Appendix for details. [^20]: A number of related approaches have been developed that also stem from latent variable models: @sewell:chen:2015 [@gollini:murphy:2016; @durante:etal:2017; @kao:etal:2018]. Each of these approaches differ in how they construct the latent variable term to account for third-order dependencies, but they each are based off of a similar framework as the model we present here. 
We hope that this paper motivates further interest in exploring the utility of latent variable models for studying networks in political science.

[^21]: Additionally, in most political science applications, we are interested in how actors behave towards each other over time. Accounting for repeated interaction within AME can be done by including time-dependent regression terms such as lags of the dependent variable or simply time-varying regression parameters.

[^22]: Subsequent to estimation, the **D** matrix is absorbed into the calculation for $\textbf{V}$ as we iterate through $K$.

[^23]: For example, actors from industry and business are assigned a color of blue, and the direction of $\hat{u}_{i}$ for these actors is shown in light blue and that of $\hat{v}_{i}$ in dark blue.

[^24]: See @morris:etal:2008 for details on each of these parameters. If one were to examine goodness of fit in the [[ergm]{}]{} package, these parameters would be calculated by default.

[^25]: Calculation of the incoming k-star statistic is not currently supported by the [[latentnet]{}]{} package.

[^26]: When using the [[amen]{}]{} package, the SRM portion of the model will be included by default.
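The AUC (ROC) and AUC (PR) comparisons referenced in these notes can be computed with standard tools. The following is a minimal sketch in Python using scikit-learn, not the replication code itself; the names `y_true`, `p_lfm`, and `p_lsm` are illustrative placeholders for the held-out tie indicators and the two models' predicted probabilities.

```python
# Minimal sketch of the AUC (ROC) / AUC (PR) comparison described above.
# Assumes y_true (0/1 held-out ties) and p_lfm, p_lsm (predicted probabilities)
# are one-dimensional arrays over the same dyads; the names are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score


def compare_models(y_true, p_lfm, p_lsm):
    """Return AUC (ROC) and AUC (PR) for the LFM and LSM predictions."""
    results = {}
    for name, p in [("LFM", p_lfm), ("LSM", p_lsm)]:
        results[name] = {
            "AUC (ROC)": roc_auc_score(y_true, p),
            # average precision is the usual summary of the precision-recall curve
            "AUC (PR)": average_precision_score(y_true, p),
        }
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)                  # stand-in for observed ties
    noise = rng.normal(scale=0.25, size=1000)
    p_strong = np.clip(0.25 + 0.50 * y + noise, 0, 1)  # a sharper predictor
    p_weak = np.clip(0.35 + 0.30 * y + noise, 0, 1)    # a weaker predictor
    print(compare_models(y, p_strong, p_weak))
```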
---
abstract: 'We give a new proof of the theorem of Krstić–McCool from the title. Our proof has potential applications to the study of finiteness properties of other subgroups of $\mathrm{SL}_2$ resulting from rings of functions on curves.'
address:
- |
    Department of Mathematics\
    University of Virginia\
    PO Box 400137\
    Charlottesville VA 22094-4137\
    USA
- |
    Mathematics Department\
    Yale University\
    PO Box 208283\
    New Haven CT 06520-8283\
    USA
author:
- 'Kai-Uwe Bux'
- Kevin Wortman
bibliography:
- 'link.bib'
title: 'A geometric proof that $\mathrm{SL}_2(\mathbb{Z}[t,t^{-1}])$ is not finitely presented'
---

Introduction
============

Our main result is a strengthening of the theorem of Krstić–McCool from the title.

\[propa\] The group $\SlOf[\Two]{\IntLaurent}$ is not finitely presented; indeed, it is not even of type [FP${}_{\Two}$]{}.

It will be clear from our proof that $\TheIntegers$ can be replaced in the theorem above with any ring of integers in an algebraic number field. Note that the theorem of Krstić–McCool [@KMcC97] also allows for this replacement as well as for many other generalizations of the ring $\IntLaurent$, which include in particular any ring of the form ${\TheIntDomain}[\TheVariable,\TheVariable[][-\One]]$ where $\TheIntDomain$ is an integral domain.

Let us recall the definition of type [FP${}_{\Two}$]{}.

**Type** A group $\AbstractGroup$ is of [type [FP${}_{\TheType}$]{}]{} if $\TheIntegers$, regarded as a $\TheIntegers\AbstractGroup$–module via the trivial action, admits a partial projective resolution $$\TheProjective[\TheType] \rightarrow \TheProjective[\TheType-\One] \rightarrow\cdots\rightarrow \TheProjective[\One] \rightarrow \TheProjective[\Zero] \rightarrow \TheIntegers \rightarrow \Zero$$ by finitely generated $\TheIntegers\AbstractGroup$–modules $\TheProjective[\TheIndex]$.

Every group is of type [FP${}_{\Zero}$]{}. Type [FP${}_{\One}$]{} is equivalent to the property of finite generation. Every finitely presented group is of type [FP${}_{\Two}$]{}, but Bestvina–Brady showed the converse does not hold in general [@BeBr97 Example 6.3(3)].

**Purpose** In [@BuWo04], we studied finiteness properties of subgroups of linear reductive groups arising from rings of functions on algebraic curves defined over finite fields. For example, we showed that $\Sl_{\TheMatrixSize}\bigl(\GaloisField_{\ThePrimePower}[\TheVariable]\bigr)$ is not of type [FP${}_{\TheMatrixSize-\One}$]{} and $\Sl_{\TheMatrixSize}\bigl(\GaloisField_{\ThePrimePower}[\TheVariable,\TheVariable[][-\One]]\bigr)$ is not of type [FP${}_{\Two(\TheMatrixSize-\One)}$]{} where $\GaloisField[\ThePrimePower]$ is a finite field.

We wrote this paper to show how the techniques in [@BuWo04] might be applied to a more general class of groups. In this paper we stripped down the general proof of the main result from [@BuWo04] to the special case of showing that $\Sl_{\Two}\bigl(\GaloisField_{\ThePrimePower}[\TheVariable,\TheVariable[][-\One]]\bigr)$ is not of type [FP${}_{\Two}$]{}, and then made some modest alterations until we arrived at the proof of the main theorem presented below.
It seems likely that more results along these lines can be proved, but it is not clear to us how much the results in [@BuWo04] can be generalized. Below we phrase a question that seems like a good place to start.

**Rings of functions on curves** Let $\TheCurve$ be an irreducible smooth projective curve defined over an algebraically closed field $\TheField$. We let ${\TheFieldOf{\TheCurve}}$ be the field of rational functions defined on $\TheCurve$, and we denote the set of nonzero elements of this field by $\UnitsOf{{\TheFieldOf{\TheCurve}}}$. For each point $\TheCurvePoint\in\TheCurve$, there is a discrete valuation $\TheValuation[\TheCurvePoint] \mapcolon \UnitsOf{{\TheFieldOf{\TheCurve}}} \rightarrow \TheIntegers$ that assigns to any nonzero function $\TheFunction$ on $\TheCurve$ its vanishing order at $\TheCurvePoint$. Formally, we extend $\TheValuation[\TheCurvePoint]$ to all of ${\TheFieldOf{\TheCurve}}$ by $\TheValuationOf[\TheCurvePoint]{\Zero} := \infty$.

We let $\ThePrimeSet[\One],\ThePrimeSet[\Two],\ldots, \ThePrimeSet[\TheNumberOfPlaces]\subseteq\TheCurve$ be pairwise disjoint, finite, nonempty sets of closed points in $\TheCurve$. We call a ring $\TheRing{\leq}{\TheFieldOf{\TheCurve}}$ containing some nonconstant function and the constant function $\One$ an [$\TheNumberOfPlaces$–place ring]{} if the following two conditions are satisfied:

\[cond:smooth\] For all $\TheFunction\in\TheRing$ and all $\TheCurvePoint\in\TheCurve{-}\Parentheses{ \Union[\TheIndex=\One][\TheNumberOfPlaces]{ \ThePrimeSet[\TheIndex] } }$, we have $\TheValuationOf[\TheCurvePoint]{\TheFunction} \geq \Zero$.

\[cond:poles\] If there is an $\TheIndex$, an $\TheCurvePoint\in\ThePrimeSet[\TheIndex]$, and an $\TheFunction\in\TheRing$ such that $\TheValuationOf[\TheCurvePoint]{\TheFunction} < \Zero$, then $\TheValuationOf[\AltCurvePoint]{\TheFunction} < \Zero$ for all $\AltCurvePoint\in\ThePrimeSet[\TheIndex]$.

For example, if ${{\mathbb{P}}^{\One}}$ is the projective line, then $\TheFieldOf{{{\mathbb{P}}^{\One}}}$ is isomorphic to the field $\TheFieldOf{\TheVariable}$ of rational functions in one variable. Thus, if $\TheIntDomain$ is a subring of $\TheField$, then $\TheIntDomainAd{\TheVariable} {\leq}\TheFieldOf{{{\mathbb{P}}^{\One}}}$ is a $\One$–place ring with $\ThePrimeSet[\One]=\SetOf{\infty}$, while $J[\TheVariable,\TheVariable[][-\One]]$ is a $\Two$–place ring with $\ThePrimeSet[\One]=\SetOf{\infty}$ and $\ThePrimeSet[\Two]=\SetOf{\Zero}$. For an example of a $\One$–place ring $\TheRing$ that obeys condition 2 nontrivially, we can take $\TheRing = \mathbb{Z}\bigl[\frac{\One}{\TheVariable[][\Two]-\Two}\bigr] {\leq}\CccOf{\TheVariable}$ with $\ThePrimeSet[\One] = \bigl\{\sqrt{\Two},-\sqrt{\Two}\bigr\}$. Note that the definition of an $\TheNumberOfPlaces$–place ring is a generalization of the definition of a ring of $\ThePrimeSet$–integers of a global function field.

**Finiteness properties of linear groups** We ask the following question:

\[quesb\] Is there an example of an $\TheNumberOfPlaces$–place ring $\TheRing$ such that $\SlOf[\TheMatrixSize]{\TheRing}$ is of type [FP${}_{\TheNumberOfPlaces\Parentheses{\TheMatrixSize-\One}}$]{}? Specifically, is there an $\TheMatrixSize\geq \Two$ such that $\SlOf[\TheMatrixSize]{\IntPoly}$ is of type [FP${}_{\TheMatrixSize-\One}$]{} or such that $\SlOf[\TheMatrixSize]{\IntLaurent}$ is of type [FP${}_{\Two\Parentheses{\TheMatrixSize-\One}}$]{}?
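For instance, the example $J[\TheVariable,\TheVariable[][-\One]]$ above can be checked directly against the definition with $\ThePrimeSet[\One]=\SetOf{\infty}$ and $\ThePrimeSet[\Two]=\SetOf{\Zero}$: writing a nonzero element as
$$\TheFunction = \Sum[\TheIndex=m][M]{ \TheCoefficient[\TheIndex] \TheVariable[][\TheIndex] }, \qquad \TheCoefficient[m]\neq\Zero\neq\TheCoefficient[M],$$
one has $\TheValuationOf[\TheCurvePoint]{\TheFunction}\geq\Zero$ at every closed point $\TheCurvePoint$ other than $\Zero$ and $\infty$, while $\TheValuationOf[\Zero]{\TheFunction}=m$ and $\TheValuationOf[\infty]{\TheFunction}=-M$; and since each $\ThePrimeSet[\TheIndex]$ is a singleton, condition \[cond:poles\] holds automatically. (This verification is spelled out here only for orientation; the symbols $m$ and $M$ denote the lowest and highest exponents occurring in $\TheFunction$.)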
There seems to be no known example as above, though relatively few candidates have been examined for this property. Krstić–McCool [@KMcC97; @KMcC99] proved that $\SlOf[\Two]{J[\TheVariable,\TheVariable[][-\One]]}$ and $\SlOf[\Three]{ \TheIntDomainAd{ \TheVariable } }$ are not finitely presented for any integral domain $\TheIntDomain$. In [@BuWo04], we prove that there exist no examples when $\TheRing$ is a ring of $\ThePrimeSet$–integers of a global function field. Examples of such rings include $\GaloisFieldAd[\ThePrimePower]{\TheVariable}$ and $\GaloisField_{\ThePrimePower}[\TheVariable,\TheVariable[][-\One]]$.

We also know that there are no examples as asked for in the question above when $\TheNumberOfPlaces=\One$ and $\TheMatrixSize=\Two$. We give a proof of this fact in the final section. This is an easy result, but as this general problem has not been studied extensively, it appears not to have been stated in this form in the literature.

**About the proof** Our proof of the main theorem is geometric in that it employs the action of $\SlOf[\Two]{\IntLaurent}$ on a product of two Bruhat–Tits trees. It is essentially a special case of our proof that arithmetic subgroups of $\Sl[\TheMatrixSize]$ over global function fields are not of type [FP${}_{\infty}$]{} [@BuWo04]. The proof uses a result of K. Brown’s which requires the action to have “nice” stabilizers. Unfortunately, the stabilizer types of $\SlOf[\Two]{\TheRing}$ are unknown to us for many of the more interesting $\Two$–place rings $\TheRing$. This prevents us from applying our proof to groups other than $\Sl_{\Two}\bigl({\IntRing}[\TheVariable,\TheVariable[][-\One]]\bigr)$ where $\IntRing$ is the ring of integers in an algebraic number field.

**Other finiteness properties** As an aside, we point out a few loosely related facts. In [@KMcC99], Krstić–McCool showed that $\SlOf[\Three]{\TheIntDomainAd{\TheVariable}}$ is not finitely presented for any integral domain $\TheIntDomain$. Suslin proved in [@Susl77] that $\SlOf[\TheMatrixSize]{\IntPoly}$ and $\SlOf[\TheMatrixSize]{\IntLaurent}$ are finitely generated by elementary matrices when $\TheMatrixSize\geq\Three$. It is not known whether $\SlOf[\Two]{\IntLaurent}$ is also generated by elementary matrices. In fact, even finite generation is an open problem for this group.

**Homology** Our proof of the main theorem can be seen as a variant of Stuhler’s proof [@Stuh80] that $\SlOf[\Two]{\GaloisField_{\ThePrimePower} [\TheVariable,\TheVariable[][-\One]]}$ is not of type [FP${}_{\Two}$]{}. As Stuhler’s proof establishes the stronger fact that the second homology $\HomologyOf[\Two]{ \SlOf[\Two]{\GaloisField_{\ThePrimePower}[\TheVariable,\TheVariable[][-\One]]};\TheIntegers}$ is infinitely generated, it is natural to wonder if the proof given below can be extended to show that $\HomologyOf[\Two]{ \SlOf[\Two]{\IntLaurent} ; \TheIntegers }$ is infinitely generated.

**Type** We will not use type [F${}_{\TheType}$]{} in this paper, but as it is related to type [FP${}_{\TheType}$]{}, we recall its definition here. A group $\AbstractGroup$ is of [type [F${}_{\TheType}$]{}]{} if there exists an Eilenberg–MacLane complex ${\operatorname{K}(\AbstractGroup,1)}$ with finite $\TheType$–skeleton. For $\TheType\geq\Two$, a group is of type [F${}_{\TheType}$]{} if and only if it is finitely presented and of type [FP${}_{\TheType}$]{}. In general, type [F${}_{\TheType}$]{} is stronger than type [FP${}_{\TheType}$]{}.
**Outline of the paper** In the next section, we present the main body of the proof of the main theorem, leaving the verification that cell stabilizers are well-behaved for the section that follows it. In the final section, we comment on the question posed above.

**Acknowledgments** We thank Benson Farb and Karen Vogtmann for suggesting that we should explore this direction. We thank Roger Alperin and Kevin P. Knudson for helpful conversations. We also thank the referee for suggesting some improvements to the paper.

The action on a product of trees {#sec:geometry}
================================

Let $\TheValuation[\TheRoot]$ be the degree valuation on $\TheNumberFieldOf{\TheVariable}$ given by
$$\TheValuationOf[\TheRoot]{ \frac{ \ThePolynomialOf{\TheVariable} }{ \AltPolynomialOf{\TheVariable} } } = \DegOf{\AltPolynomialOf{\TheVariable}} - \DegOf{\ThePolynomialOf{\TheVariable}},$$
and let $\TheValuation[\AltRoot]$ be the valuation at $\Zero$, that is, the valuation corresponding to the irreducible polynomial $\TheVariable\in\TheNumberFieldAd{\TheVariable}$. Thus
$$\TheValuationOf[\AltRoot]{ \frac{ \ThePolynomialOf{\TheVariable} }{ \AltPolynomialOf{\TheVariable} } \TheVariable[][\TheExponent] } = \TheExponent$$
if $\TheVariable$ divides neither $\ThePolynomialOf{\TheVariable}$ nor $\AltPolynomialOf{\TheVariable}$.

Let $\TheTree[\TheRoot]$ (resp. $\TheTree[\AltRoot]$) be the Bruhat–Tits tree associated to $\SlOf[\Two]{\TheNumberFieldOf{\TheVariable}}$ with the valuation $\TheValuation[\TheRoot]$ (resp. $\TheValuation[\AltRoot]$). We consider these trees as metric spaces by assigning a length of $\One$ to each edge. For a definition as well as for many of the facts we will use in this proof, we refer to Serre’s book on trees [@Serr77].

**Outline** We put
$$\AffBuild := \TheTree[\TheRoot] {\times}\TheTree[\AltRoot],$$
and we let $\SlOf[\Two]{\IntLaurent}$ act diagonally on $\AffBuild$. We will begin by finding an $\SlOf[\Two]{\IntLaurent}$–invariant cocompact subspace $\TheSubspace[\Zero]\subseteq \AffBuild$. Then for each $\MatrixIndex \in {\mathbb{N}}$, we will construct a $\One$–cycle $\TheLoop[\MatrixIndex]$ in $\TheSubspace[\Zero]$ with the property that for any $\SlOf[\Two]{\IntLaurent}$–invariant cocompact subspace $\AltSubspace\subseteq\AffBuild$ containing $\TheSubspace[\Zero]$, there exists some $\MatrixIndex\in{\mathbb{N}}$ such that $\TheLoop[\MatrixIndex]$ represents a nontrivial element of the first homology group $\HomologyOf[\One]{\AltSubspace}$. A direct application of K. Brown’s filtration criterion then shows that $\SlOf[\Two]{\IntLaurent}$ is not of type [FP${}_{\Two}$]{} as long as the cell stabilizers of the $\SlOf[\Two]{\IntLaurent}$–action on $\AffBuild$ are of type [FP${}_{\Two}$]{}. We leave the verification of this last fact for the next section.
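For orientation, the two valuations take the following values on a few sample elements; these follow directly from the defining formulas above:
$$\TheValuationOf[\TheRoot]{\TheVariable} = -\One,\qquad \TheValuationOf[\AltRoot]{\TheVariable} = \One,\qquad \TheValuationOf[\TheRoot]{\TheVariable[][\Three]+\One} = -\Three,\qquad \TheValuationOf[\AltRoot]{\TheVariable[][\Three]+\One} = \Zero,\qquad \TheValuationOf[\TheRoot]{\TheVariable[][-\Two]} = \Two,\qquad \TheValuationOf[\AltRoot]{\TheVariable[][-\Two]} = -\Two.$$
In particular, $\TheVariable$ has opposite signs under the two valuations, a feature that is used repeatedly in the construction below.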
**Finding a cocompact subspace** A crucial part of our construction will take place in a flat plane inside $\AffBuild$, which we shall describe now. Let $\ValuationRing[\TheRoot]{\leq}\TheNumberFieldOf{\TheVariable}$ be the valuation ring associated to $\TheValuation[\TheRoot]$, that is, the ring of all $\TheFunction\in\TheNumberFieldOf{\TheVariable}$ with $\TheValuationOf[\TheRoot]{\TheFunction}\geq\Zero$. Let $\TheLine[\TheRoot] \subseteq \TheTree[\TheRoot]$ be the unique bi-infinite geodesic stabilized by the diagonal subgroup of $\SlOf[\Two]{\TheNumberFieldOf{\TheVariable}}$. We parameterize $\TheLine[\TheRoot]$ by an isometry $\TheIsometry[\TheRoot] \mapcolon {\mathbb{R}}\rightarrow\TheLine[\TheRoot]$ such that $\TheIsometryOf[\TheRoot]{\Zero}$ is the unique vertex stabilized by $\SlOf[\Two]{\ValuationRing[\TheRoot]}$ and such that the end corresponding to the positive reals is fixed by all upper triangular matrices in $\SlOf[\Two]{\TheNumberFieldOf{\TheVariable}}$. Analogously, we define $\TheIsometry[\AltRoot] \mapcolon {\mathbb{R}}\rightarrow\TheLine[\AltRoot]$. The plane we shall consider is the product
$$\TheLine[\TheRoot]{\times}\TheLine[\AltRoot].$$

We define a diagonal matrix $\DiagMatrix\in \SlOf[\Two]{\IntLaurent}$ by
$$\DiagMatrix := \begin{pmatrix} \TheUnit & \Zero\\ \Zero & \TheUnit[][-\One] \end{pmatrix}.$$
Note that for any $\MatrixExponent\in\TheIntegers$, we have
$$\DiagMatrix[][\MatrixExponent] {\cdot}{( \TheIsometryOf[\TheRoot]{\Zero} , \TheIsometryOf[\AltRoot]{\Zero} )} = {( \TheIsometryOf[\TheRoot]{\Two\MatrixExponent} , \TheIsometryOf[\AltRoot]{-\Two\MatrixExponent} )}.$$
Hence, if we denote by $\AntiDiagonal$ the line in $\TheLine[\TheRoot]{\times}\TheLine[\AltRoot]$ of the form $\SetFamOf[\TheReal\in{\mathbb{R}}]{ {( \TheIsometryOf[\TheRoot]{\TheReal} , \TheIsometryOf[\AltRoot]{-\TheReal} )} }$, then $\AntiDiagonal$ has a compact image under the quotient map
$$\TheProjection \mapcolon \AffBuild \longrightarrow \AffBuild / \SlOf[\Two]{\IntLaurent}.$$
Note that
$$\TheSubspace[\Zero] := \TheProjectionOf[][-\One]{ \TheProjectionOf{ \AntiDiagonal } } \subseteq \AffBuild$$
is an $\SlOf[\Two]{\IntLaurent}$–invariant cocompact subspace.
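The displayed action of $\DiagMatrix$ can be checked against the standard description of how diagonal matrices act on a Bruhat–Tits tree (see [@Serr77]): a matrix $\begin{pmatrix} a & \Zero\\ \Zero & a^{-\One} \end{pmatrix}$ acts on the corresponding apartment as a translation of length $\Two\lvert\TheValuationOf[\TheRoot]{a}\rvert$, respectively $\Two\lvert\TheValuationOf[\AltRoot]{a}\rvert$. Since $\TheValuationOf[\TheRoot]{\TheUnit}=-\One$ and $\TheValuationOf[\AltRoot]{\TheUnit}=\One$, each power of $\DiagMatrix$ shifts the parameter on each of $\TheLine[\TheRoot]$ and $\TheLine[\AltRoot]$ by $\Two$, and the opposite signs of the two valuations account for the opposite directions in the displayed identity; in the chosen parameterizations this reads
$$\DiagMatrix[][\MatrixExponent]{\cdot}\TheIsometryOf[\TheRoot]{\TheReal} = \TheIsometryOf[\TheRoot]{\TheReal+\Two\MatrixExponent}, \qquad \DiagMatrix[][\MatrixExponent]{\cdot}\TheIsometryOf[\AltRoot]{\TheReal} = \TheIsometryOf[\AltRoot]{\TheReal-\Two\MatrixExponent}.$$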
**A family of loops in $\TheSubspace[\Zero]$** For any $\MatrixIndex\in\TheIntegers$, we define the unipotent matrix $\UnipMatrix[\MatrixIndex] \in \SlOf[\Two]{\IntLaurent}$ as
$$\UnipMatrix[\MatrixIndex] = \begin{pmatrix} \One & \TheUnit[][\MatrixIndex]\\ \Zero & \One \end{pmatrix}.$$
Note that $\UnipMatrix[\MatrixIndex]$ fixes a point of the form ${( \TheIsometryOf[\TheRoot]{\TheReal} , \TheIsometryOf[\AltRoot]{\AltReal} )} \in \TheLine[\TheRoot]{\times}\TheLine[\AltRoot]$ if and only if $\TheReal \geq \MatrixIndex$ and $\AltReal\geq -\MatrixIndex$. Moreover, any points in the plane $\TheLine[\TheRoot]{\times}\TheLine[\AltRoot]$ that are not fixed by $\UnipMatrix[\MatrixIndex]$ are moved outside of $\TheLine[\TheRoot]{\times}\TheLine[\AltRoot]$.

For all $\MatrixIndex\in{\mathbb{N}}$, we define the geodesic segment $\TheSegment[\MatrixIndex] \subseteq \AntiDiagonal$ to be the segment with endpoints ${( \TheIsometryOf[\TheRoot]{-\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )}$ and ${( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{-\MatrixIndex} )}$. Note that $\UnipMatrix[\MatrixIndex]$ fixes the endpoint of $\TheSegment[\MatrixIndex]$ given by ${( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{-\MatrixIndex} )}$ whereas $\UnipMatrix[-\MatrixIndex]$ fixes its other endpoint ${( \TheIsometryOf[\TheRoot]{-\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )}$.
Since $\UnipMatrix[\MatrixIndex]$ and $\UnipMatrix[-\MatrixIndex]$ commute, the union of geodesic segments
$$\TheLoop[\MatrixIndex] := \TheSegment[\MatrixIndex] \union \Parentheses{ \UnipMatrix[\MatrixIndex] {\cdot}\TheSegment[\MatrixIndex] } \union \Parentheses{ \UnipMatrix[-\MatrixIndex] {\cdot}\TheSegment[\MatrixIndex] } \union \Parentheses{ \UnipMatrix[\MatrixIndex] \UnipMatrix[-\MatrixIndex] {\cdot}\TheSegment[\MatrixIndex] }$$
is a loop. Note that $\TheLoop[\MatrixIndex] \subseteq \TheSubspace[\Zero]$.

**How the loops can be filled** It is easy to describe a filling disc for $\TheLoop[\MatrixIndex]$ in $\AffBuild$. Just let $\TheTriangle[\MatrixIndex]$ be the closed triangle with geodesic sides and vertices at the endpoints of $\TheSegment[\MatrixIndex]$ and at the point ${( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )}$, which is fixed by both $\UnipMatrix[\MatrixIndex]$ and $\UnipMatrix[-\MatrixIndex]$. Then we define $\TheCone[\MatrixIndex]$ to be the union of triangles
$$\TheCone[\MatrixIndex] := \TheTriangle[\MatrixIndex] \union \Parentheses{ \UnipMatrix[\MatrixIndex] {\cdot}\TheTriangle[\MatrixIndex] } \union \Parentheses{ \UnipMatrix[-\MatrixIndex] {\cdot}\TheTriangle[\MatrixIndex] } \union \Parentheses{ \UnipMatrix[\MatrixIndex] \UnipMatrix[-\MatrixIndex] {\cdot}\TheTriangle[\MatrixIndex] }.$$
Since $\AffBuild$ is a $\Two$–complex, it does not allow for simplicial $\Three$–chains (using any appropriate simplicial decomposition of $\AffBuild$). Since $\AffBuild$ is contractible, it follows that there are no nontrivial simplicial $\Two$–cycles. Hence, there is a unique $\Two$–chain bounding $\TheLoop[\MatrixIndex]$, and this consists of the simplices forming $\TheCone[\MatrixIndex]$. Since ${( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )} \in \TheCone[\MatrixIndex]$, we have:

\[essential\_loop\] Each loop $\TheLoop[\MatrixIndex]\subseteq\TheSubspace[\Zero]$ represents a nontrivial class in the first homology group of $\AffBuild {-}\SetOf{ {( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )} }$.

Note how our proof relies on the commutator relations $\UnipMatrix[\MatrixIndex] \UnipMatrix[-\MatrixIndex] = \UnipMatrix[-\MatrixIndex] \UnipMatrix[\MatrixIndex]$ that were also essential in the argument of Krstić–McCool [@KMcC97].
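As a cross-check, the fixed-point criterion recorded above for the matrices $\UnipMatrix[\MatrixIndex]$ confirms that the cone point used in this construction is fixed by both unipotent matrices: for the point ${( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )}$ one checks
$$\MatrixIndex\geq\MatrixIndex \ \text{and}\ \MatrixIndex\geq-\MatrixIndex \quad(\text{fixed by } \UnipMatrix[\MatrixIndex]),\qquad \MatrixIndex\geq-\MatrixIndex \ \text{and}\ \MatrixIndex\geq\MatrixIndex \quad(\text{fixed by } \UnipMatrix[-\MatrixIndex]),$$
which hold for every $\MatrixIndex\in{\mathbb{N}}$.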
**An unbounded sequence in the quotient** We will need to know that the points ${( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )}$ move farther and farther away from $\TheSubspace[\Zero]$. We will use this to show that for any $\SlOf[\Two]{\IntLaurent}$–invariant cocompact subspace $\AltSubspace \subseteq \AffBuild$ containing $\TheSubspace[\Zero]$, there exists some $\MatrixIndex \in {\mathbb{N}}$ such that $\TheLoop[\MatrixIndex]$ represents a nontrivial element of the first homology group $\HomologyOf[\One]{\AltSubspace}$. Actually, it suffices to prove our claim for “half of the points”:

\[escape\] The sequence $\SetFamOf[\MatrixIndex\in{\mathbb{N}}]{ \TheProjectionOf{ {( \TheIsometryOf[\TheRoot]{\Two\MatrixIndex} , \TheIsometryOf[\AltRoot]{\Two\MatrixIndex} )} } }$ is unbounded in the quotient space $\AffBuild/\SlOf[\Two]{\IntLaurent}$.

Note that $\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} } {\times}\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} }$ acts on $\TheTree[\TheRoot] {\times}\TheTree[\AltRoot]$ componentwise and recall that the valuations $\TheValuation[\TheRoot]$ and $\TheValuation[\AltRoot]$ define a metric on $\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} } {\times}\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} }$ so that vertex stabilizers are bounded subgroups. Thus, to prove that a set of vertices in the quotient $\AffBuild/\SlOf[\Two]{\IntLaurent}$ is not bounded, it suffices to prove that it has an unbounded preimage under the canonical projection
$$\bigl(\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} } {\times}\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} }\bigr) / \SlOf[\Two]{\IntLaurent} \longrightarrow \AffBuild / \SlOf[\Two]{\IntLaurent}$$
where $\SlOf[\Two]{\IntLaurent}$ is embedded diagonally in $\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} } {\times}\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} }$.
Put $\MatrixPair := {(\DiagMatrix,\DiagMatrix[][-\One])} \in \SlOf[\Two]{\TheNumberFieldOf{\TheVariable}} {\times}\SlOf[\Two]{\TheNumberFieldOf{\TheVariable}}$, and observe that
$$\MatrixPair[][\MatrixIndex]{\cdot}{( \TheIsometryOf[\TheRoot]{\Zero} , \TheIsometryOf[\AltRoot]{\Zero} )} = {( \TheIsometryOf[\TheRoot]{\Two\MatrixIndex} , \TheIsometryOf[\AltRoot]{\Two\MatrixIndex} )}.$$
As we have argued, it suffices to prove that the sequence $\SlOf[\Two]{\IntLaurent} \MatrixPair[][\MatrixExponent]$ is unbounded in $\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} } {\times}\SlOf[\Two]{ \TheNumberFieldOf{\TheVariable} }$ modulo $\SlOf[\Two]{\IntLaurent}$. So assume, for a contradiction, that this sequence is bounded. By definition, this means that there is a global constant $\TheValuationBound$ satisfying the following condition:

> For any $\MatrixExponent\in{\mathbb{N}}$, there is a matrix $\TheMatrix[\MatrixExponent] = \begin{pmatrix} \OneOne[\MatrixExponent] & \OneTwo[\MatrixExponent]\\ \TwoOne[\MatrixExponent] & \TwoTwo[\MatrixExponent] \end{pmatrix} \in \SlOf[\Two]{\IntLaurent}$ such that the values of $\TheValuation[\TheRoot]$ on the coefficients of $\TheMatrix[\MatrixExponent] \DiagMatrix[][\MatrixExponent]$ are bounded from below by $\TheValuationBound$ and the values of $\TheValuation[\AltRoot]$ on the coefficients of $\TheMatrix[\MatrixExponent] \DiagMatrix[][-\MatrixExponent]$ are also bounded from below by $\TheValuationBound$.

Recall that $\DiagMatrix = \begin{pmatrix} \TheUnit & \Zero\\ \Zero & \TheUnit[][-\One] \end{pmatrix}$ and that $\TheValuationOf[\TheRoot]{\TheUnit} =-\One$ whereas $\TheValuationOf[\AltRoot]{\TheUnit} =\One$.
Since
$$\TheValuationBound \leq \TheValuationOf[\TheRoot]{ \OneOne[\MatrixExponent] \TheUnit[][\MatrixExponent] } = \TheValuationOf[\TheRoot]{ \OneOne[\MatrixExponent] } + \MatrixExponent \TheValuationOf[\TheRoot]{\TheUnit} = \TheValuationOf[\TheRoot]{ \OneOne[\MatrixExponent] } - \MatrixExponent$$
and
$$\TheValuationBound \leq \TheValuationOf[\AltRoot]{ \OneOne[\MatrixExponent] \TheUnit[][-\MatrixExponent] } = \TheValuationOf[\AltRoot]{ \OneOne[\MatrixExponent] } - \MatrixExponent \TheValuationOf[\AltRoot]{\TheUnit} = \TheValuationOf[\AltRoot]{ \OneOne[\MatrixExponent] } - \MatrixExponent,$$
we find that $\TheValuationOf[\TheRoot]{ \OneOne[\MatrixExponent] } \geq\One$ and $\TheValuationOf[\AltRoot]{ \OneOne[\MatrixExponent] } \geq \One$ whenever $\MatrixExponent\geq\One-\TheValuationBound$, which implies $\OneOne[\MatrixExponent]=\Zero$ (an element of $\IntLaurent$ with positive $\TheValuation[\AltRoot]$–value involves only positive powers of $\TheVariable$, while positive $\TheValuation[\TheRoot]$–value allows only negative powers, so only $\Zero$ satisfies both). However, the same argument shows $\TwoOne[\MatrixExponent]=\Zero$ for $\MatrixExponent\geq\One-\TheValuationBound$. But then $\TheMatrix[\One-\TheValuationBound]\not\in \SlOf[\Two]{\IntLaurent}$, a contradiction.

**Brown’s criterion** The following is an immediate consequence of [@Bro87a Theorem 2.2].

\[Brown\] Suppose a group $\AbstractGroup$ acts by cell-permuting homeomorphisms on a contractible CW–complex $\TheComplex$ such that stabilizers of $\TheDim$–cells are of type [FP${}_{\TheType+\One-\TheDim}$]{}. Assume that $\TheComplex$ admits a filtration
$$\TheComplex[\Zero] \subseteq\TheComplex[\One] \subseteq\TheComplex[\Two] \subseteq \cdots \subseteq \TheComplex = \Union[\FiltrIndex\in{\mathbb{N}}]{\TheComplex[\FiltrIndex]}$$
by $\AbstractGroup$–invariant, cocompact subcomplexes $\TheComplex[\FiltrIndex]$. Then $\AbstractGroup$ is not of type [FP${}_{\TheType+\One}$]{} if each of the reduced homology homomorphisms
$$\RedHomologyOf[\TheType]{\TheComplex[\Zero]} \longrightarrow \RedHomologyOf[\TheType]{\TheComplex[\FiltrIndex]}$$
is nontrivial.

In the following section, we will verify that cell stabilizers of the $\SlOf[\Two]{\IntLaurent}$–action on $\AffBuild$ are of type [FP${}_{\infty}$]{}. Assuming this hypothesis for the moment, we can finish the proof of the main theorem as follows: The family of loops $\TheLoop[\TheNumber]$ is contained within the cocompact subspace $\TheSubspace[\Zero]$, which is a subcomplex of (a suitable subdivision of) $\AffBuild$. Since the quotient ${ \AffBuild / \SlOf[\Two]{\IntLaurent} }$ has countably many cells, we can extend $\TheSubspace[\Zero]$ to a filtration
$$\TheSubspace[\Zero] \subseteq \TheSubspace[\One] \subseteq \TheSubspace[\Two] \subseteq \TheSubspace[\Three] \subseteq \cdots \subseteq \AffBuild$$
of $\AffBuild$ by $\SlOf[\Two]{\IntLaurent}$–invariant, cocompact subcomplexes $\TheSubspace[\FiltrIndex]$.
By the unboundedness lemma above, for each index $\FiltrIndex$ there is a natural number $\MatrixIndex$ such that
$$\TheSubspace[\FiltrIndex] \subseteq \AffBuild {-}\SetOf{ {( \TheIsometryOf[\TheRoot]{\MatrixIndex} , \TheIsometryOf[\AltRoot]{\MatrixIndex} )} }.$$
Therefore, by the observation on essential loops, $\TheLoop[\MatrixIndex]$ represents a nontrivial class in $\RedHomologyOf[\One]{\TheSubspace[\FiltrIndex]}$, thus showing that
$$\RedHomologyOf[\One]{\TheSubspace[\Zero]} \longrightarrow \RedHomologyOf[\One]{\TheSubspace[\FiltrIndex]}$$
is nontrivial. By Brown’s criterion, $\SlOf[\Two]{\IntLaurent}$ is not of type [FP${}_{\Two}$]{}.

Finiteness properties of cell stabilizers {#sec:stabilizers}
=========================================

It remains to verify the hypothesis about cell stabilizers. Borel and Serre [@BoSe73 11.1] have shown that arithmetic groups are of type [F${}_{\infty}$]{}. Therefore, the following lemma proves what we need, and more:

\[stabilizers\] The cell stabilizers of the $\SlOf[\Two]{\IntLaurent}$–action on $\AffBuild$ are arithmetic groups.

This section is devoted entirely to the proof of this lemma. The set $\TheBasis := \SetOf[{ \TheUnit[][\TheExponent] }]{ \TheExponent\in\TheIntegers }$ is a $\TheNumberField$–vector space basis for $\RatLaurent$ such that the subring $\IntLaurent$ consists precisely of those elements in $\RatLaurent$ whose coefficients with respect to $\TheBasis$ are all in $\TheNumberRing$.

**Stabilizers of standard vertices** We fix the following family of [standard vertices]{} in $\AffBuild$. For $\TheVertInd\in{\mathbb{N}}$, put
$$\AffVertex[\TheVertInd] := {( \TheIsometryOf[\TheRoot]{\TheVertInd} , \TheIsometryOf[\AltRoot]{\Zero} )}.$$
Recall that $\SlOf[\Two]{\TheNumberFieldOf{\TheVariable}}$ acts on the tree $\TheTree[\TheRoot]$.
The vertex $\TheIsometryOf[\TheRoot]{\TheVertInd} \in \TheTree[\TheRoot]$ has the stabilizer
$$\SetOf[{ \begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \SlOf[\Two]{\TheNumberFieldOf{\TheVariable}} }]{ \TheValuationOf[\TheRoot]{a}, \TheValuationOf[\TheRoot]{d} \geq \Zero;\,\, \TheValuationOf[\TheRoot]{b} \geq -\TheVertInd;\,\, \TheValuationOf[\TheRoot]{c} \geq \TheVertInd }.$$
Thus, the stabilizer $\StabilizerOf[\RatLaurent]{ \AffVertex[\TheVertInd] }$ of the vertex $\AffVertex[\TheVertInd]$ under the diagonal $\SlOf[\Two]{\RatLaurent}$–action on $\AffBuild = \TheTree[\TheRoot]{\times}\TheTree[\AltRoot]$ is
$$\SetOf[{ \begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \SlOf[\Two]{\RatLaurent} }]{ \begin{array}{l} \TheValuationOf[\TheRoot]{a}, \TheValuationOf[\TheRoot]{d} \geq \Zero;\,\, \TheValuationOf[\TheRoot]{b} \geq -\TheVertInd;\,\, \TheValuationOf[\TheRoot]{c} \geq \TheVertInd \\ \TheValuationOf[\AltRoot]{a}, \TheValuationOf[\AltRoot]{b}, \TheValuationOf[\AltRoot]{c}, \TheValuationOf[\AltRoot]{d} \geq \Zero \end{array} },$$
which is an affine algebraic $\TheNumberField$–group: because of the bounds on the valuations $\TheValuation[\TheRoot]$ and $\TheValuation[\AltRoot]$, each matrix in $\StabilizerOf[\RatLaurent]{ \AffVertex[\TheVertInd] }$ can be considered as a $\Four$–tuple $\TupelOf{a,b,c,d}$ in the finite dimensional $\TheNumberField$–vector space $\TheVectorSpace[\Zero] {\times}\TheVectorSpace[\TheVertInd] {\times}\TheVectorSpace[\wbar{\TheVertInd}] {\times}\TheVectorSpace[\Zero]$ where
$$\TheVectorSpace[\TheVertInd] := \SetOf[{ \Sum[\TheIndex=\Zero][\TheVertInd]{ \TheCoefficient[\TheIndex] \TheVariable[][\TheIndex] } }]{ \TheCoefficient[\TheIndex]\in\TheNumberField },\qquad \TheVectorSpace[\wbar{\TheVertInd}] := \begin{cases} \TheNumberField & \text{for\ } \TheVertInd = \Zero \\ \SetOf{\Zero} & \text{for\ } \TheVertInd > \Zero. \end{cases}$$
The requirement that the determinant be $\One$ translates into a system of algebraic equations defining an affine variety in $\TheVectorSpace[\Zero] {\times}\TheVectorSpace[\TheVertInd] {\times}\TheVectorSpace[\wbar{\TheVertInd}] {\times}\TheVectorSpace[\Zero]$. This variety is an affine $\TheNumberField$–group by means of matrix multiplication. Note that the vector space $\TheVectorSpace[\TheVertInd]$ carries an integral structure: the lattice of integer points is $\bigl\{ \Sum[\TheIndex=\Zero][\TheVertInd]{ \TheCoefficient[\TheIndex] \TheVariable[][\TheIndex] } ~\big|~ \TheCoefficient[\TheIndex] \in \TheNumberRing \bigr\}$. Thus, the stabilizer $\StabilizerOf[\IntLaurent]{ \AffVertex[\TheVertInd] }$ of $\AffVertex[\TheVertInd]$ in $\SlOf[\Two]{\IntLaurent}$ is the arithmetic subgroup
$$\SetOf[{ \begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \SlOf[\Two]{\IntLaurent} }]{ \begin{array}{l} \TheValuationOf[\TheRoot]{a}, \TheValuationOf[\TheRoot]{d} \geq \Zero;\,\, \TheValuationOf[\TheRoot]{b} \geq -\TheVertInd;\,\, \TheValuationOf[\TheRoot]{c} \geq \TheVertInd \\ \TheValuationOf[\AltRoot]{a}, \TheValuationOf[\AltRoot]{b}, \TheValuationOf[\AltRoot]{c}, \TheValuationOf[\AltRoot]{d} \geq \Zero \end{array} }.$$
The idea of the proof is to push this result forward to other vertices.

**Other vertices are translates** We claim that every vertex $\AltAffVertex = {( \AltVertex[\TheRoot] , \AltVertex[\AltRoot] )} \in\AffBuild$ can be written as $\TheMatrix{\cdot}\AffVertex[\TheVertInd]$ for some $\TheMatrix\in\GLnOf{\RatLaurent}$ and some $\TheVertInd\in{\mathbb{N}}\union\SetOf{\Zero}$. To see this, we will use that the ray
$$\TheRay[\AltRoot] := \TheIsometryOf[\AltRoot]{\Zero} \,\textrm{--}\, \TheIsometryOf[\AltRoot]{\One} \,\textrm{--}\, \TheIsometryOf[\AltRoot]{\Two} \,\textrm{--}\, \TheIsometryOf[\AltRoot]{\Three} \,\textrm{--}\, \cdots$$
is a fundamental domain for the action of $\SlOf[\Two]{\RatInvPoly}$ on $\TheTree[\AltRoot]$.
This follows from the discussion in Serre [@Serr77 page 86f] and the fact that $\TheVariable\mapsto\TheVariable[][-\One]$ induces a ring automorphism of $\RatLaurent$ that interchanges $\RatPoly$ and $\RatInvPoly$. The matrix $\begin{pmatrix} \TheUnit[][\AltVertInd] & \Zero\\ \Zero & \One \end{pmatrix}$ translates $\TheIsometryOf[\AltRoot]{\AltVertInd}$ to $\TheIsometryOf[\AltRoot]{\Zero}$ as $\TheVariable$ is a uniformizing element for the valuation $\TheValuation[\AltRoot]$. Thus, within two moves, we can adjust the second coordinate of $\AltAffVertex$ to $\TheIsometryOf[\AltRoot]{\Zero}$.

Now, we consider $\RatPoly$. In this case, the discussion in Serre [@Serr77] applies directly: the ray
$$\TheRay[\TheRoot] := \TheIsometryOf[\TheRoot]{\Zero} \,\textrm{--}\, \TheIsometryOf[\TheRoot]{\One} \,\textrm{--}\, \TheIsometryOf[\TheRoot]{\Two} \,\textrm{--}\, \TheIsometryOf[\TheRoot]{\Three} \,\textrm{--}\, \cdots$$
is a fundamental domain in $\TheTree[\TheRoot]$ for the action of $\SlOf[\Two]{\RatPoly}$. This allows us to adjust the first coordinate. Note that every matrix in $\SlOf[\Two]{\RatPoly}$ fixes the vertex $\TheIsometryOf[\AltRoot]{\Zero} \in \TheTree[\AltRoot]$. Thus, we do not change the second coordinate during the third and final move. We conclude:

\[shape\] Every vertex stabilizer in $\SlOf[\Two]{\IntLaurent}$ is of the form
$$\TheMatrix \StabilizerOf[\RatLaurent]{\AffVertex[\TheVertInd]} \TheMatrix[][-\One] \intersect \SlOf[\Two]{\IntLaurent}$$
for some $\TheVertInd$ and some matrix $\TheMatrix\in\GLnOf{\RatLaurent}$.

We also make the following observation:

\[raw\_variety\] Since multiplication by $\TheMatrix$ can lower valuations only by a bounded amount, we can find $\TheRange\in{\mathbb{N}}$ such that
$$\TheMatrix \StabilizerOf[\RatLaurent]{\AffVertex[\TheVertInd]} \TheMatrix[][-\One] \subseteq \SetOf[{ \begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \SlOf[\Two]{\RatLaurent} }]{ a,b,c,d \in \AltVectorSpace[\TheRange] }$$
where $\AltVectorSpace[\TheRange] := \bigl\{ \Sum[\TheIndex=-\TheRange][\TheRange]{ \TheCoefficient[\TheIndex] \TheVariable[][\TheIndex] } ~\big|~ \TheCoefficient[\TheIndex] \in \TheNumberField \bigr\}$.

**Finite dimensional approximations** We want to use the observation above and argue that $\TheMatrix \StabilizerOf[\RatLaurent]{\AffVertex[\TheVertInd]} \TheMatrix[][-\One]$ is an affine $\TheNumberField$–group with arithmetic subgroup
$$\TheMatrix \StabilizerOf[\RatLaurent]{\AffVertex[\TheVertInd]} \TheMatrix[][-\One] \intersect \SlOf[\Two]{\IntLaurent}.$$
This is accomplished as follows.

\[variety\] Fix $\TheRange\in{\mathbb{N}}$ and let $\AffGroup$ be a $\TheNumberField$–subvariety of the affine $\TheNumberField$–variety $\SetOf[{ \begin{pmatrix} a & b\\ c & d \end{pmatrix} \in \SlOf[\Two]{\RatLaurent} }]{ a,b,c,d \in \AltVectorSpace[\TheRange] }$.
Assume that $\AffGroup$ is a $\TheNumberField$–group with respect to matrix multiplication. Then $\AffGroup \intersect \SlOf[\Two]{\IntLaurent}$ is an arithmetic subgroup of $\AffGroup$.

The integer points in $\AltVectorSpace[\TheRange]$ are $\AltVectorSpace[\TheRange] \intersect \IntLaurent$. Thus the integer points in $\AffGroup$ are $\AffGroup\intersect \SlOf[\Two]{\IntLaurent}$.

We note that the preceding statements imply:

\[arithmetic\] All vertex stabilizers in $\SlOf[\Two]{\IntLaurent}$ are arithmetic groups.

**Extending the argument to cell stabilizers** So far we have argued that vertex stabilizers are arithmetic. To extend this argument to stabilizers of cells of higher dimension, note that the action of $\SlOf[\Two]{\RatLaurent}$ on $\AffBuild$ is type-preserving. Hence the stabilizer of a cell is the intersection of the stabilizers of its vertices. To recognize such a group as arithmetic using the above method, we just have to choose $\TheRange$ large enough to accommodate all the involved vertex stabilizers simultaneously. This concludes the proof of the lemma.

Comments on the question {#sec:finite_generation}
========================

\[sec:comments\] We shall begin by answering the question above when $\TheNumberOfPlaces=\One$ and $\TheMatrixSize=\Two$.

\[prop:finite\_generation\] If $\TheRing$ is a $\One$–place ring, then $\SlOf[\Two]{\TheRing}$ is not finitely generated.

By our hypothesis on $\TheRing$, there is an algebraically closed field $\TheField$ and an irreducible smooth projective curve $\TheCurve$ defined over $\TheField$ such that $\TheRing$ is a subring of the field of rational functions $\TheFieldOf{\TheCurve}$. Let $\ThePrimeSet[\One] \subseteq \TheCurve$ be the finite set of closed points given in the definition of $\TheRing$ as a $\One$–place ring, and pick some $\TheCurvePoint \in \ThePrimeSet[\One]$. We let $\TheTree$ be the Bruhat–Tits tree for $\SlOf[\Two]{\TheFieldOf{\TheCurve}}$ with the valuation $\TheValuation[\TheCurvePoint]$. We regard $\TheTree$ as a metric space by assigning unit length to all edges. Denote the geodesic in $\TheTree$ corresponding to the diagonal subgroup of $\SlOf[\Two]{\TheFieldOf{\TheCurve}}$ by $\TheLine$, and parameterize $\TheLine$ by an isometry $\TheIsometry \mapcolon \TheReals \rightarrow \TheLine$ such that the end of $\TheLine$ corresponding to the positive reals is fixed by upper-triangular matrices. It follows from the definition of a $\One$–place ring that there exists an element $\TheFunction \in \TheRing$ such that $\TheValuationOf[\TheCurvePoint]{\TheFunction} < \Zero$.
We use this element to define for each $\MatrixIndex\in{\mathbb{N}}$ a matrix
$$\UnipMatrix[\MatrixIndex] := \begin{pmatrix} \One & \TheFunction[][\MatrixIndex]\\ \Zero & \One \end{pmatrix}.$$
Note that for sufficiently large $\MatrixIndex$, there is an $\TheReal[\MatrixIndex]>\Zero$ such that
$$\UnipMatrix[\MatrixIndex] {\cdot}\TheIsometryOf{ \ClosedInterval{\Zero}{\TheReal[\MatrixIndex]} } \intersect \TheIsometryOf{ \ClosedInterval{\Zero}{\TheReal[\MatrixIndex]} } = \SetOf{ \TheIsometryOf{\TheReal[\MatrixIndex]} }.$$
Note also that $\TheReal[\MatrixIndex] = -\MatrixIndex\TheValuationOf[\TheCurvePoint]{\TheFunction} + \TheShiftConstant$ for some $\TheShiftConstant\in\TheReals$.

We claim that for any $\TheRadius>\Zero$, the $\TheRadius$–metric neighborhood of the orbit $\SlOf[\Two]{\TheRing} {\cdot}\TheIsometryOf{\Zero} \subseteq \TheTree$ is not connected. Indeed, for large $\MatrixIndex$, the unique path between $\TheIsometryOf{\Zero}$ and $\UnipMatrix[\MatrixIndex]{\cdot}\TheIsometryOf{\Zero}$ contains $\TheIsometryOf{\TheReal[\MatrixIndex]}$, thus it suffices to show that $\SlOf[\Two]{\TheRing} {\cdot}\TheIsometryOf{\TheReal[\MatrixIndex]}$ is an unbounded sequence in the quotient space ${\TheTree}/{\SlOf[\Two]{\TheRing}}$.

Observe that for each $\MatrixIndex\in{\mathbb{N}}$, the diagonal matrix
$$\DiagMatrix[\MatrixIndex] := \begin{pmatrix} \TheFunction[][\MatrixIndex] & \Zero\\ \Zero & \TheFunction[][-\MatrixIndex] \end{pmatrix}$$
acts by translations on $\TheLine$ and that $\DiagMatrix[\MatrixIndex] {\cdot}\TheIsometryOf{\Zero} = \TheIsometryOf{ -\Two\MatrixIndex\TheValuationOf[\TheCurvePoint]{\TheFunction} }$. Thus, to prove our claim it suffices to show that $\SlOf[\Two]{\TheRing} \DiagMatrix[\MatrixIndex]{\cdot}\TheIsometryOf{\Zero}$ is an unbounded sequence in ${\TheTree/\SlOf[\Two]{\TheRing}}$. Since point stabilizers in $\SlOf[\Two]{\TheFieldOf{\TheCurve}}$ are bounded, we can further reformulate our task as showing that the sequence $\SlOf[\Two]{\TheRing}\DiagMatrix[\MatrixIndex]$ is unbounded in ${\SlOf[\Two]{\TheFieldOf{\TheCurve}}/\SlOf[\Two]{\TheRing}}$.
$ For this, we will employ a proof by contradiction: assuming that $ \SlOf[\Two]{\TheRing}\DiagMatrix[\MatrixIndex] $ is bounded, there exist matrices $$\TheMatrix[\MatrixExponent] = \begin{pmatrix} \OneOne[\MatrixExponent] & \OneTwo[\MatrixExponent]\\ \TwoOne[\MatrixExponent] & \TwoTwo[\MatrixExponent] \end{pmatrix} \in \SlOf[\Two]{\TheRing}$$ such that the images of the matrix entries of $ \TheMatrix[\MatrixExponent]\DiagMatrix[\MatrixExponent] $ under the valuation $\TheValuation[\TheCurvePoint]$ are bounded from below by a constant $\TheValuationBound$. In particular, $$\TheValuationBound \leq \TheValuationOf[\TheCurvePoint]{ \TheFunction[][\MatrixExponent] \OneOne[\MatrixExponent] } = \MatrixExponent\TheValuationOf[\TheCurvePoint]{\TheFunction} +\TheValuationOf[\TheCurvePoint]{\OneOne[\MatrixExponent]}.$$ Since $\TheValuationOf[\TheCurvePoint]{\TheFunction}<\Zero$, it follows that $ \TheValuationOf[\TheCurvePoint]{\OneOne[\MatrixExponent]} > \Zero $ for all but finitely many $\MatrixExponent$. Combining conditions  and  of the definition of a $\One$–place ring, $ \TheValuationOf[\AltCurvePoint]{\OneOne[\MatrixExponent]} \geq \Zero $ for all $ \AltCurvePoint\in \TheCurve. $ Therefore, $\OneOne[\MatrixExponent]$ is a constant function on $\TheCurve$. As $ \TheValuationOf[\TheCurvePoint]{\OneOne[\MatrixExponent]} > \Zero, $ we conclude that $\OneOne[\MatrixExponent]=\Zero$. Similarly, $\TwoOne[\MatrixExponent]=\Zero$ for sufficiently large $\MatrixExponent$, which contradicts the invertibility of $\TheMatrix[\MatrixExponent]$. We have completed our proof of the claim that for any $\TheRadius>\Zero$, the $\TheRadius$–metric neighborhood of the orbit $ \SlOf[\Two]{\TheRing} {\cdot}\TheIsometryOf{\Zero} \subseteq \TheTree $ is not connected. now follows from an application of the following lemma.

Suppose a finitely generated group $\AbstractGroup$ acts on a geodesic metric space $\TheSpace$. Then, for any point $\TheSpacePoint \in \TheSpace$, there is a number $\TheRadius>\Zero$ such that the metric $\TheRadius$–neighborhood of the orbit $\AbstractGroup{\cdot}\TheSpacePoint \subseteq \TheSpace$ is connected.

Let $ \SetOf{ \TheGenerator[\One],\TheGenerator[\Two],\ldots, \TheGenerator[\TheLastIndex] } $ be a finite generating set for $\AbstractGroup$. Choose $\TheRadius$ such that the ball $ \TheBallOf[\TheRadius]{\TheSpacePoint} $ contains all translates $ \TheGenerator[\TheIndex]{\cdot}\TheSpacePoint. $ Then $ \AbstractGroup{\cdot}\TheBallOf[\TheRadius]{\TheSpacePoint} = \NbhdOf[\TheRadius]{ \AbstractGroup{\cdot}\TheSpacePoint } $ is connected.

**The question of** After modest adjustments, the proofs in apply to $\SlOf[\Two]{\TheRing}$ for many other $\Two$–place rings $\TheRing$. Thus, the only obstruction to substituting one of these groups for $\SlOf[\Two]{\IntLaurent}$ in the proof of is proving results about finiteness properties of stabilizers as in .
Certainly there are more $\Two$–place rings that produce stabilizers of type [FP${}_{\Two}$]{} than the rings $ {\IntRing}[\TheVariable,\TheVariable[][-\One]] $ where $\IntRing$ is a ring of integers in an algebraic number field, but this is not the case for all $\Two$–place rings. For instance, this is clearly not the case for any uncountable ring $\TheRing$. For a countable example, consider $ {\TheIntegers}[\AltVariable,\TheVariable,\TheVariable[][-\One]] $ as the $\Two$–place ring contained in $ \overline{\TheComplexNumbersOf{\AltVariable}}({\mathbb{P}}^{\One}) \cong \overline{\TheComplexNumbersOf{\AltVariable}}(\TheVariable) $ where $ \overline{\TheComplexNumbersOf{\AltVariable}} $ is the algebraic closure of the field $ \TheComplexNumbersOf{\AltVariable} $ (we take $ \ThePrimeSet[\One] := \SetOf{\Zero} $ and $ \ThePrimeSet[\Two] := \SetOf{\infty} $). Then the stabilizer in $ \SlOf[\Two]{ {\TheIntegers}[\AltVariable,\TheVariable,\TheVariable[][-\One]]} $ of the “standard vertex” $ \AffVertex[\Zero] $ in the product of Bruhat–Tits trees corresponding to valuations at $\Zero$ and $\infty$ is equal to $ \SlOf[\Two]{ \TheIntegersAd{\AltVariable} } $ and thus is not finitely generated by since $ \TheIntegersAd{\AltVariable} $ is a $\One$–place ring. **The question of higher finiteness properties** Note that the results of can easily be extended to the groups $ \SlOf[\TheMatrixSize]{\IntPoly} $ and $ \SlOf[\TheMatrixSize]{\IntLaurent}. $ Thus, the complication in extending our proof of to these groups lies in generalizing the material of . Of course, for the general $\TheNumberOfPlaces$–place ring $\TheRing$ and for $\TheMatrixSize>\Two$, most of the details of this paper cannot be easily extended to $ \SlOf[\TheMatrixSize]{\TheRing}. $
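Returning for a moment to Proposition \[prop:finite\_generation\], a concrete illustration of the $\One$–place phenomenon (added here purely for orientation; it is not part of the original argument) is obtained by taking $\TheField = \overline{\mathbb{F}}_q$, $\TheCurve = {\mathbb{P}}^{\One}$, and $\TheRing = \mathbb{F}_q[t]$, a $\One$–place ring whose single distinguished place is the point at infinity with $v_\infty(f) = -\deg f$. The element $\TheFunction = t$ then satisfies $v_\infty(t) = -1 < \Zero$, and the matrices of the proof specialize to $$\UnipMatrix[\MatrixIndex] = \begin{pmatrix} 1 & t^{\MatrixIndex} \\ 0 & 1 \end{pmatrix}, \qquad \DiagMatrix[\MatrixIndex] = \begin{pmatrix} t^{\MatrixIndex} & 0 \\ 0 & t^{-\MatrixIndex} \end{pmatrix}, \qquad \DiagMatrix[\MatrixIndex]{\cdot}\TheIsometryOf{\Zero} = \TheIsometryOf{\Two\MatrixIndex},$$ so no finite-radius metric neighborhood of the orbit $\SlOf[\Two]{\mathbb{F}_q[t]}{\cdot}\TheIsometryOf{\Zero}$ in the Bruhat–Tits tree is connected, recovering the classical fact that $\SlOf[\Two]{\mathbb{F}_q[t]}$ is not finitely generated.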
--- author: - | [^1]\ Physik-Institut der Universität Zürich, Winterthurerstr. 190, 8057 Zürich, Switzerland E-mail: title: 'Heavy Quark Production in Deep-Inelastic Scattering' --- Introduction ============ Deep inelastic scattering (DIS) at HERA offers unique opportunities to test and refine our understanding of heavy quark production in terms of perturbative QCD. The dominant mechanism here is boson-gluon fusion (BGF): a photon coupling to the scattered positron interacts with a gluon from the proton to form a quark-antiquark pair. A quantitative description of this process requires the gluon momentum distribution in the proton, a partonic matrix element and a fragmentation function. The gluon density is known to an accuracy of a few percent from the analyses of scaling violations of the proton structure function $F_2$ measured at HERA [@f2h1zeus]. The masses of the charm and, even more so, of the beauty quark ensure that a hard scale is present that renders QCD perturbation theory to be applicable to the calculation of the hard subprocess. Fragmentation functions, which account for the long-range effects binding the heavy quarks in observable hadrons, are extracted from ${e^+e^-}$ annihilation data, where the kinematics of the hard process is well determined; results with high precision appeared recently [@alephsldbfrag]. Compared with the clean ${e^+e^-}$ case, the complication in $ep$ collisions lies in the strongly interacting initial state. However, relative to other production environments like hadron-hadron collisions or two-photon interactions, uncertainties related to hadronic structure are reduced to a minimum. QCD calculations have been performed up to fixed order $\alpha_s^2$ in the so-called massive scheme, where only gluons and light quarks are active partons in the initial state. They are available in the form of Monte Carlo integration programs [@hvqdis], which, by using Peterson fragmentation functions [@peterson], provide differential hadronic cross sections. Due to the higher quark mass, the QCD predictions are expected to be more reliable for beauty than for charm. However, we note that the NLO corrections to the predicted DIS cross section are around 40% of the LO result in both cases. At very high momentum transfers, a treatment in terms of heavy quark densities in the proton may be more adequate; but differences between these schemes are not yet significant in the range covered by HERA so far [@smithvfns]. [\[fig:f2scalv\] Charm contribution to the proton structure function, compared with NLO QCD.]{} Charm ===== Most of the HERA results on charm make use of the “golden” decay channel $D^{\ast +} {\rightarrow}D^0\pi^+ $ followed by $D^0{\rightarrow}K^-\pi ^+$; ZEUS also uses semileptonic decays. The contribution of charmed final states to DIS is quantified as the ratio $F_2^c/F_2$, where $F_2^c$ is defined in an analogous way to the proton structure function $F_2$ by $$\frac{d^2\sigma(ep{\rightarrow}cX)}{dx\, dQ^2} = \frac{2\pi\alpha^2}{xQ^4} (1+(1-y)^2)\cdot F_2^c (x,Q^2)\; ;$$ $x$, $y$, and $Q^2$ are the standard DIS scaling variables. Measurements of $F_2^c$ by both experiments [@zeusf2cdl; @h1f2c] and by using different channels yield consistent results. The charm contribution $F_2^c/F_2$ is found to be about 20 to 30 % in most of the kinematic region at HERA. 
It is large where gluon-induced reactions dominate, and decreases only as $Q^2$ becomes smaller than $\sim$10 GeV$^2$, or at higher $x$ values ${\stackrel{>}{_{\sim}}}0.01$, where the quark content in the proton takes over. Figure \[fig:f2scalv\] displays the measured values of $F_2^c$. The $Q^2$ dependence, for fixed values of $x$, is steeper than for the inclusive structure function. The NLO QCD calculation [@hvqdis], with a gluon distribution extracted from H1 $F_2$ data, agrees well with the data, which demonstrates the overall consistency of the boson-gluon-fusion picture. At low $x$, the data tend to be somewhat higher and to vary more strongly with $Q^2$ than the prediction. The available statistics makes more detailed investigations possible. It was observed earlier that the NLO QCD calculations with Peterson fragmentation (HVQDIS) do not reproduce the rapidity ($\eta$) distribution of the produced charm meson well in the forward region (the outgoing proton direction) [@zeusf2cdl; @h1glue]. The double differential $D^{\ast}$ cross section [@h1f2c] displayed in Figure \[fig:dstddiff\] reveals that in the H1 data this is predominantly a feature of the low $p_T(D^{\ast})$ region. The measurement is also compared with the CASCADE Monte Carlo program [@cascade], based on the CCFM evolution equation [@ccfm], which resums higher order contributions at low $x$. Using an unintegrated gluon distribution extracted from inclusive H1 data, it reproduces the data in the low $p_T$ region well, but overshoots at higher $p_T$. Such shape differences imply that the extrapolated result for $F_2^c$ is model-dependent, but also that the $F_2^c$ prediction depends on the evolution scheme. H1 has performed a consistent extraction and comparison in the CCFM scheme and found somewhat better agreement in the low $x$ region than in the standard Altarelli-Parisi scheme. One possibility to include higher order processes in the modeling of heavy quark production is to use the concept of photon structure also at non-zero virtuality. One can classify events as “direct” or “resolved” according to the measured value of $x_{\gamma}^{OBS}=(E-p_z)_{2\; jets} / (E-p_z)_{all\; hadrons}$ in dijet events. Keeping the LO photoproduction language, this corresponds to the momentum fraction of the incoming parton in the photon, but more generally, $x_{\gamma}^{OBS}$ is sensitive to any kind of non-collinear radiation in the event. The ratio of resolved [*vs.*]{} direct cross sections has been determined in this approach by ZEUS [@zeusvirgam] and is displayed as a function of virtuality in Figure \[fig:virgam\]. In contrast to the situation for light quarks, the ratio in the DIS regime is very similar to that at $Q^2\approx 0$, as expected from e.g. the virtual photon structure function set SaS1D [@sas] implemented in the HERWIG program [@herwig]. The CASCADE model, with gluon emissions ordered in angle rather than in $k_T$, effectively incorporates the perturbative, anomalous part of the photon structure and reproduces the data well at all $Q^2$; the AROMA Monte Carlo program [@aroma], which does not include such contributions, does not. Charm production has also been measured separately in $e^-p$ and $e^+p$ DIS by ZEUS [@zeuseminus]. The data are shown in Figure \[fig:eminus\] as a function of $Q^2$. The $e^-p$ and $e^+p$ results are only barely consistent with each other; for $Q^2>20$ GeV$^2$, the discrepancy amounts to 3 standard deviations.
However, both measurements are compatible with the theoretical expectation, in which no mechanism exists to generate an asymmetry with respect to the lepton beam charge at such low four-momentum transfers. In summary, the BGF concept at NLO works well for charm in DIS, up to high $Q^2$. The HERA data reach the precision to identify regions (e.g. at low $x$) where refinements are becoming necessary.

Beauty
======

Beauty production at HERA is suppressed by two orders of magnitude with respect to charm. All HERA measurements of $b$ production so far rely on inclusive semi-leptonic decays, using identified muons or electrons in dijet events. Two observables have been used to discriminate the $b$ signal from background sources. The high mass of the $b$ quark gives rise to large transverse momenta $p_T^{rel}$ of the lepton relative to the direction of the associated jet. Using this method, both collaborations have published photoproduction cross section measurements [@h1openb; @zeusopenb], which are higher than NLO QCD expectations. More recently, with the precision offered by the H1 vertex detector [@cst], it has become possible to observe tracks from secondary $b$ vertices and to exploit the long lifetime as a $b$ tag, using e.g. the impact parameter $\delta$. This improves the photoproduction result [@h1bosaka] and provides a first measurement in DIS [@h1bbudapest], where resolved contributions involving the non-perturbative hadronic structure of the photon are expected to be suppressed [@grs]. The DIS case is therefore complementary and theoretically simpler. The sensitivity to determine the beauty component is maximized by combining both variables in a likelihood fit to the two-dimensional distribution in $\delta$ and $p_T^{rel}$. The consistency of the two observables has been established with the larger statistics available in the photoproduction regime. The $\delta$ distribution for muons in dijet DIS events, selected from a dataset corresponding to 10.5 [pb$^{-1}$]{}, is shown in Figure \[fig:disdelta\] together with the decomposition from the two-dimensional fit, which yields a ${b\bar{b}}$ fraction of $f_b = (43\pm 8)\,\%$. A DIS cross section of $ \sigma_{ep{\rightarrow}{b\bar{b}}X{\rightarrow}\mu X}^{vis} = \;39\;\pm\;8\; \pm 10\;{\rm pb}\ $ is extracted in the kinematic range given by $2<Q^2<100$ GeV$^2$, $0.05<y<0.7\,$, $p_T(\mu)>2$ GeV and $35^\circ<\theta(\mu)<130^\circ$. This can be directly compared to NLO QCD calculations implemented in the HVQDIS [@hvqdis] program, after folding the predicted $b$ hadron distributions with a decay lepton spectrum. The result, $11\pm 2$ pb, is much lower than the H1 measurements. The data have also been compared with the CASCADE Monte Carlo simulation; the result of 15 pb also falls considerably below the measurements. We summarize the HERA $b$ results [@h1openb; @zeusopenb; @h1bosaka; @h1bbudapest] as a function of $Q^2$ in Figure \[fig:herab\], where the ratio of the measured cross sections to theoretical expectations based on the NLO QCD calculations [@hvqdis; @fmnr] is displayed. It is consistent with being independent of $Q^2$. The discrepancy between data and theory is similar to the situation observed in ${\bar{p}p}$ and, more recently, ${\gamma\gamma}$ interactions [@andreev-gutierrez]. The first measurement in DIS indicates that in $ep$ collisions this is not a feature of hadron-hadron-like scattering alone. [99]{} C. Adloff [*et al.*]{} \[H1 Collaboration\], Eur. Phys. J. C [**21**]{} (2001) 33 \[hep-ex/0012053\]; J.
Breitweg [*et al.*]{} \[ZEUS Collaboration\], Eur. Phys. J. C [**7**]{} (1999) 609 \[hep-ex/9809005\], contrib. paper no. 628 to this conference. A. Heister [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett. B [**512**]{} (2001) 30 \[hep-ex/0106051\];\ K. Abe [*et al.*]{} \[SLD Collaboration\], Phys. Rev. Lett.  [**84**]{} (2000) 4300 \[hep-ex/9912058\]. B. W. Harris and J. Smith, Phys. Rev. D [**57**]{} (1998) 2806 \[hep-ph/9706334\]. C. Peterson, D. Schlatter, I. Schmitt and P. Zerwas, Phys. Rev. D [**27**]{} (1983) 105. A. Chuvakin, J. Smith and B. W. Harris, Eur. Phys. J. C [**18**]{} (2001) 547 \[hep-ph/0010350\]. J. Breitweg [*et al.*]{} \[ZEUS Collaboration\], Eur. Phys. J. C [**12**]{} (2000) 35 \[hep-ex/9908012\];\ contrib. paper no. 853 to ICHEP 2000, Osaka, Japan, 2000. C. Adloff [*et al.*]{} \[H1 Collaboration\], hep-ex/0108039. C. Adloff [*et al.*]{} \[H1 Collaboration\], Nucl. Phys. B [**545**]{} (1999) 21 \[hep-ex/9812023\]. H. Jung and G. P. Salam, Eur. Phys. J. C [**19**]{} (2001) 351 \[hep-ph/0012143\]. M. Ciafaloni, Nucl. Phys. B [**296**]{} (1988) 49; S. Catani, F. Fiorani and G. Marchesini, Nucl. Phys. B [**336**]{} (1990) 18; Phys. Lett. B [**234**]{} (1990) 339. S. Chekanov [*et al.*]{} \[ZEUS Collaboration\], contrib. paper no. 495 to this conference. G. A. Schuler and T. Sjostrand, Nucl. Phys. B [**407**]{} (1993) 539. G. Marchesini, B. R. Webber, G. Abbiendi, I. G. Knowles, M. H. Seymour and L. Stanco, Comput. Phys. Commun.  [**67**]{} (1992) 465. G. Ingelman, J. Rathsman and G. A. Schuler, Comput. Phys. Commun.  [**101**]{} (1997) 135 \[hep-ph/9605285\]. S. Chekanov [*et al.*]{} \[ZEUS Collaboration\], contrib. paper no. 493 to this conference. C. Adloff [*et al.*]{} \[H1 Collaboration\], Phys. Lett. B [**467**]{} (1999) 156 \[hep-ex/9909029\], Erratum ibid. B [**518**]{} (2001) 331. J. Breitweg [*et al.*]{} \[ZEUS Collaboration\], Eur. Phys. J. C [**18**]{} (2001) 625 \[hep-ex/0011081\]. D. Pitzl [*et al.*]{}, Nucl. Instrum. Meth. A [**454**]{} (2000) 334 \[hep-ex/0002044\]. F. Sefkow, hep-ex/0011034. T. Sloan, hep-ex/0105064. M. Gluck, E. Reya and M. Stratmann, Phys. Rev. D [**54**]{} (1996) 5515 \[hep-ph/9605297\]. S. Frixione, M. L. Mangano, P. Nason and G. Ridolfi, Nucl. Phys. B [**412**]{} (1994) 225 \[hep-ph/9306337\]; Phys. Lett. B [**348**]{} (1995) 633 \[hep-ph/9412348\]. V. Andreev, these proceedings;\ P. Gutierrez, these proceedings. [^1]: On behalf of the H1 and ZEUS Collaborations
--- abstract: 'Potentials of mean force (PMFs)—free energies along a selected set of collective variables—are ubiquitous in molecular simulation, and of significant value in understanding and engineering molecular behaviors. PMFs are most commonly estimated using variants of histogramming techniques, but such approaches obscure two important facets of these functions. First, the empirical observations along the collective variable are defined by an ensemble of discrete observations, and the coarsening of these observations into histogram bins incurs unnecessary loss of information. Second, the potential of mean force is itself almost always a continuous function, and its representation by a histogram introduces inherent approximations due to the discretization. In this study, we relate the observed discrete observations to the inferred underlying continuous probability distribution over the collective variables and derive histogram-free techniques for estimating the potential of mean force. We reformulate PMF estimation as minimization of a Kullback-Leibler divergence between a continuous trial function and the discrete empirical distribution and show this is equivalent to likelihood maximization of a trial function given a set of sampled data. We then present a fully Bayesian treatment of this formalism, which enables the incorporation of powerful Bayesian tools such as the inclusion of regularizing priors, uncertainty quantification, and model selection techniques. We demonstrate this new formalism in the analysis of umbrella sampling simulations for the $\chi$ torsion of a valine sidechain in the L99A mutant of T4 lysozyme with benzene bound in the cavity.' author: - 'Michael R. Shirts' - 'Andrew L. Ferguson' bibliography: - 'zotero.bib' title: Statistically optimal continuous potentials of mean force from umbrella sampling and multistate reweighting ---

Introduction
============

Potentials of mean force (PMFs)—also known as free energies along, or as a function of, a selected set of collective variables—are important quantities that are ubiquitous in molecular simulation studies. Applications of potentials of mean force include determining the kinetics of a reaction using the free energy along the reaction path [@Chandler:JCP:1978; @Northrup:P:1982; @Schenter:JCP:2003], understanding the behavior of collective interactions such as hydrophobicity  [@SanBiagio:EBJ:1998; @Sobolewski:JPCB:2007; @Makowski:JPCB:2010], elucidating transport mechanisms through molecular pores [@Hub:P:2008; @Hub:JACS:2010; @Allen:BC:2006; @Medovoy:B:2016; @Sigg:JGP:2014], and parameterizing low-dimensional (generalized) Langevin or Fokker-Planck equations as effective reduced models of the system dynamics [@Yang:JMB:2007; @Hummer:JCP:2003; @Kopelevich:JCP:2005; @Rzepiela:JCP:2014; @Chiavazzo:P:2014]. PMFs are typically estimated from unbiased or biased molecular simulation trajectories using a variant of histogramming techniques, most commonly a type of multiple histogram reweighting technique such as the weighted histogram analysis method (WHAM) [@Kumar:JCC:1992]. However, the process of histogramming in order to obtain the PMF obscures two important issues. First, the true distribution of observations along the desired collective variable or variables in the limit of infinite sampling is virtually never actually a histogram but rather a continuous function, so the process of histogramming inherently introduces unnecessary discretization errors.
Second, what we actually observe when we perform a simulation is neither a histogram nor a continuous function, but a discrete set of delta functions at the observed values of the collective variables. Approximating the “true” PMF attained in the limit of infinite sampling of the discrete observations as a histogram inherently entails a loss of information. What is required to resolve these problems are improved approaches to estimate a continuous PMF along collective variables directly from the discrete set of empirical observations collected in the simulations, without the unnecessary introduction of approximation and information loss that histogramming incurs. We are certainly not the first to observe the disadvantages of histogramming approaches. A number of recent studies have proposed histogram-free methodologies to estimate PMFs. Westerlund et al. [@Westerlund:JCTC:2018] have presented an approach that builds PMFs based on Gaussian mixture models, outperforming histogramming, k-nearest neighbors (kNN) and kernel density estimators (KDE). Schofield [@Schofield:JPCB:2017] presented an adaptive parameterization scheme for a variety of different possible continuous functions for PMFs. Lee and co-workers [@Lee:JCTC:2014; @Lee:JCTC:2013] presented a variational approach (variational free energy profile, or vFEP) based on maximizing the likelihood of the observations given trial continuous free energy surfaces. Stecher et al. [@Stecher:JCTC:2014] have discussed reconstructing free energy surfaces from umbrella sampling using Gaussian process regression that comes inherently equipped with uncertainty estimates. Schneider et al. [@Schneider:PRL:2017] discuss fitting higher-dimensional PMFs using artificial neural networks. The umbrella integration method of Kästner and Thiel [@Kastner:JCP:2005; @Kastner:JCP:2009; @Kastner:JCP:2012] constructs the PMF by numerical integration of a weighted average of the derivative of the free energy with respect to the order parameter. Meng and Roux presented a multivariate linear regression framework to link the biased probability densities of individual umbrella windows to yield a global free energy surface in the desired collective variables, though it uses histograms for some of the intermediate steps [@Meng:JCTC:2015]. The present work shares particular similarities with the vFEP approach of Lee and co-workers [@Lee:JCTC:2013; @Lee:JCTC:2014] and the adaptive parameterization approach of Schofield [@Schofield:JPCB:2017], but builds upon and goes beyond these works in two main aspects. First, as we detail in our mathematical development, we use the multistate Bennett acceptance ratio (MBAR) approach to furnish the provably minimum variance estimators of the free energy differences required to align independent biased sampling runs, and then use these values to compute the maximum likelihood estimate of the unbiased PMF. Second, we show how this approach can easily be placed in a fully Bayesian framework that enables transparent incorporation of Bayesian priors, Bayesian uncertainty quantification, and Bayesian model selection. The calculation of PMFs parameterized by a small number of collective variables is largely motivated by the “curse of dimensionality”. Molecular systems are intrinsically exceedingly high-dimensional (with numbers of degrees of freedom in the tens or hundreds of thousands), which makes study of the system properties in the full configuration space of limited use in understanding and controlling molecular behaviors.
Instead, system microstates are frequently projected into a handful of collective variables motivated by the physics of the problem at hand, and PMFs are then constructed over this reduced dimensional space for further analysis. There are a number of ways to estimate PMFs in these collective variables. One could in theory run a simulation and simply calculate the probability of visiting representative values of the collective variable. However, free energy barriers in collective variable space exceeding several $k_BT$ in height—where $k_B$ is Boltzmann’s constant and $T$ is temperature—are crossed with exponentially small probability in standard (unbiased) simulations, resulting in non-ergodic kinetic trapping and the inability to sample transition states and mechanisms. A number of methods have been proposed to overcome this problem, which typically involve introducing some form of bias or artificial smoothing of the underlying free energy landscape to enhance sampling of low probability (high free energy) regions and accelerate transitions between high probability (low free energy) metastable states. Perhaps the most popular and straightforward way to perform biased sampling and PMF estimation is to run an ensemble of $K$ independent simulations, each of which biases the collective variable using a—usually, but not necessarily, harmonic—biasing potential. Each biasing potential forces the simulation to spend the majority of its time visiting locations within specific ranges of the collective variables consistent with the biases. Assuming sampling orthogonal to the collective variables is sufficiently fast, good sampling of the thermally-relevant domain of the collective variable can be achieved by tiling collective variable space sufficiently densely with biasing potentials such that neighboring biased simulations sample overlapping configuration spaces. The unbiased PMF can then be determined using a range of mathematical approaches based on importance sampling [@Shirts:AP:2017; @Shirts:JCP:2008; @Kumar:JCC:1992; @Ferguson:JCC:2017]. Provided the collective variables employed are “good” in the sense that they adequately separate out the relevant metastable states, this methodology, which goes by the name umbrella sampling [@Torrie:JCP:1977], is a very straightforward and popular approach that works in as many dimensions as one can adequately cover the space with biasing potentials with sufficient configurational overlap. Assuming the biasing potential depends only on the difference between the collective variable and the restraint point, the unbiased PMF can be estimated by *post hoc* analysis of the collective variable at each frame of each biased simulation trajectory without requiring records of the total energies, forces, or any other information from the simulation [@Kumar:JCC:1992].
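To make the tiling idea concrete, the following minimal sketch (our own illustration with hypothetical parameter values, not code from this work or any particular package) sets up harmonic biasing potentials along a one-dimensional collective variable in reduced ($k_BT = 1$) units and performs a crude check that neighboring biased distributions can be expected to overlap.

```python
# Minimal sketch of tiling a 1D collective variable with harmonic umbrella biases.
# All numbers below are hypothetical and chosen only for illustration.
import numpy as np

kappa = 100.0                            # spring constant of each bias (k_B T per xi^2)
centers = np.linspace(-1.0, 1.0, 21)     # restraint points xi_{0,k} tiling the range of interest


def bias(k, xi):
    """Reduced harmonic biasing potential b_k(xi) for biased simulation k."""
    return 0.5 * kappa * (xi - centers[k]) ** 2


# Rough overlap check: ignoring the underlying PMF, each biased distribution has a
# width of order 1/sqrt(kappa), which should not be much smaller than the spacing
# between restraint points if neighboring simulations are to share configurations.
sigma = 1.0 / np.sqrt(kappa)
spacing = centers[1] - centers[0]
print(f"bias width ~ {sigma:.3f}, spacing = {spacing:.3f}")
```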
The focus of the paper is to present analysis methodology, and so we assume that the data collected from biased simulations are sufficient to provide robust estimates of the PMF using reasonable methods. As such, it is our goal to calculate the best estimate of the PMF given a set of sampled data from umbrella sampling simulations, where appropriate definitions of “best” are explored within this paper. Although we do not do so here, we observe that it is possible to use current best estimates of the PMF to adaptively direct additional rounds of sampling, thereby iteratively improving and refining the PMF. Such adaptive methods include metadynamics [@Laio:P:2002; @Huber:JCMD:1994; @Barducci:PRL:2008], adiabatic free energy dynamics [@Rosso:JCP:2002], temperature accelerated dynamics [@Sorensen:JCP:2000], temperature accelerated molecular dynamics [@Maragliano:CPL:2006] / driven adiabatic free energy dynamics [@Abrams:JPCB:2008], adaptive biasing force approaches [@Darve:JCP:2008], variationally enhanced sampling [@Valsson:PRL:2014], and conformational flooding [@Grubmuller:PRE:1995]. This class of methods has both significant advantages, such as optimally directing computational effort towards under-sampled regions of collective variable space and efficiently reducing uncertainties in the PMF, and significant additional challenges, such as under-sampling slow degrees of motion and the problems of analyzing simulations that are history-dependent and thus only asymptotically approach equilibrium sampling. For the purposes of this paper we will therefore consider only equilibrium sampling as the way to generate biased sampling trajectories for the purposes of PMF estimation. However, the approach we present is extensible to any collective variable biasing enhanced sampling technique that generates equilibrium samples, and is independent of the type or shape of the biasing potential, as long as the potential is not time-dependent. Importantly, we note that our approach is also applicable to data generated with temperature, restraint, or Hamiltonian exchange [@Sugita:JCP:2000; @Bergonzo:JCTC:2014; @Li:JCC:2014; @Dickson:JCC:2016; @Kastner:WCMS:2011], or expanded ensemble [@Fenwick:JCP:2003; @Chodera:JCP:2011]. The only requirement on the data is that samples are collected at equilibrium with respect to a time-independent (i.e., stationary) probability distribution, and the biased samples cover the range of interest of the collective variable.

Theory: PMF estimation from umbrella sampling data
==================================================

Consider $K$ umbrella sampling simulations with different biasing potentials tiling a collective variable space and enforcing good sampling of all thermally-relevant system configurations with desired values of the collective variable. Typically, the collective variable is 1–3 dimensional, but the formalism holds for arbitrary dimensionality provided the space can be sufficiently densely sampled and sufficient overlaps achieved between neighboring biased distributions. For clarity of exposition, in the present work we will assume the usual case that the biased simulation data are collected at a single temperature and this temperature is the one at which we wish to estimate the unbiased PMF.
However, the approach we outline here can be easily adapted to work with simulations performed at multiple temperatures [@Sugita:CPL:1999; @Hansmann:CPL:1997; @Ferguson:JCC:2017; @Chodera:JCP:2011] or Hamiltonians [@Fukunishi:JCP:2002; @Kwak:PRL:2005], or indeed performed without biasing potentials. The reduced potentials $u_{B,k}$ of these states—where we express energies in terms of reduced quantities, $u(\vec{x}) = (k_B T)^{-1} U(\vec{x})$—are written in terms of the original potential $u(\vec{x})$ as: $$u_{B,k}(\vec{x}) = u(\vec{x}) + b_k(\Phi(\vec{x})-\vec{\xi}_{0,k})$$ where the subscript $k$ indexes the biased simulation, the subscript $B$ reminds us that the potential is biased, and $b_k(\vec{\xi})$ is a user-defined biasing potential—most commonly a harmonic potential although other forms are perfectly acceptable—as a function of the collective variables $\vec{\xi}$ in which the umbrella sampling was performed. The value of the collective variables corresponding to a particular system configuration $\vec{x}$ is defined by a low-dimensional mapping $\Phi(\vec{x})=\vec{\xi}$, and the restraint point of the biasing potential in the collective variables is defined by $\vec{\xi}_{0,k}$. The biasing potentials are then chosen so that the set of all simulations with biasing potentials gives roughly equal sampling across the relevant range of $\vec{\xi}$ and neighboring biased simulations share overlap in configurational space. We note two features of our description of umbrella sampling that are germane to our subsequent mathematical developments. First, we do not use the term “windows” as is frequently done when discussing umbrella sampling, as this word possesses significant ambiguity. “Window” could refer either to a specific interval of values of the collective variable $\vec{\xi}$ or to one of the $K$ simulations run with biasing potential $b_k$. These two concepts are related in that simulations with a biasing potential generally sample values in a relatively restricted volume around $\vec{\xi}_{0,k}$, but they are certainly not the same thing. A biased simulation can, in principle, yield any value of $\vec{\xi}$ (although values far from any of the bias minima are highly unlikely), so the simulation results are not strictly within any finite “window” of $\vec{\xi}$ if run for long enough. Second, we do not make the problematic assumption that the free energy of biasing a particular simulation is equal to the value of the PMF at the restraint point $\vec{\xi}_{0,k}$ of the $k$th biasing potential. This approximation is often called the “stiff spring” approximation, as it assumes the collective variable sampling remains very close to the equilibrium position $\vec{\xi}_{0,k}$ of the bias. But the value of the free energy of biasing is a weighted average over all configurations visited by the biasing potential, and so this approximation deteriorates with increasingly weak biasing potentials. Because one has to include biasing potentials of finite width to sufficiently sample the entire volume of $\vec{\xi}$ of interest, there is always a tradeoff between the strength and number of biasing potentials used: fewer biasing potentials require weaker biases, and weaker biases result in less accurate approximations to the free energy at $\vec{\xi}_{0,k}$ under the “stiff spring” approximation.
An analysis of this approximation (in the non-equilibrium pulling case) can be found in [@hummer:P:2010a], but the approach presented in this work completely avoids this particular problem. We also note that the problem of approximating the PMF using the free energy of biasing is exacerbated by histogramming—as is done in WHAM—which introduces *additional* bias into the free energy calculation itself through binning of the energies as well as the free energies. Any sort of averaging of the PMF can be problematic because it tends to artificially lower barriers, which are frequently some of the most critical features of the PMF that we wish to accurately resolve. Given umbrella sampling data from biased simulations, we seek the statistically optimal estimate of the PMF over the collective variables $F(\vec{\xi})$. This distribution contains exactly the same information content as, and is essentially interchangeable with, the unbiased probability distribution $P(\vec{\xi})$. These two distributions are simply related through the logarithm: $$P(\vec{\xi}) = e^{-\beta F(\vec{\xi})} \label{eqn:logP}$$ and we will work with whichever of the pair is most natural for the discussion at hand. It is typically the case in molecular simulation that we work with relative, rather than absolute, free energies, in which case $F(\vec{\xi})$ is only defined up to an arbitrary additive constant. In this case our estimate of the unbiased probability distribution $P(\vec{\xi})$ is only defined up to an arbitrary multiplicative constant, but this can be set by enforcing normalization. When we perform a simulation, the observed, *empirical* probability distribution, given a set of samples $\{\vec{x}_n\}_{n=1}^N$ distributed over the space of our collective variables $\vec{\xi}$, is: $$P_E(\vec{\xi}|\{\vec{x}_n\}) = \sum_{n=1}^N W(\vec{x}_n)\delta(\Phi(\vec{x}_n)-\vec{\xi}) \label{eqn:PE}$$ where $W(\vec{x}_n)$ are weights associated with each sample. This is the most precise description of our sampled probability density that we can express after a simulation, because it only involves non-zero probability where we actually have measurements, and has zero probability at values of $\vec{\xi}$ that are not observed. If we only perform a single, unbiased simulation on a continuous space, then $W(\vec{x}_n) = 1/N$ for every sample, where $N$ is the number of samples, since—in continuous space with arbitrarily high resolution of system configurations and collective variable mapping—each observation occurs only once. However, as we describe in the next section, if we have $K$ biased simulations, we can incorporate data from all $\sum_{k=1}^K N_k = N$ points gathered over all of the $K$ states to better estimate $P_E(\vec{\xi})$ [@Shirts:JCP:2008].

MBAR and the empirical PMF
--------------------------

The multistate Bennett acceptance ratio (MBAR) is the statistically optimal approach to estimate the reduced free energies $f_k = -\ln \int e^{-u_k(\vec{x})} d\vec{x}$ from $\{\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N\}$ observations at $K$ thermodynamic state points [@Shirts:JCP:2008]. These $K$ thermodynamic states are defined by the reduced potentials $\{u_1,u_2,\ldots,u_K\}$, and we assume that the $\{\vec{x}_n\}_{n=1}^N$ are distributed according to the Boltzmann distribution corresponding to the reduced potential of the state they are collected from.
With these assumptions, the MBAR estimate for the reduced free energy differences between these $K$ states is [@Shirts:JCP:2008]: $$e^{-\hat{f}_i} = \sum_{n=1}^{N} \frac{e^{-u_i(\vec{x}_n)}}{\sum_{k=1}^K N_k \, e^{\hat{f}_k - u_k(\vec{x}_n)}} \label{equation:estimator-of-free-energies}$$ where $N_k$ is the number of samples taken from state $k$. This system of equations must be solved self-consistently for the estimated reduced free energies $\hat{f}_i$. Since the reduced free energies are typically only defined up to an additive constant, we usually pin exactly one of the estimated free energies $\hat{f}_i$ to an arbitrary constant value, and the rest follow as relative free energy differences. We note that MBAR may be considered a binless estimator of free energy differences that can be derived from WHAM in the limit of zero-width bins [@Shirts:JCP:2008; @Tan:JCP:2012; @Bartels:CPL:2000]. After we have solved for these $\hat{f}_i$, we can calculate the weight $W_i$ of sample $\vec{x}_n$ in any state $i$ as [@Shirts:JCP:2008; @Bartels:CPL:2000]: $$W_i(\vec{x}_n) = \frac{e^{\hat{f}_i-u_i(\vec{x}_n)}}{\sum_{k=1}^K N_k \, e^{\hat{f}_k - u_k(\vec{x}_n)}} \label{eq:MBARweight}$$ The weight $W_i(\vec{x}_n)$ of sample $\vec{x}_n$ at thermodynamic state point $i$ represents the contribution to the average of an observable $A$ in state $i$ under a reweighting from the *mixture distribution*, consisting of all samples collected from all $K$ state points, to the state $i$ [@Shirts:AP:2017]. The probability of each sample in the mixture distribution is $p(\vec{x}_n) = \sum_{k=1}^K \frac{N_k}{N}p_k(\vec{x}_n) = \sum_{k=1}^K \frac{N_k}{N} e^{\hat{f}_k-u_k(\vec{x}_n)}$—in other words, simply the average of all of the individual $p_k$ probability distributions weighted by the number of samples $N_k$ drawn from each [@Shirts:AP:2017]. It can be easily checked from eq. \[eq:MBARweight\] that the $W_k(\vec{x}_n)$ are normalized such that [@Shirts:JCP:2008]: $$\sum_{k=1}^K N_k W_k(\vec{x}_n)=1 \label{eq:normal}$$ and also from eq. \[equation:estimator-of-free-energies\] and eq. \[eq:MBARweight\] that [@Shirts:JCP:2008]: $$\sum_{n=1}^N W_i(\vec{x}_n)=1 \label{eq:normal2}$$ The expectation value of the observable $A$ estimated over all samples at all state points may then be written as: $$\langle A\rangle_i = \sum_{n=1}^{N} W_{i}(\vec{x}_n) A(\vec{x}_n) \label{eqn:obs}$$ as discussed in eqs. 9 and 15 of the original MBAR paper [@Shirts:JCP:2008]. We denote the weight of sample $\vec{x}_n$ as obtained via MBAR in the *unbiased* state as $W(\vec{x}_n)$, and in each of the $k = 1 \ldots K$ *biased* states as $W_k(\vec{x}_n)$. By eq. \[eqn:logP\], the exponential of minus the potential of mean force $F_i$ in state $i$ is a probability density. By combining eq. \[eqn:logP\] and eq. \[eqn:obs\] under the particular choice for the observable $A(\vec{x}_n) = \delta\left(\Phi(\vec{x}_n)-\vec{\xi}\right)$, we have within the MBAR framework that: $$e^{-F_i(\vec{\xi})} = \langle \delta\left(\Phi(\vec{x}_n)-\vec{\xi}\right) \rangle_i = \sum_{n=1}^N W_i(\vec{x}_n)\delta\left(\Phi(\vec{x}_n)-\vec{\xi}\right) \label{eqn:weightedSum}$$ where $\Phi(\vec{x})$ maps from the full coordinate space to the lower dimensional collective variable space of interest, and we have implicitly placed the PMF in reduced form so that it is a pure number. We will maintain this convention throughout the remainder of this paper.
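To make the preceding equations concrete, the sketch below (an illustration of our own, not the authors' implementation; the `pymbar` package provides a robust production implementation) iterates eq. \[equation:estimator-of-free-energies\] self-consistently and then evaluates the weights of eq. \[eq:MBARweight\] for an arbitrary target state. The array names `u_kn` (reduced potential of sample $n$ evaluated in state $k$) and `N_k` (number of samples drawn from state $k$) are our own conventions.

```python
# Minimal NumPy/SciPy sketch of the MBAR self-consistent equations and weights.
import numpy as np
from scipy.special import logsumexp


def mbar_solve(u_kn, N_k, tol=1e-10, max_iter=100000):
    """Self-consistently solve for the reduced free energies f_hat_k (f_hat_0 pinned to 0)."""
    N_k = np.asarray(N_k, dtype=float)
    K, N = u_kn.shape
    f = np.zeros(K)
    for _ in range(max_iter):
        # log of the mixture denominator  sum_k N_k exp(f_k - u_k(x_n)), one value per sample n
        log_denom = logsumexp(f[:, None] - u_kn, b=N_k[:, None], axis=0)
        f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]                 # free energies are defined up to an additive constant
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f


def mbar_weights(u_target_n, f, u_kn, N_k):
    """Weights W(x_n) of every sample reweighted to a target state with reduced potentials u_target_n."""
    N_k = np.asarray(N_k, dtype=float)
    log_denom = logsumexp(f[:, None] - u_kn, b=N_k[:, None], axis=0)
    f_target = -logsumexp(-u_target_n - log_denom)     # free energy of the target state
    return np.exp(f_target - u_target_n - log_denom)   # these weights sum to one over all samples


# Usage: W = mbar_weights(u_unbiased_n, mbar_solve(u_kn, N_k), u_kn, N_k); any expectation over
# the samples is then <A> = np.sum(W * A_n), in the spirit of eq. [eqn:obs].
```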
To change into real energy units we simply multiply through by $k_B T$ so that $F_{\mathrm{units}} = (k_B T) F$. We will use $F(\vec{\xi})$ to refer to the unbiased PMF and $F_k(\vec{\xi})$ to the biased free energy PMF obtained from each of the $k = 1 \ldots K$ biased states. Eq. \[eqn:weightedSum\] makes clear that the MBAR estimate of the probability density distribution is a weighted sum of delta functions at the observed points. (Technically, it’s a distribution, not a function, since it is a sum of delta functions, which are themselves distributions, but this formal distinction doesn’t affect any of the development in this paper.) It is instructive to compare this to the empirical distribution function when collecting samples from a single state where $W_i(\vec{x}_n) = 1/N$: $$e^{-F_i(\vec{\xi})} = \frac{1}{N}\sum_{n=1}^N \delta\left(\Phi(\vec{x}_n)-\vec{\xi}\right)$$ from which it can be seen that the empirical distribution $P_E(\vec{\xi}|\{\vec{x}_n\})$ generated using MBAR in eq. \[eqn:PE\] is a *weighted* empirical distribution function using data from all states. The representation of the empirical probability distribution function $P_E(\vec{\xi}|\{\vec{x}_n\})$ in terms of delta functions has both advantages and disadvantages. Estimating expectation values of observables that are a function of $\vec{\xi}$ becomes simply a weighted sum over all observations $$\langle A \rangle_i = \int A(\vec{\xi}) e^{-F_i(\vec{\xi})} d\vec{\xi} = \sum_{n=1}^N W_i(\vec{x}_n) A(\vec{x}_n). \label{eqn:expect}$$ However, it is very complicated to interpret or visualize this delta function representation. Neither can we work with this empirical representation in logarithmic form $F(\vec{\xi}) = - \ln e^{-F(\vec{\xi})}$ because the logarithm of a sum of delta functions isn’t defined, so only the exponential form has a well-defined mathematical meaning. The empirical cumulative distribution function is defined as $$\mathrm{CDF}(\vec{\xi}) = \int_{\vec{\xi}_{\mathrm{low}}}^{\vec{\xi}} e^{-F(\vec{\xi'})} d\vec{\xi'}$$ where $\vec{\xi}_{\mathrm{low}}$ is some arbitrarily defined lower bound for the integral over the collective variables, but this is only well-defined in one dimension. To reiterate, expectations of quantities of interest can be computed by eq. \[eqn:expect\] without recourse to $F_i(\vec{\xi})$ directly, but representing $F_i(\vec{\xi})$ as a continuous function is valuable for interpretation and understanding of the underlying molecular PMF. Developing statistically optimal representations of $F_i(\vec{\xi})$ that can be visualized and exploited to understand and engineer molecular behaviors is the key motivator of the remainder of this work.

Representations of $F(\vec{\xi})$ as a continuous function
----------------------------------------------------------

In most cases, to visualize either a $P(\vec{\xi})$ or $F(\vec{\xi})$, or to use them in some other type of mathematical modeling, we need to choose how to represent them as continuous functions. Additionally, in the infinite sampling limit for molecular systems, they generally *should* be continuous functions due to the inherent continuity of the distribution supported by non-pathological choices of $\vec{\xi}$. We now proceed to describe a number of possible choices for continuous representations of $F(\vec{\xi})$. Most of the mathematical machinery that we develop can, in principle, be deployed in arbitrarily high dimensionalities of $\vec{\xi}$, although the capacity to achieve sufficient sampling will always present an issue.
We note at appropriate junctures in the text any special considerations that may arise when generalizing to high-dimensional parameterizations.

**1. Represent the PMF at specific locations $\vec{\xi}_0$ as the free energy of imposing each of the biasing restraints centered at $\vec{\xi}_0$.** Assuming we have well-localized biasing potentials, the free energy difference between the biased simulation and the unbiased simulation can be estimated as the free energy to restrain the simulation by each of the biasing functions; this is the “stiff spring” approximation. As described above, this method entails significant drawbacks in overestimating valleys and underestimating peaks, and in a lack of resolution between umbrella centers. We do not pursue this further.

**2. Create a histogram out of the empirical distribution.** This was the default choice made in the `pymbar` package’s `computePMF` function, which has occasionally been erroneously called the “MBAR estimate of the PMF” in the literature. As we have shown, the use of MBAR is completely independent of the determination of the PMF, although it can be *used* in various algorithms to estimate the PMF. We can calculate the expectation of the binning function $I_i(\vec{\xi}_i,\delta,\vec{x}) = 1$ if $\Phi(\vec{x}) > (\vec{\xi}_i-\delta/2)$ and $\Phi(\vec{x}) < (\vec{\xi}_i+\delta/2)$ and $I_i(\vec{\xi}_i,\delta,\vec{x}) = 0$ otherwise, where the $\vec{\xi}_i$ are the centers of the histogram bins and with some abuse of notation $\delta$ denotes the multidimensional bin widths, which—for clarity of exposition—we select to be equal in all dimensions. The binning function is used to essentially assign a fractional count to each bin according to the value of $W(\vec{x}_n)$ for $\vec{x}_n$ within the bin. The potential of mean force with $J$ total indicator functions is the expectation: $$e^{-F(\vec{\xi})} = \sum_{i=1}^{J} I_i(\vec{\xi}_i,\delta,\vec{\xi}) \sum_{n=1}^N W(\vec{x}_n)I_i(\vec{\xi}_i,\delta,\vec{x}_n)$$ where, with a further slight abuse of notation, $I_i(\vec{\xi}_i,\delta,\vec{\xi})$ selects the bin containing the evaluation point $\vec{\xi}$, and the second sum, as discussed above, is over all $N$ samples collected from all biased simulations. Since we are calculating an expectation of a function, MBAR gives a straightforward estimate of the uncertainties, as outlined in the original MBAR paper [@Shirts:JCP:2008]. If the bin widths are chosen adaptively with the number of samples, the uncertainty becomes more complicated, since a different data set would have a different set of bin widths. If we wished, we could fit this histogram to a smooth function, using a least-squares fitting method, choosing the function to balance variance and bias. However, it is better to avoid any histogramming steps altogether due to the inherent, unnecessary, and often uncontrolled bias that they introduce. This is especially true with multidimensional histograms, where the curse of dimensionality causes the number of bins required, and thus the number of samples for equal resolution, to scale exponentially with dimensionality. When WHAM is employed to perform the PMF estimation [@Kumar:JCC:1992], the histograms used to compute the free energies are the same as the ones used to calculate the PMF, which has a tendency to smooth out the PMF [@Fajer:JCC:2009]. With MBAR, one can choose exactly how wide to make the histograms, since the histograms can be of any width that one chooses to best represent the underlying data, and are not constrained by the choice of separation in $\vec{\xi}$ between biasing functions $b_k(\vec{\xi})$ [@Shirts:JCP:2008].

**3.
Employ a kernel density approximation.** We can replace each delta function in the empirical PMF with a smooth kernel function centered at each sample and scaled by the sample’s weight. The most common choice is an isotropic Gaussian kernel $K(\vec{\xi}_i,\delta,\vec{\xi}) = (2\pi \delta^2)^{-1/2} e^{-\frac{(\vec{\xi}-\vec{\xi}_i)^2}{2\delta^2}}$, where $\delta$ now plays the role of the kernel bandwidth, but anisotropic Gaussians, “top hat,” and triangle functions are also frequently used. We observe that histogramming can be considered a form of kernel density estimation using indicator functions, with the kernel mass centered at the preassigned bin center rather than at the location of the sample. The bandwidth $\delta$ can be calculated in a number of ways, although the optimal choice is frequently not obvious [@Park::1992; @Cao:CSDA:1994; @Jones:JASA:1996; @Sheather:JRSSBM:1991]. For example, the maximum likelihood approach with the empirical distribution shrinks $\delta$ to zero, so other approaches must be used. The PMF in the kernel density approximation then becomes: $$e^{-F(\vec{\xi})} = \sum_{n=1}^N W(\vec{x}_n)K(\Phi(\vec{x}_n),\delta,\vec{\xi})$$ and $F$ is calculated by taking the negative logarithm of this sum, which is non-zero everywhere.

**4. Identify a parameterized continuous probability distribution that best represents the empirical distribution.** The fundamental difficulty with this approach is that there is no unambiguous “best” continuous distribution that stands independent of any other assumptions beyond those made so far. Specifically, the closest parameter-independent continuous functions to a set of $\delta$ functions, for any reasonable definition of close, are continuous functions that are essentially indistinguishable from the $\delta$ functions themselves. It is necessary, therefore, to instead impose some constraints upon the family of continuous functions that represent our understanding of the empirical distribution as a discrete finite-data sampling of what should be a smooth and continuous distribution in the limit of infinite samples. This is an extremely flexible and generic point-of-view which allows for a variety of ways to represent the function with minimal bias and which naturally admits Bayesian formulations. The examination of this fourth perspective is our focus for the remainder of the paper. We now proceed to present a number of possible “best” choices for the representation of this continuous function along with proposed quantitative definitions of “best”.

Kullback-Leibler divergence as a measure of distance
----------------------------------------------------

Before we start examining mathematical forms of the trial PMF, we need to decide how we will evaluate how “close” a (continuous) trial function $P_T(\vec{\xi}|\vec{\theta})$ of some arbitrary parameters $\vec{\theta}$ is to the empirical distribution $P_E(\vec{\xi}|\{\vec{x}_n\})$. For the purposes of the present mathematical development we will leave the form of $P_T(\vec{\xi}|\vec{\theta})$ abstract, but it can be useful to keep in mind that a number of parameterizations for the trial function are possible, including linear interpolants, cubic splines, or piecewise cubic Hermite interpolating polynomial (PCHIP) interpolations. For non-pathological continuous representations of $P_T(\vec{\xi}|\vec{\theta})$, the corresponding PMF is simply $F(\vec{\xi}|\vec{\theta}) = - \ln P_T(\vec{\xi}|\vec{\theta})$.
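As a concrete (and purely illustrative) example of one such parameterization—our own sketch with hypothetical knot placements, not the specific trial functions used later in this work—a cubic spline through knot values $\vec{\theta}$ can serve as the trial PMF $F_T(\vec{\xi}|\vec{\theta})$, with the normalized trial probability obtained by numerical quadrature of $e^{-F_T}$ over the collective-variable domain.

```python
# Sketch of a one-dimensional spline trial PMF F_T(xi | theta) and its normalized P_T.
import numpy as np
from scipy.interpolate import CubicSpline


def make_trial_pmf(knots, theta):
    """Return F_T(xi | theta) as a cubic spline passing through the knot values theta."""
    return CubicSpline(knots, theta)


def trial_probability(F_T, xi, xi_grid):
    """Normalized P_T(xi | theta) = exp(-F_T(xi)) / integral of exp(-F_T) over the domain."""
    dxi = xi_grid[1] - xi_grid[0]
    Z = np.sum(np.exp(-F_T(xi_grid))) * dxi      # simple rectangle-rule quadrature
    return np.exp(-F_T(xi)) / Z


# Hypothetical example: 10 knots spanning the sampled range of the collective variable.
knots = np.linspace(-1.0, 1.0, 10)
theta = np.zeros(10)                             # parameters to be optimized against the data
F_T = make_trial_pmf(knots, theta)
xi_grid = np.linspace(-1.0, 1.0, 501)
print(trial_probability(F_T, 0.0, xi_grid))
```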
One logical definition of “closeness” is the Kullback-Leibler (KL) divergence from the empirical distribution in the state of interest (the one without any biasing potential) to our trial distribution $P_T(\vec{\xi}|\vec{\theta})$, over the volume $\Gamma$ of collective variables. The Kullback-Leibler divergence from $Q$ to $P$, denoted $D_{\mathrm{KL}}(P||Q)$, can be interpreted as a measure of the information lost when $Q$ is used to approximate $P$, and is defined as: $$D_{\mathrm{KL}}(P||Q) = \int_{\Gamma} P(\vec{x}) \ln \frac{P(\vec{x})}{Q(\vec{x})} d\vec{x}$$ In later usage, we will generally omit the explicit reference to the volume $\Gamma$ over the collective variable space. We will develop several different formulations of the KL divergence that each consist of a weighted sum of the trial function evaluated at the sampled points and an integral over the entire PMF (or a sum of several such integrals). We present them here and then later report the results of numerical tests to demonstrate their performance.

**C.1. Unbiased state Kullback-Leibler divergence.** The KL divergence from $P_T(\vec{\xi}|\vec{\theta})$ to $P_E(\vec{\xi}|\{\vec{x}_n\})$ is: $$\begin{aligned} D_{\mathrm{KL}}(\vec{\theta}) &=& \int P_E(\vec{\xi}|\{\vec{x}_n\}) \ln \frac{P_E(\vec{\xi}|\{\vec{x}_n\})}{P_T(\vec{\xi}|\vec{\theta})} d\vec{\xi} \nonumber \\ &=& \int \left[P_E(\vec{\xi}|\{\vec{x}_n\}) \ln P_E(\vec{\xi}|\{\vec{x}_n\})\right. \nonumber \\ & & \left.-P_E(\vec{\xi}|\{\vec{x}_n\}) \ln P_T(\vec{\xi}|\vec{\theta})\right] d\vec{\xi} \end{aligned}$$ The first term in the integral is somewhat problematic, in that it has a factor of $\ln P_E(\vec{\xi}|\{\vec{x}_n\})$, which is not well-defined for delta functions. Even taking Gaussian approximations for the delta functions and allowing them to shrink to zero width fails to yield a well-defined value, since the entire integral $\int P_E(\vec{\xi}) \ln P_E(\vec{\xi})$ is unbounded in the positive direction as the width of the $\delta$ function goes to zero. Fortunately, whatever the value may be, it is independent of the parameters $\vec{\theta}$. Accordingly, we may neglect the first term in our minimization with respect to $\vec{\theta}$ and focus only on minimization of the second term. For the purposes of functional optimization we will—with some abuse of terminology—use $D_{\mathrm{KL}}(\vec{\theta})$ to stand for the second, $\vec{\theta}$-dependent term, with the dropping of the first parameter-independent term understood. Using eq.
\[eqn:logP\], the normalized trial probability distribution can be equivalently expressed in terms of a trial potential of mean force $F_T(\vec{\xi}|\vec{\theta})$: $$P_T(\vec{\xi}|\vec{\theta}) = \frac{e^{-F_T(\vec{\xi}|\vec{\theta})}}{\int_{\Gamma} e^{-F_T(\vec{\xi}'|\vec{\theta})} d\vec{\xi}'} \label{eqn:trialDist}$$ If we set $W(\vec{x}) = W_{\mathrm{unbiased}}(\vec{x})$ to be the weighting function for our unbiased reduced potential energy $u(\vec{x})$, and seek the trial potential of mean force in the unbiased state $F_T(\vec{\xi}|\vec{\theta}) = F(\vec{\xi}|\vec{\theta})$, the function to be minimized reduces to: $$\begin{aligned} D_{\mathrm{KL}}(\vec{\theta}) &=& \int -P_E(\vec{\xi}|\{\vec{x}_n\}) \ln P_T(\vec{\xi}|\vec{\theta}) d\vec{\xi} \nonumber \\ &=& \int P_E(\vec{\xi}|\{\vec{x}_n\}) F(\vec{\xi}|\vec{\theta}) d\vec{\xi} + \int P_E(\vec{\xi}) \ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}' d\vec{\xi} \nonumber \\ &=& \int P_E(\vec{\xi}|\{\vec{x}_n\}) F(\vec{\xi}|\vec{\theta}) d\vec{\xi} + \ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}' \nonumber \\ &=& \sum_{n=1}^N W(\vec{x}_n) F(\vec{\xi}_n|\vec{\theta}) + \ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}' \label{eq:kldiverge}\end{aligned}$$ Between the 2nd and 3rd steps we can integrate out the $P_E(\vec{\xi}|\{\vec{x}_n\})$ term because $P_E(\vec{\xi}|\{\vec{x}_n\})$ is normalized and the inner integral is independent of $\vec{\xi}$, and between the 3rd and 4th steps we employ eq. \[eqn:expect\], with $\vec{\xi}_n = \Phi(\vec{x}_n)$, to estimate the expectation value over the data. Minimization of eq. \[eq:kldiverge\] provides a prescription to adjust $\vec{\theta}$ to find the potential of mean force $F(\vec{\xi}|\vec{\theta})$ that is the negative logarithm of the distribution closest to the empirical delta function distribution calculated from MBAR. Before proceeding to do so, it is instructive to make three observations about eq. \[eq:kldiverge\]. First, we note that the biasing functions only appear through the weights $W(\vec{x}_n)$, which penalize points $\vec{x}_n$ with values of $\Phi(\vec{x}_n) = \vec{\xi}_n$ inconsistent with the given bias. The calculation of the PMF does *not* otherwise include the biasing functions. Second, the contribution $F(\vec{\theta}) = -\ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}'$ independent of the samples can be considered to penalize PMFs that are simply low everywhere. Third, low free energy regions of the PMF contribute more to the integral $F(\vec{\theta}) = -\ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}'$ than high free energy regions. Accordingly, we should expect better estimates at the low values of $F$ (high probability states), but may sacrifice accuracy at large values of $F$ (low probability states).

**C.2. Summed biased state Kullback-Leibler divergence.** We can measure closeness using the KL divergence in a slightly different way, and try to find a single function that minimizes the sum of KL divergences from the $K$ empirical distribution functions observed at each biased sample state to the trial function with the biased potential added. The motivation for this ansatz is that it will force the trial function close to the potential of mean force in all regions where the biased simulations have high density and therefore good sampling. When summing over the $K$ different biased simulations, we elect to weight each KL divergence in proportion to the number of samples $N_k$ from that state.
The motivation for this choice is that states with few samples should contribute less information than states with many. We will see that this assumption leads to particularly simple results. Under these choices we define the sample-weighted sum of Kullback-Leibler divergences and the function to be minimized as: $$\begin{aligned} \sum_{k=1}^{K} N_k D_{\mathrm{KL}}(\vec{\theta}) &=& \sum_{k=1}^K N_k \left(\sum_{n=1}^N W_k(\vec{x}_n) F_k(\vec{\xi}_n|\vec{\theta})\right. \nonumber \\ & & + \left. \ln \int e^{-F_{k}(\vec{\xi}'|\vec{\theta})} d\vec{\xi}'\right) \nonumber \\ &=& \sum_{k=1}^{K} N_k \left(\sum_{n=1}^N W_k(\vec{x}_n) \left(F(\vec{\xi}_n|\vec{\theta}) + b_k(\vec{\xi}_n)\right)\right. \nonumber \\ & & + \left. \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}'\right) \nonumber \\ &=& \sum_{n=1}^N \left(\sum_{k=1}^K N_k W_k(\vec{x}_n)\right) F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & + \sum_{k=1}^{K} N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \nonumber \\ &=& \sum_{n=1}^N F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & + \sum_{k=1}^{K} N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}'\label{eq:sumkldiverge}\end{aligned}$$ where $F_k(\vec{\xi})$ is the potential of mean force of the $k$th biased state, $F(\vec{\xi}_n)$ and $F_k(\vec{\xi}_n)$ are the values of $F$ and $F_k$ at $\Phi(\vec{x}_n) = \vec{\xi}_n$, $b_k(\vec{\xi}_n)$ is the value of the biasing potential associated with biased simulation $k$ at $\Phi(\vec{x}_n) = \vec{\xi}_n$, and $F_{k}(\vec{\xi}|\vec{\theta}) = F(\vec{\xi}|\vec{\theta}) + b_k(\vec{\xi})$. We note that in moving from the second to third line we dropped the term $\sum_{k=1}^{K} N_k \left(\sum_{n=1}^N W_k(\vec{x}_n) b_k(\vec{\xi}_n)\right)$ because it is independent of $\vec{\theta}$, and thus does not affect the minimization, and in moving from the third to fourth line we appeal to the normalization condition for $W_k(\vec{x}_n)$ in eq. \[eq:normal\]. The latter operation eliminates the weights from each individual state, leaving as the first term in our final expression an unweighted sum of the trial function evaluated at the empirical data points. The second term is a weighted sum of integrals over the trial function plus the biasing potentials, and contains significant contributions only where the biasing potential is low. Large biasing potentials result in small contributions, leaving the trial function essentially free to vary there. However, as long as the trial function has significant weight under at least one of the biasing functions, it will be constrained over that region of space. In our numerical tests discussed below, it appears that eq. \[eq:sumkldiverge\] gives additional accuracy in the densely sampled regions by sacrificing accuracy in the sparsely sampled regions, but provides superior global fits compared to those achieved by minimization of eq. \[eq:kldiverge\]. **C.3. Summed sampled biased state Kullback-Leibler divergence.** The final alternative we consider is to sum the KL divergences from the $K$ empirical distribution functions with the biasing potential added, as in the preceding section, but only using the $N_k$ actual samples from each biased state. In this case, each weight will be simply $1/N_k$, as each of the $N_k$ samples will be equally weighted. We will continue to weight each state by the number of samples $N_k$ collected from the state, as states with more samples contribute proportionally more information to the KL divergence.
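To make these objective functions concrete, the sketch below evaluates eq. \[eq:kldiverge\] and eq. \[eq:sumkldiverge\] by simple one-dimensional quadrature. It is a minimal illustration rather than the implementation distributed with `pymbar`: the names (`xi_n`, `W_n`, `N_k`, `bias_fns`, `F_of`, `xi_grid`) are hypothetical, and a piecewise-linear interpolation of knot values stands in for the spline representation discussed later.

```python
import numpy as np

def make_linear_pmf(knots):
    """Piecewise-linear trial PMF: theta holds the PMF values at the knot positions.
    (A simple stand-in for the B-spline representation used in the paper's example.)"""
    return lambda xi, theta: np.interp(xi, knots, theta)

def unbiased_kl_objective(theta, xi_n, W_n, xi_grid, F_of):
    """Eq. [eq:kldiverge]: sum_n W(x_n) F(xi_n|theta) + ln int exp(-F(xi'|theta)) dxi'."""
    weighted_term = np.dot(W_n, F_of(xi_n, theta))
    log_norm = np.log(np.trapz(np.exp(-F_of(xi_grid, theta)), xi_grid))
    return weighted_term + log_norm

def summed_biased_kl_objective(theta, xi_n, N_k, bias_fns, xi_grid, F_of):
    """Eq. [eq:sumkldiverge]: sum_n F(xi_n|theta) + sum_k N_k ln int exp(-F - b_k) dxi'."""
    total = np.sum(F_of(xi_n, theta))
    F_grid = F_of(xi_grid, theta)
    for n_samples, b_k in zip(N_k, bias_fns):
        total += n_samples * np.log(np.trapz(np.exp(-F_grid - b_k(xi_grid)), xi_grid))
    return total
```

Either function can be handed directly to a generic minimizer; the summed biased-state variant is roughly $K$ times more expensive because it requires one quadrature per biasing potential.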
Following a development similar to that which led to eq. \[eq:sumkldiverge\], and again dropping terms that do not depend on $\vec{\theta}$, yields the expression to be minimized: $$\begin{aligned} \sum_{k=1}^{K} N_k D_{\mathrm{KL}}(\vec{\theta}) &=& \sum_{k=1}^K N_k \left(\sum_{n=1}^{N_k}\frac{1}{N_k}F_k(\vec{\xi}_n|\vec{\theta})\right. \nonumber \\ & & + \left. \ln \int e^{-F_{k}(\vec{\xi}'|\vec{\theta})} d\vec{\xi}'\right) \nonumber \\ &=& \sum_{k=1}^{K} N_k \left(\sum_{n=1}^{N_k} \frac{1}{N_k} \left(F(\vec{\xi}_n|\vec{\theta}) + b_k(\vec{\xi}_n)\right)\right. \nonumber \\ & & + \left. \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}'\right) \nonumber \\ &=& \sum_{k=1}^{K} \sum_{n=1}^{N_k} F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & + \sum_{k=1}^K N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \nonumber \\ &=& \sum_{n=1}^{N} F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & + \sum_{k=1}^K N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \label{eq:weightedsimplesum}\end{aligned}$$ Somewhat surprisingly, this result is exactly the same as eq. \[eq:sumkldiverge\]. This emerges due to the normalization condition for $W_k(\vec{x}_n)$ defined by eq. \[eq:normal\]. Accordingly, whether we sum the contribution to the KL divergence of each sample over all states using the MBAR weights, or simply sum the contribution of each sample to its biased state, we will be minimizing the same function, provided we weight by the number of samples $N_k$ from each distribution. We could, in principle, also choose to sum over the $K$ KL divergences without weighting each biased distribution by $N_k$. Doing so and following the steps leading to eq. \[eq:weightedsimplesum\] yields the expression: $$\begin{aligned} \sum_{k=1}^{K} D_{\mathrm{KL}}(\vec{\theta}) &=& \sum_{k=1}^{K}\frac{1}{N_k}\sum_{n=1}^{N_k} F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & + \sum_{k=1}^K \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \label{eq:simplesum}\end{aligned}$$ which is both less mathematically elegant and less intuitively satisfying than eq. \[eq:weightedsimplesum\], since simulations conducted at a state point with small $N_k$ contribute as much as those with large $N_k$. Likewise, if we follow the logic of eq. \[eq:sumkldiverge\] but employ equal weightings, we end up with a similarly unsatisfying result: $$\begin{aligned} \sum_{k=1}^{K} D_{\mathrm{KL}}(\vec{\theta}) &=& \sum_{n=1}^{N} \left(\sum_{k=1}^K W_k(\vec{x}_n)\right) F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & + \sum_{k=1}^K \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \label{eq:simplesumkldiverge}\end{aligned}$$ which is not only more complicated than eq. \[eq:sumkldiverge\], but also differs (as numerical tests confirm) from eq. \[eq:simplesum\] unless all $N_k$ are equal, in which case $\sum_{k=1}^K W_k(\vec{x}_n)= K/N = 1/N_k$, and equality is restored. Due to these mathematically and intuitively unsatisfying features, we will not pursue eq. \[eq:simplesum\] and eq. \[eq:simplesumkldiverge\] further. Likelihood as a measure of distance {#subsec:likelihood} ----------------------------------- As an alternative to the Kullback-Leibler divergence, we can measure distances using likelihoods. Specifically, we can take our trial probability distribution $P_T(\vec{\xi}|\vec{\theta})$ and compute the *likelihood* of one of our $N$ observations by evaluating $P_T$ at that observation.
The observations taken together comprise our data $D$. Assuming the samples are independent and identically distributed (i.i.d.) observations, we can calculate the total likelihood as the product of the individual likelihoods. The trial probability distribution, as a function of $\vec{\theta}$, that maximizes this likelihood will be the one closest to the empirical distribution. In a similar manner to the KL divergence, we may construct this likelihood in a number of ways. We shall show that the two choices we propose contain the same information as the KL divergence expressions, but offer greater interpretability and amenability to a Bayesian treatment. **D.1. Product over unbiased state likelihoods.** Perhaps the simplest choice is to consider the joint likelihood of each weighted sample in the unbiased state. In this case, since we can consider each sample to be observed according to its weight $W(\vec{x}_n)N$ (the expected number of counts at $\vec{x}_n$ given the empirical distribution), the overall likelihood as a function of $\vec{\theta}$ is: $$\begin{aligned} \ell(\vec{\theta}|\{\vec{x}_n\}) = \prod_{n=1}^N P_T(\vec{\xi}_n|\vec{\theta})^{W(\vec{x}_n)N} \label{eq:like1}\end{aligned}$$ and the log likelihood is: $$\begin{aligned} \ln \ell(\vec{\theta}|\{\vec{x}_n\}) &=& \sum_{n=1}^N NW(\vec{x}_n) \ln P_T(\vec{\xi}_n|\vec{\theta}) \nonumber \\ &=& \sum_{n=1}^N NW(\vec{x}_n) \left(-F(\vec{\xi}_n|\vec{\theta}) - \ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}'\right) \nonumber \\ &=& -N\sum_{n=1}^N W(\vec{x}_n) F(\vec{\xi}_n|\vec{\theta}) - N\ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}' \nonumber \\\label{eq:likelihoodunbiased}\end{aligned}$$ In going from the second to the third line, we employ the normalization condition in eq. \[eq:normal2\]. As expected [@Eguchi:JMA:2006], we quickly verify that eq. \[eq:likelihoodunbiased\] is identical to eq. \[eq:kldiverge\] up to a factor of $(-N)$, so maximizing this log likelihood is the same as minimizing the unbiased state KL divergence. **D.2. Product over biased state likelihoods.** We could also calculate the overall likelihood as the product of the likelihoods of the individual samples in each of the biased simulations: $$\begin{aligned} \ell(\vec{\theta}|\{\vec{x}_n\}) &=& \prod_{k=1}^{K} \prod_{n=1}^{N_k} P_T(\vec{\xi}_n|k,\vec{\theta}) \label{eq:like2}\end{aligned}$$ where we have denoted the probability distribution resulting from the trial PMF plus the $k$th bias as $P_T(\vec{\xi}_n|k,\vec{\theta})$. The corresponding log likelihood is: $$\begin{aligned} \ln \ell(\vec{\theta}|\{\vec{x}_n\}) &=& \sum_{k=1}^K \sum_{n=1}^{N_k} \ln P_T(\vec{\xi}_n|k,\vec{\theta}) \nonumber \\ &=& \sum_{k=1}^K \sum_{n=1}^{N_k} \left(- F(\vec{\xi}_n|\vec{\theta}) - b_k(\vec{\xi}_n)\right. \nonumber\\ & & - \left.\ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}'\right) \nonumber \\ &=& \sum_{k=1}^K \left(\sum_{n=1}^{N_k} -F(\vec{\xi}_n|\vec{\theta})\right. \nonumber \\ & & \left.-N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}'\right) \nonumber \\ &=& -\sum_{n=1}^{N} F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & - \sum_{k=1}^K N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \label{eq:likelihoodbiased}\end{aligned}$$ where in going from the second to third line we drop the $b_k(\vec{\xi}_n)$ term as independent of $\vec{\theta}$ and therefore irrelevant to the maximization. Eq. \[eq:likelihoodbiased\] is identical to eq.
\[eq:sumkldiverge\] up to a minus sign, so maximizing the product of biased state likelihoods is equivalent to minimizing the summed biased KL divergence. **D.3. Weighted product over biased state likelihoods.** We could try to construct a likelihood that is consistent with the KL divergence in eq. \[eq:simplesum\] by weighting the contribution of each sample by the reciprocal of the number of samples in its state: $$\ell(\vec{\theta}|\{\vec{x}_n\}) = \prod_{k=1}^{K} \prod_{n=1}^{N_k} P_T(\vec{\xi}_n|k,\vec{\theta})^{\frac{1}{N_k}}, \label{eqn:likeWeight}$$ for which the corresponding log likelihood is: $$\begin{aligned} \ln \ell(\vec{\theta}|\{\vec{x}_n\}) &=& \sum_{k=1}^K \sum_{n=1}^{N_k} \frac{1}{N_k} \ln P_T(\vec{\xi}_n|k,\vec{\theta}) \nonumber \\ &=& \sum_{k=1}^K \frac{1}{N_k} \sum_{n=1}^{N_k} \left(- F(\vec{\xi}_n|\vec{\theta}) - b_k(\vec{\xi}_n)\right. \nonumber\\ & & - \left.\ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}'\right) \nonumber \\ &=& \sum_{k=1}^K \frac{1}{N_k} \sum_{n=1}^{N_k} -F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & - \sum_{k=1}^K \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \nonumber \\ &=& -\sum_{k=1}^K \frac{1}{N_k} \sum_{n=1}^{N_k} F(\vec{\xi}_n|\vec{\theta}) \nonumber \\ & & - \sum_{k=1}^K \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \label{eq:likelihooddirect}\end{aligned}$$ Eq. \[eq:likelihooddirect\] is identical to eq. \[eq:simplesum\] up to a minus sign, and so maximizing the former is equivalent to minimizing the latter. However, as discussed above, there appears to be no real theoretical or practical justification for weighting samples in the manner expressed in eq. \[eqn:likeWeight\], and for this reason we do not advocate the use of this formulation. Least squares as a measure of distance -------------------------------------- Finally, we could choose to adopt a functional form, and then perform a least squares fit to the empirical distribution or to the empirical PMF in order to define a distance between the distributions. Although seemingly quite a natural and straightforward approach, it does not give rise to easily interpretable or implementable expressions. Accordingly, we defer an analysis of the least squares approach to the Appendix and do not pursue this further. How does vFEP fit into this framework? -------------------------------------- We now examine the correspondence of our development with the variational free energy profile (vFEP) approach developed by Lee and co-workers [@Lee:JCTC:2013; @Lee:JCTC:2014]. We first note a potential ambiguity within vFEP regarding the definition of the term ‘window’, which, as described before, could refer to a biasing potential, the data collected from a simulation run with that biasing potential, or a region of collective variable space within which a biased simulation has high probability density; these are related, but not equivalent, concepts. In the present comparison with vFEP, we will assume “window” as used in the vFEP definition refers to a biasing potential plus the data collected during simulations with that biasing potential. Under this definition of “window”, samples in the window are not included or excluded based on the associated values of $\vec{\xi}$, only on the basis of the biased simulation from which they were collected.
Using the original vFEP notation, $Z^{a} = \int e^{-F_{i,a}(\theta,x)} dx$ is the partition function of biased simulation $a$ and $F_{i,a}(\theta,x) = F_i(\theta,x) + W_a(x)$ is the biased trial free energy determined by parameters $\theta$ and collective variable $x$, where $W_a(x)$ is the biasing potential, and vectors in $x$ and $\theta$ are implicit. Since $W_a(x)$ is not a function of $\theta$ and does not affect the minimization, the log likelihood to be maximized with respect to the parameters $\theta$ of the trial function $F$ is: $$\begin{aligned} \ln \ell(\theta) &=& \sum_a \left[-\ln Z^a - \frac{1}{N_a} \sum_{i=1}^{N_a} F_{i,a}(\theta,x_a)\right] \nonumber \\ &=& \sum_a \left[ -\frac{1}{N_a} \sum_{i=1}^{N_a} F_i(\theta,x_a) - \ln \int_{\Gamma_a} e^{-F_{i,a}(\theta,x)} dx \right] \nonumber \\\end{aligned}$$ If we (i) substitute $k$ as a label for the biasing potential rather than $a$ as the label of windows, and (ii) recognize that $\int_{\Gamma_a}$ should be the same or approximately the same as $\int_\Gamma$, since samples from a biased potential will be mostly constrained to a subset of $\Gamma$ but can in principle appear anywhere in $\Gamma$, then we can translate this into the terminology of the present paper. The window $a$ becomes the biased simulation $k$, $N_a$ becomes $N_k$, $x$ becomes $\xi$, vectors are noted explicitly, and we obtain: $$\begin{aligned} \ln \ell(\vec{\theta}|\{\vec{x}\}) &=& \sum_{k=1}^{K} \left[ -\frac{1}{N_k} \sum_{i=1}^{N_k} F(\vec{\xi}_i|\vec{\theta}) \right. \nonumber \\ & & \left. - \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' \right]\end{aligned}$$ This expression is identical to eq. \[eq:likelihooddirect\] and, up to a minus sign, eq. \[eq:simplesum\]. Accordingly, when viewed through the lens of the development presented in this paper—and with the previously mentioned assumptions about the definitions of windows and range of integrals—vFEP corresponds to a particular choice of biased state weighting within a Kullback-Leibler divergence (eq. \[eq:simplesum\]) or likelihood formulation (eq. \[eq:likelihooddirect\]). As discussed above, this expression is hard to justify from a theoretical or practical perspective, but if the direct sum over biasing potentials is changed to one weighted by $N_k$, then it would become the easier-to-work-with and better justified eq. \[eq:likelihoodbiased\]. A Bayesian framework for PMF estimation ======================================= Equipped with the prescriptions to calculate the likelihood of observations under the different assumptions detailed in Section \[subsec:likelihood\], we can switch to a Bayesian framework to find distributions possessing the desirable features of an analytical form, continuity, and smoothness that are most consistent with our understanding of $F(\vec{\xi})$. We note that our use of a likelihood formulation, which was shown to be fully consistent with the KL divergence framework, is crucial in opening the door to a Bayesian formulation. As the first step in this framework, we take a candidate trial distribution $P_T(\vec{\xi} | \vec{\theta})$ and optimize its parameters $\vec{\theta}$ to form the maximum *a posteriori* probability (MAP) estimate of $P_T(\vec{\xi} | \vec{\theta})$. This estimate maximizes the Bayes posterior probability of the trial distribution, rather than simply the likelihood, given the collected (biased) samples and MBAR estimates of the relative free energy differences $\Delta f_{ij} = f_j - f_i$ between biased states.
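The inputs to this framework are the MBAR free energies $\{f_k\}$ and the unbiased-state weights $W(\vec{x}_n)$. The production calculations in this paper use `pymbar`, but the self-consistent MBAR equations are compact enough to sketch directly; the sketch below is a minimal, unoptimized illustration with hypothetical array names (`u_kn`, `N_k`, `u_target`) and none of the convergence acceleration or uncertainty analysis of the real implementation.

```python
import numpy as np
from scipy.special import logsumexp

def mbar_free_energies_and_weights(u_kn, N_k, u_target, n_iter=10000, tol=1e-10):
    """Minimal self-consistent MBAR iteration.
    u_kn     : (K, N) reduced potential of every pooled sample evaluated in every biased state
    N_k      : (K,)   number of samples collected from each biased state
    u_target : (N,)   reduced potential of every sample in the target (here, unbiased) state
    Returns the biased-state free energies f_k and normalized target-state weights W(x_n)."""
    N_k = np.asarray(N_k, dtype=float)
    K, N = u_kn.shape
    f_k = np.zeros(K)
    for _ in range(n_iter):
        # log of the MBAR denominator  sum_k N_k exp(f_k - u_k(x_n))  for every sample n
        log_denom = logsumexp(f_k[:, None] - u_kn, b=N_k[:, None], axis=0)
        f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]                      # free energies are defined up to a constant
        if np.max(np.abs(f_new - f_k)) < tol:
            f_k = f_new
            break
        f_k = f_new
    log_denom = logsumexp(f_k[:, None] - u_kn, b=N_k[:, None], axis=0)
    log_w = -u_target - log_denom              # unnormalized ln W(x_n) in the target state
    W_n = np.exp(log_w - logsumexp(log_w))     # normalized so that sum_n W(x_n) = 1
    return f_k, W_n
```

The relative free energy differences $\Delta f_{ij} = f_j - f_i$ referred to above are differences of the returned $f_k$, and the weights $W(\vec{x}_n)$ are the ones that enter the likelihoods of Section \[subsec:likelihood\].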
As we introduce our Bayesian formulation, we note that the free energies emerging from the MBAR equations have no free parameters; they are the only estimated normalizing constants satisfying the self-consistent equations in eq. \[equation:estimator-of-free-energies\]. It is possible to employ a Bayesian approach to free energy estimation by sampling either the density of states [@Habeck:PRL:2012] or the weights of each sample in the unbiased state [@Moradi:NC:2015], allowing one to incorporate additional priors about the simulations in addition to priors on the shape of the potential of mean force. However, since the free energy is defined completely by the Boltzmann distribution, and since the MBAR equations provide the lowest-variance importance sampling estimator and are asymptotically unbiased, in the absence of other information about the system the simplest and least biased approach is to employ the MBAR estimates for $\{f_i\}$. A difference from previous efforts is that we cast our approach within a Bayesian framework that enables transparent incorporation of Bayesian priors, Bayesian uncertainty quantification, and Bayesian model selection about the functional form of the potential of mean force. Although we do not do so here, this formalism also sets the stage for adaptive sampling, in which regions of the probability distribution containing the most uncertainty are identified for additional biased sampling to optimally direct computational resources. This is similar in spirit to the adaptive approach of Schofield [@Schofield:JPCB:2017], which presents an elegant means to alter the analytical representation of the unbiased probability distribution to minimize uncertainty, but would go beyond it by actually guiding the collection of additional data to optimally reduce uncertainty in the estimated distribution. Given the set of biased samples $\{\vec{x}_n\}$, their collective variable mappings $\{\vec{\xi}_n\} = \{\Phi(\vec{x}_n)\}$, and the associated weights in the (unbiased) thermodynamic state calculated by MBAR, $W(\vec{x}_n)$ (eq. \[eq:MBARweight\]), we apply Bayes’ theorem [@Sivia::2006] to construct an expression for the posterior probability of the parameters $\vec{\theta}$ given the data $\{\vec{x}_n\}$, obtaining: $$\begin{aligned} \mathcal{P}(\vec{\theta} | \{\vec{x}_n\}) &= \frac{\mathcal{P}(\{\vec{x}_n\} | \vec{\theta}) \mathcal{P}(\vec{\theta})}{\mathcal{P}(\{\vec{x}_n\})} \label{eq:Bayes}\end{aligned}$$ where $\mathcal{P}(\vec{\theta} |\{\vec{x}_n\})$ is the *posterior probability* of the parameters $\vec{\theta}$ given the sampled data, $\mathcal{P}(\{\vec{x}_n\} | \vec{\theta}) = \ell(\vec{\theta}|\{\vec{x}_n\})$ is the previously-defined *likelihood* specifying the probability of the collected samples given the particular choice of parameters, $\mathcal{P}(\vec{\theta})$ is the *prior probability* of the parameters before any data have been collected, and $\mathcal{P}(\{\vec{x}_n\}) = \int \mathcal{P}(\{\vec{x}_n\} | \vec{\theta}) \mathcal{P}(\vec{\theta}) d\vec{\theta}$ is the probability of observing the samples that we did (the *evidence*); it serves to normalize the posterior and contains no dependence on the parameters $\vec{\theta}$. Importantly, the prior enables us to transparently encode any prior beliefs or knowledge about the system into our analysis that can serve to regularize and stabilize our estimation.
The MAP estimate of the parameters follows from maximization of the log posterior: $$\begin{aligned} \label{eq:MAP} \vec{\theta}^\mathrm{MAP}(\{\vec{x}_n\}) &=& \overset{\mathrm{argmax}}{\vec{\theta}} \ln \mathcal{P}(\vec{\theta} | \{\vec{x}_n\}) \nonumber \\ &=& \overset{\mathrm{argmax}}{\vec{\theta}} \left( \ln \mathcal{P}(\{\vec{x}_n\} | \vec{\theta}) + \ln \mathcal{P}(\vec{\theta}) \right) \nonumber \\ &=& \overset{\mathrm{argmax}}{\vec{\theta}} \left( \ln \ell(\vec{\theta}|\{\vec{x}_n\}) + \ln \mathcal{P}(\vec{\theta}) \right)\end{aligned}$$ Exploiting our previous observation that maximizing a log likelihood is the same as minimizing the corresponding KL divergence from an empirical distribution [@Eguchi:JMA:2006], we can equivalently view maximization of the Bayes posterior (eq. \[eq:MAP\]) from a frequentist perspective as minimization of the Kullback-Leibler divergence or maximization of the log likelihood subject to regularization by the logarithm of the Bayes prior. To use eq. \[eq:MAP\] we need to adopt a form for the likelihood $\ell(\vec{\theta}|\{\vec{x}_n\})$ and prior $\mathcal{P}(\vec{\theta})$. The development in Section \[subsec:likelihood\] suggests we adopt eq. \[eq:like1\] or \[eq:like2\] as candidates for the likelihood, where we explicitly assumed the samples to be i.i.d. If the samples cannot be treated as i.i.d., then the counts $N$ or $N_k$ should be corrected by an inefficiency factor reflecting the presence of correlations in the sampling procedure [@Gallicchio:JPCB:2005; @Zhu:JCC:2012]. The simplest and most common choice for the prior is a uniform prior $\mathcal{P}(\vec{\theta})$ = 1. With no dependence on the model parameters $\vec{\theta}$, it drops out of the maximization in eq. \[eq:MAP\] and the MAP estimate $\vec{\theta}^\mathrm{MAP}$ becomes coincident with the maximum likelihood (ML) estimate $\vec{\theta}^\mathrm{ML}$: $$\vec{\theta}^\mathrm{ML}(\{\vec{x}_n\}) = \overset{\mathrm{argmax}}{\vec{\theta}} \ln \ell(\vec{\theta}|\{\vec{x}_n\}). \label{eq:ML}$$ In principle, arbitrary priors are admissible—even improper priors that do not have a finite integral—provided the posterior is proper (i.e., integrates to unity) [@Gelman::2013]. In a Bayesian sense, we use the prior to encode prior knowledge or belief about the character of the probability distribution (such as smoothness of the splines). In the frequentist sense, the prior serves to regularize the probability estimate, providing a bias-variance trade-off and compensating for sparse data. In a practical sense, the appropriate prior to adopt depends on the form of the model selected $P_T(\vec{\xi} | \vec{\theta})$, the size and quality of the simulation data, and the degree of prior belief or understanding of the system. Adopting the likelihood in eq. \[eq:like1\], the maximization in eq.
\[eq:MAP\] can be expressed as: $$\begin{aligned} \vec{\theta}^{MAP}(\{\vec{x}_n\}) &= \overset{\mathrm{argmax}}{\vec{\theta}} \left[ -N\sum_{n=1}^N W(\vec{x}_n) F(\vec{\xi}_n|\vec{\theta}) - N\ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}' + \ln \mathcal{P}(\vec{\theta}) \right] \nonumber \\ &= \overset{\mathrm{argmin}}{\vec{\theta}} \left[ N\sum_{n=1}^N W(\vec{x}_n) F(\vec{\xi}_n|\vec{\theta}) + N\ln \int e^{-F(\vec{\xi}'|\vec{\theta})} d\vec{\xi}' - \ln \mathcal{P}(\vec{\theta})\right] \nonumber \\ &= \overset{\mathrm{argmin}}{\vec{\theta}} \left[ N\sum_{n=1}^{N} W(\vec{x}_n) F(\vec{\xi}_n|\vec{\theta}) - \ln \mathcal{P}(\vec{\theta})\right] \quad \mathrm{s.t.} \quad \int_\Gamma e^{-F(\vec{\xi}|\vec{\theta})} d\vec{\xi} = 1 \label{eq:max1},\end{aligned}$$ where in going from line 2 to 3 we have appealed to the identity $P(\vec{\xi}|\vec{\theta}) = e^{-F(\vec{\xi}|\vec{\theta})}$ (eq. \[eqn:logP\]) and asserted that this distribution must be normalized. Adopting the product of likelihoods in eq. \[eq:like2\], the maximization in eq. \[eq:MAP\] becomes: $$\begin{aligned} \vec{\theta}^{MAP}(\{\vec{x}_n\}) &= \overset{\mathrm{argmax}}{\vec{\theta}} \left[ -\sum_{n=1}^{N} F(\vec{\xi}_n|\vec{\theta}) - \sum_{k=1}^K N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' + \ln \mathcal{P}(\vec{\theta}) \right] \nonumber \\ &= \overset{\mathrm{argmin}}{\vec{\theta}} \left[ \sum_{n=1}^{N} F(\vec{\xi}_n|\vec{\theta}) + \sum_{k=1}^K N_k \ln \int e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' - \ln \mathcal{P}(\vec{\theta})\right] \nonumber \\ &= \overset{\mathrm{argmin}}{\vec{\theta}} \left[ \sum_{n=1}^{N} F(\vec{\xi}_n|\vec{\theta}) - \ln \mathcal{P}(\vec{\theta})\right] \quad \mathrm{s.t.} \quad \int_\Gamma e^{-F(\vec{\xi}'|\vec{\theta})-b_k(\vec{\xi}')} d\vec{\xi}' = 1 \; \; \forall k \label{eq:max2}.\end{aligned}$$ There are thus two approaches to find the MAP or ML estimate: an unconstrained minimization enforcing the normalization implicitly (second-to-last lines in eq. \[eq:max1\] and \[eq:max2\]), and a constrained minimization enforcing the normalization explicitly (last lines in eq. \[eq:max1\] and \[eq:max2\]). The constrained minimization versions of the above expressions can be solved using the method of Lagrange multipliers or through any other constrained optimization method such as the interior point method or sequential quadratic programming (SQP). The relative efficiency of the two approaches will depend on the details of the available software methods as well as the particular forms of the biases and $F(\vec{\xi}|\vec{\theta})$. Model selection =============== The Akaike information criterion (AIC) or Bayesian information criterion (BIC) provide a principled means to discriminate between different possible choices for the Bayes prior and the trial probability distribution. The AIC is defined as [@Akaike:ITAC:1974]: $$\begin{aligned} AIC = 2k - 2 \ln \ell(\vec{\theta}|\{\vec{x}_n\}), \label{eq:AIC}\end{aligned}$$ where $k$ is the number of estimated parameters in the model. The BIC is defined as [@Schwarz:AS:1978]: $$\begin{aligned} BIC = k\ln N - 2 \ln \ell(\vec{\theta}|\{\vec{x}_n\}), \label{eq:BIC}\end{aligned}$$ where $N$ is the number of data points. If we compute $\vec{\theta} = \vec{\theta}^\mathrm{MAP}$ for a number of model choices $i$, we can use these parameter estimates to compute the set of AIC or BIC values $\{a_i\}$ for the candidate models. The model with the lowest $a_i$ is the single model that is best supported by the data.
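As a minimal illustration of the unconstrained route, the sketch below minimizes the unbiased-state objective of eq. \[eq:max1\] with `scipy.optimize.minimize` and then evaluates the AIC and BIC of eqs. \[eq:AIC\] and \[eq:BIC\] at the optimum. The names are hypothetical, a piecewise-linear trial PMF stands in for the B-splines used in the example below, normalization is enforced implicitly through the quadrature of $\int e^{-F} d\vec{\xi}$, and the default prior is uniform, so the MAP and ML estimates coincide.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, xi_n, W_n, knots, xi_grid, log_prior):
    """Negative log posterior for the unbiased-state likelihood (eq. [eq:max1], unconstrained form)."""
    N = len(xi_n)
    F_samples = np.interp(xi_n, knots, theta)    # piecewise-linear trial PMF at the samples
    F_grid = np.interp(xi_grid, knots, theta)
    log_norm = np.log(np.trapz(np.exp(-F_grid), xi_grid))
    return N * (np.dot(W_n, F_samples) + log_norm) - log_prior(theta)

def fit_map(xi_n, W_n, knots, xi_grid, log_prior=lambda theta: 0.0):
    """MAP estimate of the knot values (equal to the ML estimate for the default uniform prior),
    together with the AIC and BIC of eqs. [eq:AIC] and [eq:BIC]."""
    theta0 = np.zeros(len(knots))
    result = minimize(neg_log_posterior, theta0,
                      args=(xi_n, W_n, knots, xi_grid, log_prior), method="L-BFGS-B")
    theta = result.x
    # log likelihood of eq. [eq:likelihoodunbiased] at the optimum (prior contribution excluded)
    log_like = -neg_log_posterior(theta, xi_n, W_n, knots, xi_grid, lambda t: 0.0)
    n_params, N = len(theta), len(xi_n)
    aic = 2 * n_params - 2 * log_like
    bic = n_params * np.log(N) - 2 * log_like
    return theta, aic, bic
```

Scanning `fit_map` over a range of knot counts and keeping the model with the lowest AIC or BIC reproduces the kind of model selection applied in the umbrella sampling example later in the paper.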
A more sophisticated approach to model selection defines the smallest of the $\{a_i\}$ as $a_\mathrm{min}$, then assigns the relative likelihood of model $i$ as $r_i = e^{-\Delta_i/2} = e^{-(a_i - a_\mathrm{min})/2}$. The model weights follow from the normalized $r_i$ and provide the likelihood of model $i$ [@Schofield:JPCB:2017]: $$\begin{aligned} \omega_i = \frac{r_i}{\sum_k r_k} = \frac{e^{-\Delta_i/2}}{\sum_k e^{-\Delta_k/2}}.\end{aligned}$$ Adopting a threshold $q$ = 0.05 (for example), the $\{r_i\}$ can be used to discard models from consideration and/or determine that there is insufficient evidence to choose one model over the other. The $\{\omega_i\}$ may also be used as weighting factors with which to construct a multi-model composed from the weighted sum of the predictions of each candidate model. Bayesian uncertainty quantification =================================== The $\vec{\theta} = \vec{\theta}^\mathrm{MAP}$ estimate represents the single best point estimate of the parameters of the trial distribution $P_{T}(\vec{\xi}|\vec{\theta})$ given the data $\{\vec{x}_n\}$ and the prior $\mathcal{P}(\vec{\theta})$. Uncertainties around these point estimates may be approximated by analytical error expectations or through bootstrap estimation [@Paliwal:JCTC:2011]. A fully Bayesian uncertainty estimate is defined by the distribution of $\vec{\theta}$ dictated by the Bayes posterior [@Ferguson:JCC:2017]. Empirical samples of $\vec{\theta}$ from the Bayes posterior may be generated using the Metropolis-Hastings algorithm. This Markov Chain Monte-Carlo (MCMC) approach generates a sequence of parameter realizations that converges to the stationary distribution of the Bayes posterior [@Smith::2013]. Under this approach we propose trial moves in $\vec{\theta}$ that are accepted or rejected according to the Metropolis-Hastings acceptance criterion [@Smith::2013; @Hastings:B:1970]: $$\begin{aligned} \alpha(\vec{\theta}^\nu | \vec{\theta}^\mu) &= \min \left[ \frac{\mathcal{P}(\vec{\theta}^\nu | \{\vec{x}_n\}) \cdot q(\vec{\theta}^\mu | \vec{\theta}^\nu)}{\mathcal{P}(\vec{\theta}^\mu | \{\vec{x}_n\}) \cdot q(\vec{\theta}^\nu | \vec{\theta}^\mu)}, 1 \right] \notag \\ &= \min \left[ \frac{\mathcal{P}(\{\vec{x}_n\} | \vec{\theta}^\nu) \cdot \mathcal{P}(\vec{\theta}^\nu) \cdot q(\vec{\theta}^\mu | \vec{\theta}^\nu)}{\mathcal{P}(\{\vec{x}_n\} | \vec{\theta}^\mu) \cdot \mathcal{P}(\vec{\theta}^\mu) \cdot q(\vec{\theta}^\nu | \vec{\theta}^\mu)}, 1 \right] \notag \\ &= \min \left[ \frac{\ell(\vec{\theta}^\nu|\{\vec{x}_n\}) \cdot \mathcal{P}(\vec{\theta}^\nu) \cdot q(\vec{\theta}^\mu | \vec{\theta}^\nu)}{\ell(\vec{\theta}^\mu|\{\vec{x}_n\}) \cdot \mathcal{P}(\vec{\theta}^\mu) \cdot q(\vec{\theta}^\nu | \vec{\theta}^\mu)}, 1 \right] \label{eqn:MH}\end{aligned}$$ where $\alpha(\vec{\theta}^\nu | \vec{\theta}^\mu)$ is the probability of accepting a trial move from parameter set $\vec{\theta}^\mu$ to parameter set $\vec{\theta}^\nu$, and $q(\vec{\theta}^\nu | \vec{\theta}^\mu)$ is the probability of proposing this trial move. We have invoked Bayes’ Theorem (eq. \[eq:Bayes\]) in going from the first line to the second, and observe that (importantly) the evidence has canceled top and bottom. In going from the second line to the third, we employed the identity $\mathcal{P}(\{\vec{x}_n\} | \vec{\theta}) = \ell(\vec{\theta}|\{\vec{x}_n\})$. 
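A minimal random-walk sampler for this posterior is sketched below; the names are hypothetical, and `log_post` is assumed to return $\ln\left[\ell(\vec{\theta}|\{\vec{x}_n\})\,\mathcal{P}(\vec{\theta})\right]$ up to a constant, so the evidence never needs to be evaluated, exactly as in eq. \[eqn:MH\]. The Gaussian proposal used here is symmetric, so the proposal-density ratio in eq. \[eqn:MH\] is unity (this simplification is formalized next).

```python
import numpy as np

def sample_posterior(log_post, theta_start, n_steps=50000, step_size=0.05, rng=None):
    """Random-walk Metropolis-Hastings sampling of the Bayes posterior (eq. [eqn:MH])."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta_start, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.size)
        # ln q(theta|proposal) - ln q(proposal|theta) = 0 for this symmetric proposal
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples[i] = theta
    return samples
```

Percentiles of $F(\vec{\xi}|\vec{\theta})$ evaluated across the harvested $\vec{\theta}$ realizations then give interval estimates of the kind reported in the example below.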
In the event that symmetric trial move proposal probabilities are adopted such that $q(\vec{\theta}^\nu | \vec{\theta}^\mu) = q(\vec{\theta}^\mu | \vec{\theta}^\nu)$, the Metropolis-Hastings acceptance criterion reduces to the Metropolis criterion [@Smith::2013; @Metropolis:JCP:1953]: $$\begin{aligned} \alpha(\vec{\theta}^\nu | \vec{\theta}^\mu) &= \min \left[ \frac{\ell(\vec{\theta}^\nu|\{\vec{x}_n\}) \cdot \mathcal{P}(\vec{\theta}^\nu)}{\ell(\vec{\theta}^\mu|\{\vec{x}_n\}) \cdot \mathcal{P}(\vec{\theta}^\mu)}, 1 \right] \label{eq:Met}\end{aligned}$$ We initialize the Markov chain from $\vec{\theta}^\mathrm{MAP}$ corresponding to the maximum of the Bayes posterior $\mathcal{P}(\vec{\theta} | \{\vec{x}_n\})$ and propose trial moves that maintain the normalization $\int_\Gamma \mathcal{P}(\vec{\xi} | \vec{\theta}) d\vec{\xi} = 1$. By monitoring $\mathcal{L}(\vec{\theta} | \{\vec{x}_n\}) = \ln \left( \mathcal{P}(\{\vec{x}_n\} | \vec{\theta}) \mathcal{P}(\vec{\theta}) \right) = \ln \ell(\vec{\theta}|\{\vec{x}_n\}) + \ln \mathcal{P}(\vec{\theta})$—which equals the logarithm of the Bayes posterior up to an additive constant with no $\vec{\theta}$ dependence (eq. \[eq:Bayes\])—we can determine that the Markov chain has converged when $\mathcal{L}(\vec{\theta} | \{\vec{x}_n\})$ plateaus and fluctuates around a stable mean. At this point we may harvest realizations of $\vec{\theta}$ distributed according to the Bayes posterior. Using these parameter realizations, we can construct realizations of $\mathcal{P}(\vec{\xi} | \vec{\theta})$ to quantify the uncertainties in this estimated distribution. Example: Umbrella sampling of protein sidechain torsion within binding cavity ============================================================================= As an illustrative example, we consider the application of our mathematical framework to compute a 1D PMF from an umbrella sampling simulation. Code implementing these methods is publicly available in the 'pmf' branch of `http://github.com/choderalab/pymbar`, in the script `examples/umbrella-sampling-smoothpmf.py`. The data correspond to umbrella sampling simulations of the $\chi$ torsion of a valine sidechain in lysozyme L99A with benzene bound in the cavity [@Mobley:JMB:2007] (fig. \[fig:lyspic\]). ![$\chi$ torsion angle of Val111 in L99A T4 lysozyme, around which the potential of mean force is calculated using umbrella sampling\[fig:lyspic\].](lys111_picture.png){width="0.8\columnwidth"} We analyze data from 26 biased simulations employing umbrella potentials at a range of dihedral values with harmonic biasing constants between 100 and 400 kJ/mol/nm$^2$. A 100 ps simulation was carried out under each umbrella potential, with angles and energies saved every 0.2 ps for a total of 500 samples at each state. The data were analyzed for correlations, and approximately every other data point was retained (the exact frequency varying with state), for a total of 7446 data points, ranging from 42 to 410 points per umbrella. We examine the histogram approach (with 30 bins, a number chosen to be visually clear—the number of bins can be chosen completely independently of the number of umbrella simulations run), and the kernel density approximation with a Gaussian kernel whose bandwidth is half the bin size, in this case $\frac{1}{2} \times 360/30 = 6$ degrees.
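Both of these non-parametric estimates take a particularly simple form once the MBAR weights are in hand; a minimal sketch follows, with hypothetical names, angles assumed to lie in $[-180^{\circ}, 180^{\circ})$, and the periodicity of the torsion ignored for brevity.

```python
import numpy as np

def histogram_pmf(xi_n, W_n, n_bins=30, angle_range=(-180.0, 180.0)):
    """Histogram PMF: F_i = -ln(total MBAR weight in bin i), reported relative to its minimum.
    Empty bins yield +inf; dividing by the (constant) bin width would only shift F."""
    prob, edges = np.histogram(xi_n, bins=n_bins, range=angle_range, weights=W_n)
    centers = 0.5 * (edges[:-1] + edges[1:])
    F = -np.log(prob)
    return centers, F - F[np.isfinite(F)].min()

def kde_pmf(xi_n, W_n, xi_grid, sigma=6.0):
    """Weighted Gaussian kernel density PMF estimate with bandwidth sigma (degrees)."""
    diff = xi_grid[:, None] - xi_n[None, :]
    kernel = np.exp(-0.5 * (diff / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    density = kernel @ W_n
    F = -np.log(density)
    return F - F.min()
```
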
We also examine parameterized splines as our representation; in this example we use cubic B-splines with varying numbers of uniformly placed knots, though the theory is independent of these particular choices of spline. We note that one could use splines to fit either the PMF $F(\vec{\xi}|\vec{\theta})$ or the probability distribution $P(\vec{\xi}|\vec{\theta})$. However, we find that it becomes difficult to satisfy the non-negativity condition of $P(\vec{\xi}|\vec{\theta})$ when using standard spline implementations, and that large changes in the PMF propagate exponentially to the probability distribution, making it challenging to fit stably and robustly. For numerical stability, we therefore recommend using splines to approximate $F(\vec{\xi}|\vec{\theta})$ rather than $P(\vec{\xi}|\vec{\theta})$. We examine the parameterized spline representations emerging from the optimizations defined by the expressions in eq. \[eq:max1\]—corresponding to the unbiased state likelihood in eq. \[eq:like1\], log likelihood in eq. \[eq:likelihoodunbiased\], and KL divergence in eq. \[eq:kldiverge\]—and eq. \[eq:max2\]—corresponding to the product of biased states likelihood in eq. \[eq:like2\], log likelihood in eq. \[eq:likelihoodbiased\], and KL divergence in eq. \[eq:sumkldiverge\]. We will refer to the first as the “unbiased state likelihood”, and the second as the “biased states likelihood,” as it combines samples from all biased states. Efficient optimization of these expressions requires calculating the gradient and, potentially, the Hessian. The use of B-splines, which construct the spline in terms of local basis functions, makes this calculation relatively efficient, as detailed in the Appendix. For simplicity, we elect to use a uniform distribution of spline knot locations over the domain, but these could be adaptively situated by optimizing their locations to maximize the posterior, as proposed by Schofield [@Schofield:JPCB:2017]. For the Bayes prior, where we compute the full posterior rather than just the likelihood, we adopt an unnormalized Gaussian prior on the differences between successive spline knot values: $$\begin{aligned} \mathcal{P}(\vec{\theta}) = \prod_{c=1}^{C-1} e^{-\alpha (\theta_c-\theta_{c+1})^2} \label{eq:smooth_prior} \end{aligned}$$ where $\alpha$ is a hyperparameter that controls the degree of smoothing regularization imposed upon the trial distribution. Selecting $\alpha$ = 0 corresponds to a uniform prior that drops out of the maximization, so that $\vec{\theta}^\mathrm{MAP} = \vec{\theta}^\mathrm{ML}$. Selecting $\alpha$ $>$ 0 favors smoother splines with less variation from knot to knot. We examine the effect of priors governed by the choice of $\alpha = k/n$, where $n$ is the number of spline knots and $k$ is a constant. Uncertainties are estimated by MCMC sampling of the Bayes posterior using the Metropolis-Hastings algorithm and acceptance criterion (eq. \[eqn:MH\]). ![AIC (solid) and BIC (dotted) for splines maximizing PMF likelihoods for the unbiased state estimator (red, eq. \[eq:likelihoodunbiased\]) and biased states estimator (blue, eq. \[eq:likelihoodbiased\]) as a function of the number of spline knots, referenced from the minimum of each method. Although the curves are noisy, and occasionally nonmonotonic, they provide a useful guide towards choosing optimal numbers of parameters for models, as can be seen by comparison to fig. \[fig:compare\_pmf\].
\[fig:IC\]](IC_method.pdf){width="\columnwidth"} The time-limiting factor, both for the optimizations and for MCMC sampling of the posterior, is the numerical quadrature of the integral $\int P_T(\vec{\xi} | \vec{\theta}) d\vec{\xi}$. For the log likelihood of the unbiased state (eq. \[eq:likelihoodunbiased\]), the integral enforcing the normalization of $P_T$ is carried out only over the unbiased trial function, whereas for approaches considering all states (eq. \[eq:likelihoodbiased\]), integrals are carried out over all $K$ biased trial functions, which is thus roughly $K$ times slower. The AIC and BIC allow us to select the number of spline knots best supported by the data. We plot in fig. \[fig:IC\] the AIC (eq. \[eq:AIC\]) and BIC (eq. \[eq:BIC\]) for the unbiased state likelihood and biased states likelihood choices. In the unbiased state case, the AIC exhibits a local minimum at 16 knots and a global minimum at 26, whereas the BIC—which penalizes excessive parameters more strongly than the AIC—possesses a local minimum at 24 knots and a global minimum at 16. In the biased states case, the AIC and BIC both exhibit clear global minima at 14 knots. ![Splines maximizing the (a) unbiased state likelihood (eq. \[eq:likelihoodunbiased\] or eq. \[eq:max1\] with uniform prior) and (b) biased states likelihood (eq. \[eq:likelihoodbiased\] or eq. \[eq:max2\] with uniform prior) as a function of the number of spline knots, with a histogram (black) as a reference. Knot numbers identified as optimal by both AIC and BIC appear to be good fits compared to other numbers of splines that under- or overfit the curve \[fig:compare\_pmf\].](compare_pmf_manyunbiased.pdf){width="\columnwidth"} ![Splines maximizing the (a) unbiased state likelihood (eq. \[eq:likelihoodunbiased\] or eq. \[eq:max1\] with uniform prior) and (b) biased states likelihood (eq. \[eq:likelihoodbiased\] or eq. \[eq:max2\] with uniform prior) as a function of the number of spline knots, with a histogram (black) as a reference. Knot numbers identified as optimal by both AIC and BIC appear to be good fits compared to other numbers of splines that under- or overfit the curve \[fig:compare\_pmf\].](compare_pmf_manybiased.pdf){width="\columnwidth"} We can see how the behavior of the PMF changes as a function of the number of knots, and how the AIC and BIC help select optimal knot numbers, in fig. \[fig:compare\_pmf\]. In this figure, we plot maximum likelihood PMFs under the unbiased state likelihood (eq. \[eq:max1\], in fig. \[fig:pmf\_unbiased\]) and biased states likelihood (eq. \[eq:max2\], in fig. \[fig:pmf\_biased\]) as a function of the number of spline knots, along with the histogram estimate equipped with uncertainties generated by error propagation from the weights via MBAR [@Shirts:JCP:2008]. As expected, higher numbers of knots provide improved fitting, but overfitting becomes clear for larger numbers of knots, especially in the case of fits using the unbiased state likelihood. However, model complexities corresponding to AIC/BIC minima fit the data relatively well in both cases. We note that the unbiased state PMF fits in fig. \[fig:pmf\_unbiased\], even for the 10-knot spline, are tightly grouped at the various PMF minima, but they vary significantly at the maxima, as there are fewer constraints on the maxima than on the minima using this approach. In contrast, all fits with sufficient functional flexibility (more than 10 spline knots) using the biased states approach agree relatively well across the entire range of the PMF (fig.
\[fig:pmf\_biased\]), even with as few as 14 spline knots, the value corresponding to the minimum of both AIC and BIC for the biased states likelihood. ![Comparison of methods, including bootstrap uncertainty estimates. The number of spline knots employed in each method was selected according to the AIC/BIC analysis in fig. \[fig:IC\]. The same number of spline knots is used for vFEP as for the biased states estimator. The histogram employs 30 bins, and the kernel density approximation employs Gaussian kernels with $\sigma$ = 6$^\circ$. Uncertainties are estimated by bootstrap resampling with $n$ = 40. We observe that error bars are significantly greater at the barriers for the PMF maximizing the likelihood in eq. \[eq:likelihoodunbiased\] than for the PMF maximizing the likelihood in eq. \[eq:likelihoodbiased\], which has very low uncertainty throughout the entire range of values. Histogram uncertainties are moderately large over the entire range. \[fig:withbars\]](compare_pmf_withuncertainties.pdf){width="\columnwidth"} Adding bootstrapped uncertainty estimates to the PMFs helps better show the relationship between the methods and their strengths and weaknesses. We present in fig. \[fig:withbars\] a comparison of the histogram (with 30 bins), the kernel density approximation (with Gaussian kernels with $\sigma$ of 6$^\circ$), the unbiased state likelihood and biased states likelihood splines employing the AIC/BIC-optimal number of knots, and vFEP (using the same number of spline knots as the biased states likelihood case). Uncertainties for all estimates are computed from an ensemble of 40 bootstrap samples from each of the umbrellas. All methods give relatively similar results, which is to be expected with a well-sampled system and careful selection of parameters. In particular, the PMF using vFEP (subject to the assumptions discussed earlier in the text) is close to the biased states likelihood result. This result is expected because the two approaches coincide in the limit of equal numbers of uncorrelated samples per state. ![image](bayesian_95p_200K_unbias_a0_1.pdf){width="95.00000%"} ![image](bayesian_95p_200K_unbias_a1.pdf){width="95.00000%"} ![image](bayesian_95p_50K_bias_a0_1.pdf){width="95.00000%"} ![image](bayesian_95p_50K_bias_a1.pdf){width="95.00000%"} In fig. \[fig:mcmc\] we demonstrate the utility of fully Bayesian uncertainty quantification. Uncertainties in the MAP splines are computed from 50,000 (for the biased states posterior, which is slower to evaluate) and 200,000 (for the unbiased state posterior) steps of MCMC sampling from the Bayes posterior. Uncertainties represent the 95% confidence intervals at each spline knot. In both cases, we show results for 10-, 20-, and 30-knot splines for two different Gaussian priors (eq. \[eq:smooth\_prior\]): (i) $\alpha=0.1/n$ in fig. \[fig:mcmc\_unbiaseda\] and fig. \[fig:mcmc\_biasedc\], where $n$ is the number of spline knots, and (ii) $\alpha=1/n$ in fig. \[fig:mcmc\_unbiasedb\] and fig. \[fig:mcmc\_biasedd\]. We recall that larger values of $\alpha$ impose a stronger influence of the smoothing prior and are expected to result in smoother posterior distributions. The choice of $\alpha=0.1/n$ produces very minor differences between the ML and MAP curves (fig. \[fig:mcmc\_unbiaseda\] and \[fig:mcmc\_biasedc\]), whereas $\alpha=1/n$ results in a visibly apparent difference between the two curves (fig. \[fig:mcmc\_unbiasedb\] and \[fig:mcmc\_biasedd\]). We see that under the biased states formulation (figs.
\[fig:mcmc\_biasedc\] and  \[fig:mcmc\_biasedd\]), uncertainties are relatively low and constant across the full range of the PMF, whereas in the unbiased state formulation (figs. \[fig:mcmc\_unbiaseda\] and  \[fig:mcmc\_unbiasedb\]), the uncertainties are largest in the high free energy regions where the likelihood function is least constrained (cf. eq. \[eq:max1\]). Under the unbiased state formulation, the stronger smoothing prior with $\alpha=1/n$ (fig. \[fig:mcmc\_unbiasedb\]) is valuable in reducing the size of the confidence intervals at the peaks of the PMF (note the larger y-axis range in fig. \[fig:mcmc\_unbiaseda\] required to accommodate the large uncertainty envelopes). We note that due to the significant freedom in the 30-knot splines, MCMC sampling of the posterior nearly diverges in fig. \[fig:mcmc\_unbiaseda\] with $\alpha=0.1/n$. In contrast, the biased states formulation provides more constraints across the entire PMF (cf. eq. \[eq:max2\]), and the MCMC error bounds are smaller over the entire range of the PMF for both choices of $\alpha$ (fig. \[fig:mcmc\_biasedc\] and  \[fig:mcmc\_biasedd\]). Conclusions =========== In this paper, we have presented a Bayesian formalism to compute potentials of mean force from the empirical distributions generated by biased sampling. Within this formalism, we avoid any arbitrary choice of histogram in either the definition of the PMF or the calculation of the weights, and provide clear and explicit criteria to decide which continuous potentials of mean force are most consistent with the biased sampling data. The choice and optimization of the representation of the continuous PMF is completely decoupled from the choice of biasing functions and the calculation of the relative free energies between the biased simulations. Biasing functions can be chosen to give appropriate sampling along the collective variables of interest, and the samples and their associated Boltzmann weights are used to construct the PMF. The Bayesian formalism allows us to choose the PMF that is as close as possible to the empirical distribution of the samples we have collected, while explicitly including any prior information through our choice of prior and of the representation of the PMF functional form. Our development also clearly demonstrates the equivalence of the likelihood-based Bayesian formulation and the Kullback-Leibler-based frequentist formulation. We find that the maximum likelihood calculated only from the unbiased state (eqs. \[eq:likelihoodunbiased\] and \[eq:kldiverge\]) has a tendency to underestimate the free energy barriers along the collective variable. The product of likelihoods of the unweighted samples collected from each biased state, weighted by the number of samples collected from each biased state (eqs. \[eq:likelihoodbiased\] and \[eq:sumkldiverge\]), has much better overall performance over the entire PMF range. Surprisingly, this likelihood is exactly equal to the likelihood generated from the product over all states of the reweighted contribution of *all* samples to each biased state, again weighted by the number of samples collected from each state (cf. eqs. \[eq:sumkldiverge\] and \[eq:weightedsimplesum\]). We can then take these likelihoods and directly incorporate them into a Bayesian inference framework.
Priors on the parameters of the PMF can then be chosen using whatever criteria are most appropriate; in this study we considered a Gaussian prior enforcing smoothness, but the selection can be made based on any user-defined criteria, such as tethering free energies to particular values or enforcing similarity to previously estimated distributions. We can then use MCMC sampling of the posterior of the PMF curves to perform uncertainty quantification for arbitrary choices of prior. We demonstrate our approach in an application to the calculation of the PMF for the rotation of a valine sidechain in the L99A mutant of T4 lysozyme. The unbiased state likelihood has some clear failures in that it insufficiently constrains the PMF at the highest points. This failure shows up in multiple ways. When computing bootstrap uncertainties, the unbiased state approach has very high uncertainty at the barriers. With MCMC sampling, the issues become even clearer, with significant fluctuation in the parameters at the barriers unless a relatively severe prior is imposed. The biased states likelihood, however, behaves much more stably, with a well-constrained PMF over the entire range, even under weak priors. Code implementing this approach is distributed in `pymbar`, where the previous potential of mean force functionality, using histograms to represent the PMF, is replaced with a more comprehensive module implementing the formalism presented in this paper. The Bayesian approach we present here is directly extensible to multidimensional potentials of mean force. However, the numerical details of performing the fitting may be challenging in some cases. Both the optimization processes and the MCMC require successive quadrature of the integrals $\int P_T(\vec{\xi} | \vec{\theta}) d\vec{\xi}$, which in all but the simplest cases cannot be carried out analytically. The authors of vFEP have already noted this challenge even in two dimensions with splines [@Lee:JCTC:2014]. This approach may also be extensible to other methods that construct biasing functions and PMFs adaptively, though the equations presented above will require modification if the sampling is not strictly stationary. Appendix {#appendix .unnumbered} ======== Least squares functional fitting\[sec:least\_squares\] ------------------------------------------------------ One possibility briefly mentioned in the main text is to minimize a least squares fit of our trial function to the empirical distribution by writing the function to be minimized as $$\begin{aligned} S(\vec{\theta}) &=& \int \left(P_E(\vec{\xi}|\{\vec{x}_n\}) - e^{-F(\vec{\xi}|\vec{\theta})}\right)^2 d\vec{\xi} \\ &=& \int P_E(\vec{\xi}|\{\vec{x}_n\})^2 - 2P_E(\vec{\xi}|\{\vec{x}_n\}) e^{-F(\vec{\xi}|\vec{\theta})} \\ && + e^{-2F(\vec{\xi}|\vec{\theta})} d\vec{\xi} \\ &=& -2 \sum_{n=1}^N W(\vec{x}_n) e^{-F(\vec{\xi}_n|\vec{\theta})} \\ & & + \int e^{-2F(\vec{\xi}|\vec{\theta})} d\vec{\xi}\end{aligned}$$ where we neglect the term independent of $\vec{\theta}$ and employ eq. \[eqn:expect\] to estimate the thermal average. However, this expression is problematic as it is strongly biased towards low free energy regions. Large values of $F$ contribute very little to either the sum or the integral and are therefore largely unconstrained. One could consider ameliorating this issue by minimizing over the relative error instead of the absolute error.
Since we cannot divide by delta functions, we would have to divide by the trial function: $$\begin{aligned} S(\vec{\theta}) &=& \int \left(\frac{P_E(\vec{\xi}|\{\vec{x}_n\}) - e^{-F(\vec{\xi}|\vec{\theta})}}{e^{-F(\vec{\xi}|\vec{\theta})}}\right)^2 d\vec{\xi} \\ &=& \int \left(P_E(\vec{\xi}|\{\vec{x}_n\})^2 e^{2F(\vec{\xi}|\vec{\theta})} \right. \\ && \left. - 2P_E(\vec{\xi}|\{\vec{x}_n\}) e^{F(\vec{\xi}|\vec{\theta})} + 1\right) d\vec{\xi} \end{aligned}$$ This integral is, however, even more problematic, since the square of a delta function is not well-defined and the integral over the square of a delta function is infinite. In the direct least squares approach this did not matter, because the undefined term was independent of $\vec{\theta}$ and could be dropped, but in this case we must retain it. This seems an insurmountable deficiency, and so we choose to abandon this approach. Finally, we could consider minimizing over the squared log probabilities (i.e. the PMF), instead of the weights. This is *not* the Kullback-Leibler divergence, but does penalize divergence in the positive as well as the negative direction: $$\begin{aligned} S(\vec{\theta}) &=& \int P_E(\vec{\xi}|\{\vec{x}_n\}) \left(\ln \left(\frac{P_E(\vec{\xi}|\{\vec{x}_n\})}{P_T(\vec{\xi}|\vec{\theta})}\right)\right)^2 d\vec{\xi}\\ &=& \int P_E(\vec{\xi}|\{\vec{x}_n\}) \left(\ln P_E(\vec{\xi}|\{\vec{x}_n\}) - \ln P_T(\vec{\xi}|\vec{\theta})\right)^2 d\vec{\xi}\\ &=& \int P_E(\vec{\xi}|\{\vec{x}_n\}) \left(\left(\ln P_E(\vec{\xi}|\{\vec{x}_n\})\right)^2 \right. \\ & & \left. - 2\ln P_E(\vec{\xi}|\{\vec{x}_n\}) \ln P_T(\vec{\xi}|\vec{\theta}) +\left(\ln P_T(\vec{\xi}|\vec{\theta})\right)^2\right) d\vec{\xi}\\\end{aligned}$$ Minimizing the squared difference of the log weights is thus not really possible either, because the logarithm of the empirical distribution of delta functions that occurs in the cross term is not well defined. However, other least squares alternatives for determining the similarity of distributions, involving the *cumulative distribution*, have previously been presented by Schofield [@Schofield:JPCB:2017]. Solving the minimization problem\[section:derivatives\] ------------------------------------------------------- We briefly describe efficient optimization routines to solve the minimization problems defined in eqs. \[eq:max1\] and \[eq:max2\] in the case of splines. In the following, we suppress the explicit dependence of $F$ on $\theta$ for compactness. We start by examining the minimization of eq. \[eq:max2\]: $$S(\theta) = \sum_{n=1}^N F(\vec{\xi}_n) + \sum_{k=1}^{K} N_k \ln \int e^{-F(\vec{\xi}')-b_k(\vec{\xi}')} d\vec{\xi}' - \ln \mathcal{P}(\vec{\theta})$$ Efficient minimization requires the gradient, and potentially the Hessian, of this function with respect to the parameter vector $\vec{\theta}$.
For convenience, we define the equilibrium average performed with biasing function $k$ of some observable $A$ that is a function of $\vec{\theta}$ as: $$\langle A(\vec{\theta}) \rangle_k = \frac{\int A(\vec{\xi}' | \vec{\theta}) e^{-F(\vec{\xi}')-b_k(\vec{\xi}')} d\vec{\xi}'}{\int e^{-F(\vec{\xi}')-b_k(\vec{\xi}')}d\vec{\xi}'}$$ The $i$th component of the gradient is then: $$\nabla S(\theta)_i = \sum_{n=1}^N\frac{\partial F(\vec{\xi}_n)}{\partial \theta_i} - \sum_{k=1}^K N_k \left\langle \frac{\partial F(\vec{\xi}')}{\partial \theta_i}\right\rangle_k - \frac{1}{\mathcal{P}(\theta)} \frac{\partial \mathcal{P}(\theta)}{\partial \theta_i}$$ We note that if we have linear basis functions, the first term is independent of $\vec{\theta}$ and can be precomputed, as $\frac{\partial F}{\partial \theta_i}$ is simply the corresponding basis function. Additionally, each basis function has only limited support, so the integrals are relatively easy to carry out, and the calculation scales well with the number of basis functions. The $ij$ entries in the Hessian are: $$\begin{aligned} \nabla^2 S(\theta)_{ij} &=& \sum_{n=1}^N \frac{\partial^2 F(\vec{\xi}_n)}{\partial \theta_i \partial \theta_j} \nonumber \\ & & - \sum_{k=1}^{K} N_k \left[\left\langle \frac{\partial^2 F(\vec{\xi})}{\partial \theta_i\partial \theta_j}\right\rangle_k - \left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_i} \frac{\partial F(\vec{\xi})}{\partial \theta_j}\right\rangle_k \right.\nonumber \\ & & \left.+ \left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_i} \right\rangle_k \left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_j} \right\rangle_k\right] \nonumber \\ & - & \left[\frac{1}{\mathcal{P}(\theta)} \frac{\partial^2 \mathcal{P}(\theta)}{\partial \theta_i\partial \theta_j} - \frac{1}{\mathcal{P}(\theta)^2} \frac{\partial \mathcal{P}(\theta)}{\partial \theta_i}\frac{\partial \mathcal{P}(\theta)}{\partial \theta_j}\right]\end{aligned}$$ If we assume that we have a trial function that is linear in the parameters, then the terms involving second derivatives of $F$ vanish, leaving only: $$\begin{aligned} \nabla^2 S(\theta)_{ij} &=& \sum_{k=1}^{K} N_k \left[\left\langle \frac{\partial F(\vec{\xi})}{\partial \theta_i} \frac{\partial F(\vec{\xi})}{\partial \theta_j}\right\rangle_k \right.\nonumber \\ & & \left. - \left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_i} \right\rangle_k \left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_j} \right\rangle_k\right] \nonumber \\ & - & \left[\frac{1}{\mathcal{P}(\theta)} \frac{\partial^2 \mathcal{P}(\theta)}{\partial \theta_i\partial \theta_j} - \frac{1}{\mathcal{P}(\theta)^2} \frac{\partial \mathcal{P}(\theta)}{\partial \theta_i}\frac{\partial \mathcal{P}(\theta)}{\partial \theta_j}\right]\end{aligned}$$ If the trial function is linear in the parameters (again, as with splines), the Hessian will be nonzero only where basis functions have mutual support, so it is essentially banded along the diagonal and relatively inexpensive to compute. In the case of eq.
In the case of eq. \[eq:max1\], this becomes: $$\begin{aligned} \nabla S(\theta)_i = N \sum_{n=1}^N W_n(\vec{x}_n) \frac{\partial F(\vec{\xi}_n)}{\partial \theta_i} - N \left\langle \frac{\partial F(\vec{\xi}')}{\partial \theta_i}\right\rangle - \frac{1}{\mathcal{P}(\theta)} \frac{\partial \mathcal{P}(\theta)}{\partial \theta_i} \nonumber \\\end{aligned}$$ $$\begin{aligned} \nabla^2 S(\theta)_{ij} &=& N\left(\left\langle \frac{\partial F(\vec{\xi})}{\partial \theta_i} \frac{\partial F(\vec{\xi})}{\partial \theta_j}\right\rangle\right. \nonumber \\ & & - \left.\left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_i} \right\rangle \left\langle\frac{\partial F(\vec{\xi})}{\partial \theta_j} \right\rangle \right)\nonumber \\ & - & \left[\frac{1}{\mathcal{P}(\theta)} \frac{\partial^2 \mathcal{P}(\theta)}{\partial \theta_i\partial \theta_j} - \frac{1}{\mathcal{P}(\theta)^2} \frac{\partial \mathcal{P}(\theta)}{\partial \theta_i}\frac{\partial \mathcal{P}(\theta)}{\partial \theta_j}\right] \nonumber \\\end{aligned}$$ where the expectations are now over the *unbiased* state rather than any of the $K$ biased simulations.
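With the gradient and Hessian in hand, the minimization itself can be carried out with a damped Newton iteration (or handed to a library optimizer); the sketch below reuses the hypothetical helpers defined above. A small ridge is added to the Hessian because $S$ is invariant under a constant shift of $F$, which leaves one singular direction.

```python
def newton_minimise(theta0, samples_per_bias, tol=1e-8, max_iter=100):
    """Damped Newton minimization of S(theta) using the analytic gradient and Hessian."""
    theta = np.array(theta0, dtype=float)
    for _ in range(max_iter):
        g, H = grad_and_hess(theta, samples_per_bias)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(H + 1e-8 * np.eye(n_basis), g)
        t, S0 = 1.0, neg_log_likelihood(theta, samples_per_bias)
        while t > 1e-6 and neg_log_likelihood(theta - t * step, samples_per_bias) > S0:
            t *= 0.5                                   # backtracking line search
        theta -= t * step
    return theta

theta_opt = newton_minimise(np.zeros(n_basis), samples)
print("optimized spline coefficients:", np.round(theta_opt, 2))
```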
---
abstract: 'The internal structure of a composite fermion is investigated for a two-dimensional parabolic quantum dot containing three electrons. A Yukawa screened Coulomb interaction is assumed, which allows us to discuss the evolution of the electron-vortex correlations from the Coulomb interaction limit to the contact potential limit. The vortex structure approaches the Laughlin limit non-monotonically through the formation of intermediate composite fermions in which a flip of the spatial orientation of the vortices with respect to the position of the electrons is observed. Only when we limit ourselves to the lowest Landau level (LLL) approximation does the flip appear through the formation of an intermediate giant vortex at specific values of the screening length. Beyond the LLL approximation antivortices appear in the internal structure of the intermediate composite fermions, which prevents the nucleation of giant vortices. We also study a system of five electrons and show that the mechanism of the flip of the vortex orientation found for the three-electron system is reproduced for a higher number of electrons.'
author:
- 'T. Stopa'
- 'B. Szafran'
- 'M.B. Tavernier'
- 'F.M. Peeters'
title: 'Dependence of the vortex structure in quantum dots on the range of the inter-electron interaction'
---

Introduction
============

The theoretical interpretation of the fractional quantum Hall effect[@FQHE] (FQHE), observed at high magnetic field in the spin-polarized two-dimensional electron gas, is based on the properties of the Laughlin[@Laughlin] wave function. The FQHE for electrons is explained[@Jain2] in terms of the integer quantum Hall effect for composite fermions, i.e., quasi-particles consisting of electrons with an additional even number of bound vortices (or magnetic field fluxes). The vortices appear as zeros of the many-electron wave function whose phase changes by $2\pi$ on a path around the zero. The electron in a composite fermion feels a reduced effective magnetic field as the bound vortices partly cancel the usual Aharonov-Bohm phase on a closed loop around the electron.[@wstep] The original problem considered by Laughlin,[@Laughlin] i.e., the diagonalization of the few-electron eigenequation in the basis of single-electron wave functions obtained in the symmetric gauge, is formally very similar to that of an electron system confined in a parabolic quantum dot. Only very recently has wider attention been paid to the vortices in the quantum Hall regime of confined systems[@Marten; @Saarikoski1; @Saarikoski2; @Harju; @Tober1; @Tober2] and to the composite fermion theory for quantum dots.[@Yan; @JQD1; @JQD2] In particular, the vortex distribution for Coulomb-interacting electrons confined in quantum dots was investigated[@Marten; @Saarikoski1; @Saarikoski2] using the exact diagonalization technique and the reduced wave function imaging. The structure of vortices as obtained from such exact calculations differs significantly from the one assumed in the Laughlin wave function or in the composite fermion approach. It was found[@Marten; @Saarikoski1; @Saarikoski2] that the vortices are not localized on the electron as assumed in the Laughlin state but stay in the neighborhood of the electrons to which they are bound. On the other hand, Laughlin functions are the exact non-degenerate ground state wave functions for the case of short-range interactions. An analytical proof of their exactness and uniqueness was provided[@Trugman] for potentials expanded in a series of $\nabla^{2j}\delta^2({\mathbf r})$.
The energy gap allowing the Laughlin liquid to be incompressible was identified[@Haldane] as due to the short-range component of the Coulomb interaction. The purpose of the present work is to investigate how the vortex structure is modified when the inter-electron interaction is taken from the Coulomb limit to the contact potential limit. We show that the vortices approach the Laughlin liquid limit in a non-monotonic fashion. Within the lowest Landau level (LLL) approximation for filling factors $\nu<1/3$ intermediate composite fermion states are found with two additional vortices localized on the electron. Beyond the LLL approximation the internal structure of the intermediate composite fermion turns out to be very complex, with possible appearances of antivortices which can even be localized at the position of the electron. Within the LLL we find that more than two extra vortices are localized at the electron position only in the contact potential limit. In the present paper we focus our attention on the lowest number of electrons, i.e. $N=3$, for which a nontrivial[@Marten] internal composite fermion structure can be observed in the reduced wave function. Next, we verify the conclusions reached for $N=3$ by studying the vortex structure of a five-electron system. To study the dependence of the structure on the range of the inter-electron interaction we assume that the electrons interact through a Yukawa potential $$\label{yukawa} V(r)=\frac{e^2}{4\pi\epsilon_0\epsilon}\frac {\exp(-r/\alpha)} {r},$$ which in the large and small screening length ($\alpha$) limits yields the Coulomb and the contact potential, respectively. A potential of the form (\[yukawa\]) is obtained for an external Coulomb defect linearly screened by a three-dimensional electron gas.[@Ando] In fact the screening of the electron-electron interaction in electrostatic quantum dots results from charges induced on the metallic electrodes and is of a more complex form.[@Maksym] The screening of the electron-electron interaction by the image charges cuts off the long tail of the Coulomb potential. The contact potential limit then corresponds to the case of a negligible distance of the quantum dot to the metal gate in comparison to the dot's size. This paper is organized as follows: Section II presents the theory behind the results which are given in Section III. Subsection III (a) contains the results calculated in the LLL approximation, the influence of the higher LLs is described in subsection III (b), and results for five electrons are given in subsection III (c). Summary and conclusions are provided in Section IV.

Theory
======

The effective mass Hamiltonian of our system is $$\label{ham} \hat{H}=\sum_{i=1}^N\left(\frac{\left(-i\hbar\nabla_i+e{\mathbf A}(\mathbf{r}_i)\right)^2}{2m^*}+ V_\text{ext}(r_i)\right)+ \sum_{i<j}^NV(r_{ij}),$$ where $V_\text{ext}(r)=m^*\omega^2r^2/2$ is the parabolic confinement potential, and $\mathbf{A}$ is the vector potential. We adopt the GaAs effective mass $m^*=0.067m_e$ and dielectric constant $\epsilon=12.4$. All the calculations were performed for $\hbar\omega=1$ meV, for which the oscillator length equals $l_0\equiv\sqrt{\hbar/m^*\omega}=33.7$ nm.
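As a quick numerical cross-check of the quoted oscillator length (a snippet of our own, not part of the original work), $l_0=\sqrt{\hbar/m^*\omega}$ can be evaluated for $\hbar\omega=1$ meV and $m^*=0.067m_e$:

```python
import numpy as np
from scipy import constants as const

hbar_omega = 1e-3 * const.e               # 1 meV in joules
m_star = 0.067 * const.m_e                # GaAs effective mass
omega = hbar_omega / const.hbar
l0 = np.sqrt(const.hbar / (m_star * omega))
print(f"l0 = {l0 * 1e9:.1f} nm")          # ~33.7 nm, matching the value quoted above
```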
The Schrödinger equation is solved using the exact diagonalization (ED) technique[@Marten2] with the three-electron Slater determinants constructed from the single-electron Fock-Darwin orbitals.[@RM] We investigated the ground-state magnetic-field induced angular momentum and spin transitions of the three-electron system as a function of the screening length in the presence of a perpendicular magnetic field \[$(0,0,B)=\nabla\times\mathbf{A}$\]. For $\alpha\rightarrow\infty$ we exactly reproduce the results of Ref.\[[@Mikha]\] (our parameters correspond to the interaction constant $\lambda\equiv l_0/a_B=3.44$, with $a_B$ the donor Bohr radius). For finite values of $\alpha$ no interesting results are obtained: decreasing the screening length has the trivial effect of decreasing the strength of the interaction ($\lambda$); the ground-state spin-orbital symmetry sequence remains unchanged, and only the critical magnetic fields for the transitions between subsequent angular momentum states are shifted to higher values. We consider only the spin-polarized states of the magic angular momentum sequence[@RM] \[total angular momentum $L\hbar$ being a multiple of $3\hbar$\], which become ground states at high magnetic fields, after the maximum density droplet decays. The results presented below were obtained mostly within the LLL (more precisely in the lowest Fock-Darwin band[@RM] of zero radial quantum number and nonnegative angular momentum) to keep a direct correspondence to the Laughlin wave function. In the discussion of the vortices we do not apply any magnetic field to the system, without loss of generality for the wave function, since for a harmonic confinement potential the magnetic field simply rescales the electron coordinates of the wave function for a given $L$:[@Marten2] $$\Psi_{B\neq0}(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)= \Psi_{B=0}(\gamma\mathbf{r}_1,\gamma\mathbf{r}_2,\gamma\mathbf{r}_3),$$ with the scaling factor $\gamma=(1+(\omega_c/2\omega)^2)^{1/4}$, where $\omega_c=eB/m^*$ stands for the cyclotron frequency. Note that property (3) implies that if, as generally accepted, the ground states at high magnetic fields are well approximated by the LLL, the approximation is not any worse at $B=0$, where the high $L$ states correspond to high excitations. Moreover, the eigenstates of the Hamiltonian written in the basis of Slater determinants built of LLL wave functions can be exactly identified with the eigenstates of the electron-electron interaction matrix operator. They are therefore the same for any constant $\lambda$ multiplying the interaction potential \[Eq. (1)\], even if for large $\lambda$ the LLL approximation can be arbitrarily bad.[@mikha2] In the calculations we consider screening lengths $\alpha\geq0.1$ nm. The delta-like interaction potential obtained for $\alpha\rightarrow 0$ does not influence the energies or wave functions for a spin-polarized system because of the Pauli exclusion principle. Consequently, for $\alpha=0$ one obtains a multifold degenerate non-interacting ground state. Since in the diagonalization these states (each with a very different vortex structure) mix stochastically, one cannot carry on the discussion of vortices for a screening length equal to zero. As a matter of fact, there is actually no need to take $\alpha$ strictly zero, since then the Laughlin function, as well as any other wave function constructed within the LLL, obviously corresponds to the degenerate ground state.
A general form of the three-electron wave function in the LLL approximation is:[@uwaga] $$\label{general} \Psi(z_1,z_2,z_3)=\sum_{j}\eta_j {\text{\em A}}z_{1}^{j_{1}}z_{2}^{j_{2}}z_{3}^{j_{3}} \exp\left(-\frac{1}{2}\sum_{k=1}^3\frac{|z_k|^2}{l_0^2}\right)$$ where [*A*]{} stands for the antisymmetrizer, $z_k\equiv x_k+iy_k$ denotes the complex, two-dimensional position of the $k$-th particle, $\eta_j$ are the linear variational parameters, ${j_{1}}$, $j_{2}$, $j_{3}$ are nonnegative integers, no two of which are identical, and $j_1+j_2+j_3=L$. The Laughlin wave function[@Laughlin] for the angular momentum $L=3m$ (for odd $m$) is a product of the Jastrow factor and a Gaussian $$\label{laughlin} \Phi_{1/m}(z_1,z_2,z_3)=\prod_{k<l}(z_{k}-z_{l})^m\exp\left(-\frac{1}{2}\sum_{n=1}^3\frac{|z_n|^2}{l_0^2}\right),$$ which is a special form of the general formula (\[general\]). In the Laughlin function the filling factor $\nu=1/m$ is directly related to the number of zeros $m$ localized on each electron as well as to the angular momentum (for $N=3$ electrons one has $\nu=3/L$). Note that not all the states of the magic angular momentum sequence can be represented by the Laughlin function; only those of odd $L$ can. We investigate the zeros of the reduced wave function,[@Marten; @Saarikoski1; @Saarikoski2] constructed by fixing the coordinates of two electrons $z_1$ and $z_2$ $$\psi_{z_1,z_2}(z)=\Psi(z_1,z_2,z),$$ where $z$ is the test electron coordinate. The reduced Laughlin wave function is a complex polynomial of the test electron position ($z$) of degree $2m=\frac{2}{3}L$, multiplied by a Gaussian. On the other hand, for a general LLL state (\[general\]) the reduced wave function is a complex polynomial of degree $L-1$, resulting in more zeros than occur in the Laughlin wave function. The additional zeros, commonly attributed to vortices bound to the test electron, are not localized close to the pinned electron positions. Since one extra zero has to be attributed to the test electron itself, one obtains a total number of $L$ vortices \[for a general number of $N$ electrons the number of vortices equals $N/\nu=2L/(N-1)$\]. When higher Landau levels are included the reduced wave function depends also on the complex conjugate of the particle positions and larger exponent values in the polynomial appear, which increases the number of zeros and allows antivortices to appear.[@wstep; @Marten]

Results
=======

Lowest Landau level approximation
---------------------------------

Fig. 1 shows the position of zeros of the LLL reduced wave functions for two of the electrons pinned at the locations $(\pm l_0, 0)$ for states with $L=9, 12, 15, 18$ and $21$. In the presented range of $x$ we focus only on the zeros located near the pinned electrons. In the Coulomb limit all the vortices are placed on the $x$ axis.[@Marten] For $L=9$ \[Fig. 1(a)\], as the screening length decreases, the two vortices bound to the electron approach its position, always staying on the $x$ axis. For $\alpha=2$ nm ($\alpha=0.06 l_0$) the bound vortices are localized exactly at the electron position, forming a giant vortex characteristic of the Laughlin wave function. For $L=12$ \[see the black lines in Fig. 1(b)\] the giant vortex at the electron position is formed earlier, i.e., for $\alpha=0.44l_0$. However, for smaller screening lengths the two extra vortices leave the electron position (and the $x$ axis), passing to the $x=-l_0$ line \[see the inset in Fig. 1 (b)\].
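To make the counting of zeros concrete, the following toy sketch (our own illustration, not the authors' code) evaluates the reduced Laughlin wave function for $N=3$, $m=3$ with the two electrons pinned at $(\pm l_0,0)$, and measures the phase winding of the wave function around chosen points; an $m$-fold winding at a pinned electron corresponds to the giant vortex of the Laughlin limit.

```python
import numpy as np

l0 = 1.0                        # oscillator length (all lengths in units of l0)
m = 3                           # Laughlin exponent, L = 3m = 9
z1, z2 = -1.0 + 0j, 1.0 + 0j    # electrons pinned at (-l0, 0) and (+l0, 0)

def reduced_laughlin(z):
    """Reduced Laughlin wave function psi_{z1,z2}(z): the Laughlin state with two coordinates fixed."""
    jastrow = (z1 - z2) ** m * (z1 - z) ** m * (z2 - z) ** m
    gauss = np.exp(-(abs(z1) ** 2 + abs(z2) ** 2 + np.abs(z) ** 2) / (2.0 * l0 ** 2))
    return jastrow * gauss

def winding_number(zc, radius=0.05, npts=400):
    """Phase winding of psi around a small circle centred at zc, in units of 2*pi."""
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    vals = reduced_laughlin(zc + radius * np.exp(1j * theta))
    dphi = np.angle(vals[np.r_[1:npts, 0]] / vals)     # principal-value phase increments
    return int(round(dphi.sum() / (2.0 * np.pi)))

# Each pinned electron carries an m-fold zero, while a generic point carries no winding.
for zc in (z1, z2, 0.5 + 0.5j):
    print(f"winding around z = {zc}: {winding_number(zc)}")
```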
For still smaller $\alpha$ the vortices return to the electron position. A similar behavior is found for larger $L$. For states with $L>12$, there are more than $2$ extra vortices bound to each electron, and pairs of them collapse onto the electron positions at specific, $L$-dependent screening length values. Decreasing $\alpha$ beyond this value flips them to the $x=\pm l_0$ lines, where they again approach the electron positions in the $\alpha=0$ limit. Note that the formation of the intermediate giant vortices is observed also for non-Laughlin states (even $L$) and that all these intermediate giant vortices have winding number three. For non-Laughlin states the number of zeros of the reduced wave function ($L-1$) is odd; therefore a single vortex resides at the $(0,0)$ position in order not to break the symmetry. The position of this vortex for $L=12$ and $L=18$ is marked by the vertical line just to the left of the tick marks on the right-hand side. We see that for even $L$, i.e., the non-Laughlin states, the number of vortices bound to the electrons is the same as in the closest Laughlin state with lower angular momentum ($L-3$). The presented results are quite general in the sense that for $N=3$ the vortex structure does not depend on the specific choice of the positions of the two fixed electrons in Eq. (6) but scales linearly as a function of the distance between the fixed electron positions, whether or not they are placed symmetrically with respect to the origin. This is demonstrated in Fig. 2, which shows a contour plot of the logarithm of the absolute value of the reduced wave function calculated in the LLL approximation for $L=15$ and the screening length $\alpha=l_0/2$. In Fig. 2(a) the two electrons are fixed at $(\pm l_0,0)$ as in Fig. 1(c). For $\alpha=l_0/2$ two of the bound vortices are localized perpendicular to the line between the electrons, whose positions are marked with the blue arrows \[cf. Fig. 1(c)\]. In Fig. 2(b) the fixed electron coordinates were scaled down 10 times with respect to Fig. 2(a), and in Fig. 2(c) the electrons were shifted to the left by $l_0/2$. The scalability of the vortex structure is evident from the form of the LLL wave function (4). If one decreases all the distances by a factor $\gamma$, one can take a factor $\gamma^L$ out of the polynomial part in front of the sum (since $j_{1}+j_{2}+j_{3}=L$), i.e., out of the part responsible for the appearance of the vortices. The invariance of the vortex structure with respect to the shift of the fixed electron positions is not evident from Eq. (4). In fact, the vortex structure of each of the Slater determinants is [*not*]{} invariant with respect to the shifts, and the invariance is only obtained in the entire basis containing all the LLL determinants. Since we are dealing with the harmonic oscillator potential, the exact wave function is separable into a product of the center of mass and relative motion wave functions $\Psi=F_{CM}(z_{cm})G_{rel}(z_1-z_2,z_1-z_3,z_2-z_3)$. The vortices are entirely due to the relative part. From the separable form it is clear that the vortices shift with the fixed electron positions. It is quite remarkable that this feature of the exact solution is reproduced in the LLL approximation. For a general $N$ the vortex structure remains invariant with respect to the size, position and orientation of the polygon formed by the $N-1$ fixed electrons as long as its shape is preserved. One cannot change the shape of the line segment linking the two fixed electrons for $N=3$.
But for $N=4$ different vortex structures are obtained when the shape of the triangle formed by the fixed electrons is varied.[@Marten] Fig. 2 also shows that only the vortex structure, and [*not*]{} the reduced wave function, scales with the positions of the fixed electrons. The nonscalability of the wave function results from the center of mass component of the wave function \[or the Gaussian in Eq. (4)\]. The dashed lines in the upper part of Fig. 3 show the positions of the two remaining vortices, which did not fit into Fig. 1(a) for $L=9$. These vortices are not bound to the electrons whose positions are pinned but belong to the test electron and disappear to infinity as the screening constant is decreased to zero. This behavior is expected since the number of vortices in the Coulomb problem is larger than in the limit of the Laughlin liquid (see the end of Section II). The red full curves close to the lower horizontal axis show the modulus of the corresponding reduced Laughlin wave function for $y=0$ with the electron positions fixed at $(\pm l_0,0)$. We see that the disappearing test-electron vortices are always localized beyond the region where the reduced Laughlin wave function is localized. This is, however, not always the case for other choices of the pinned positions. The black solid lines in Fig. 3 show the positions of the vortices for the two electrons fixed at $(-l_0,0)$ and $(-l_0/2,0)$. The outermost vortices are localized closer to the electrons. We see that in this case, for decreasing screening lengths the vortices pass through the region in which the reduced Laughlin function (plotted in blue in Fig. 3) takes large values. In order to get an idea of how well the electron-vortex correlations are described by the Laughlin wave function, we project the reduced optimal wave function obtained within the LLL approximation onto the one corresponding to the Laughlin many-particle wave function $$S_{z_1,z_2}=\frac{<\psi_{z_1,z_2}|\psi^L_{z_1,z_2}>}{\sqrt{<\psi_{z_1,z_2}|\psi_{z_1,z_2}><\psi^L_{z_1,z_2}|\psi^L_{z_1,z_2}>}},$$ in which the positions of the vortices as well as of the pinned electrons can be seen ($\psi^L$ denotes the reduced Laughlin wave function). The overlaps calculated between ED wave functions of states with odd angular momentum and the corresponding Laughlin wave functions are shown in Fig. \[over\]. In Fig. \[over\] (a) the two pinned electrons were placed at $(\pm l_0,0)$. Vortex positions for this case are shown in Figs. \[rys1\](a), 1(c), and 1(e). The overlap values increase monotonically for all the three Laughlin states with decreasing $\alpha$; in the $\alpha\rightarrow 0$ limit they all reach unity. Note also that the higher the angular momentum, the smaller the overlap. In Fig. \[rys1\] we observe that for these three states the distance between the outermost bound vortex and the electron increases with $L$, which is the reason why for larger $L$ the overlaps are smaller. Moreover, there are regions for $L=15$ and $L=21$ where vortices increase their distance from the electron with decreasing $\alpha$, which is not reflected in the overlap plot; the overlap is apparently more strongly determined by the decreasing distance of the electron from the outermost vortex. More interesting behavior is observed when the electrons are pinned closer to each other. We placed them at $(-l_0,0)$ and $(-l_0/2,0)$, and the resulting overlaps are shown in Fig. \[over\] (b). For all three states there is a much sharper minimum as a function of $\alpha$.
This is due to the external vortex passing through the region in which the reduced wave function is large, as discussed in the context of Fig. \[l9\]. The minimal value of the overlap, which is almost zero, occurs exactly when the vortex position coincides with the Laughlin wave function maximum. The distinctly different dependence of the overlaps for the two choices of fixed-electron positions is due to the fact that the reduced wave function is not scalable with the interelectron distances. The displacement of the vortex towards infinity for $\alpha\rightarrow0$ and its effect on the reduced LLL wave function are illustrated in the contour plots of Fig. \[wave1\] for $L=15$ with the fixed electron positions $(-l_0,0)$ and $(-l_0/2,0)$. The position of the vortex is marked by a $\star$ in Figs. 5(a) and 5(c). In Fig. 5(b), where $\alpha$ corresponds to the overlap minimum \[cf. Fig. 4(b)\], the vortex is visible near $x=3l_0$, where it digs a hole in the wave function. Moreover, when the vortex is at the position of the Laughlin wave function maximum, it splits the LLL wave function into two almost equal parts with opposite signs (see Fig. \[wave2\]). This makes the Laughlin function almost orthogonal to the ED wave function. When the vortex of the test electron passes beyond the maximum of the wave functions, the overlap starts to increase, reaching unity for all the states, but this occurs earlier for smaller values of $L$. Another relevant quantity to be discussed is the pair correlation function (PCF), defined as $$W(z_a,z_b)=<\Psi|\sum_{i\neq j}\delta(z_i-z_a)\delta(z_j-z_b)|\Psi>.$$ The PCF calculated for the Laughlin function (\[laughlin\]) gives (up to a normalization constant) $$\label{w2} \begin{array}{lll} W_L(z_a,z_b)=&|z_a-z_b|^{2m}\exp[-(|z_a|^2+|z_b|^2)/l_0^2]\\ &\\ &\times \int dz\,|z_a-z|^{2m}|z_b-z|^{2m}\exp(-|z|^2/l_0^2). \end{array}$$ Therefore, at small interelectron distances ($z_a\rightarrow z_b$) the pair correlation function will asymptotically behave as $W_L(z_a,z_b)\sim |z_a-z_b|^{2m}$. We consider the PCFs with one particle fixed at $|z_a|=1.5l_0$ as well as at the origin $z_a$=0. In the case of $|z_a|=1.5l_0$ we calculated the PCF for the other electron at the same distance from the origin (i.e., $|z_b|=1.5l_0$) along an arc of length $0.6l_0$ away from $z_a$. Then, we fitted the results to a function of the form $f(|z_a-z_b|)=a|z_a-z_b|^{\kappa}$. For the other considered pinned position ($z_a=0$) we repeated this procedure, moving from the origin along a straight line of length $0.2l_0$. The obtained results are shown in Figs. \[pcff\](a) and (b). We found that the value of the exponent depends on the fixed electron position ($z_a$), which is not the case for the Laughlin wave function. For $L=9$ \[blue curves in Fig. \[pcff\](a)\] the fitted $\kappa$ value approaches 6 – the Laughlin limit – monotonically with decreasing $\alpha$. For $L=12$, i.e., a non-Laughlin state, the value of 6 is also obtained in the $\alpha=0$ limit. In this case three vortices become localized at the electron position \[see Fig. 1(b)\], as for $L=9$. We also notice that the $\kappa$ values fitted for the two fixed electron positions \[black solid and black dashed lines in Fig. \[pcff\](a)\] are both equal to 6 around $\alpha=0.45l_0$, i.e., when the intermediate giant vortex is formed \[see Fig. 1(b)\]. The intermediate giant vortex is therefore associated with the appearance of a position-independent $\kappa$, which is characteristic of the Laughlin wave function.
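The power-law fit described above is easy to reproduce with a log-log regression; the sketch below (again our own illustration, reusing the reduced Laughlin helper defined earlier) recovers the exponent $\kappa=2m$ of the Laughlin limit from $|\psi_{z_1,z_2}(z)|^2$ sampled along a short segment starting at a pinned electron, as a simplified stand-in for the PCF fit of the text.

```python
# Fit W ~ a * |z_a - z_b|**kappa from values sampled close to the pinned electron z1,
# using |psi_{z1,z2}|^2 of the reduced Laughlin state as a stand-in for the PCF.
separations = np.linspace(0.005, 0.05, 30) * l0
values = np.abs(reduced_laughlin(z1 + separations)) ** 2
kappa, ln_a = np.polyfit(np.log(separations), np.log(values), 1)
print(f"fitted kappa = {kappa:.2f} (Laughlin value 2m = {2 * m})")
```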
Note that the $\kappa$ value calculated for $z_a=0$ between the intermediate ($\alpha=0.45 l_0$) and the final ($\alpha=0$) giant vortices possesses a local minimum. This minimum is related to those vortices which initially move away from the electron for $\alpha$ below the occurrence of the intermediate giant vortex \[see the dashed lines in Fig. 1(b)\]. However, for the $\kappa$ exponent calculated with $|z_a|=1.5l_0$ \[black dashed line in Fig. \[pcff\](a)\] a maximum is observed below $\alpha=0.45l_0$. For $L=15$ the intermediate giant vortex is formed around $\alpha=l_0$ \[Fig. 1(c)\]. The $\kappa$ values fitted for the two pinned electron positions approach one another near $\alpha=l_0$ \[see the solid and dashed red curves in Fig. \[pcff\](a)\], but the $\kappa$ value for $|z_a|=1.5l_0$ is larger than 6. As before, a more direct correspondence between the vortex positions and the fitted $\kappa$ value is obtained for the electron pinned at the origin. The loop that the two vortices perform in the $(\alpha,y)$ plane when the intermediate giant vortex decays into single vortices \[see Fig. 1(c)\] has no effect on the $\kappa$ value for $|z_a|=1.5l_0$. On the other hand, the loop is translated into a minimum of $\kappa$ as calculated for $z_a=0$ \[see the red solid line in Fig. \[pcff\](a)\]. Both $\kappa$ values tend to the value of the Laughlin function, i.e., to 10, in the $\alpha=0$ limit. This limit is achieved by the $\kappa$ values calculated for $L=18$ as well, since here also five vortices are found at the electron position in the contact potential limit \[see Fig. 1(d)\]. Again, for $L=18$ the $\kappa$ value calculated for $z_a=0$ is more sensitive to the actual vortex behavior. A local maximum (slightly above 6) is obtained \[dashed curve in Fig. \[pcff\] (b)\] when the first intermediate giant vortex is formed \[$\alpha\simeq 1.6l_0$, see Fig. 1(d)\]. Another maximum is observed for the second intermediate giant vortex ($\alpha\simeq 0.37l_0$). The value is now considerably larger than 6, which can be explained by the presence of the other vortices localized in close proximity to the pinned electron. The third intermediate giant vortex near $0.1 l_0$ gives a plateau near $\kappa=8.5$, which then shoots up to 10 when the final giant vortex is formed. For $L=21$ we observe again a local maximum in $\kappa$ calculated for $z_a=0$ at $\alpha=0.7l_0$ – an intermediate giant vortex position \[see Fig. 1(e)\]. For $L=21$ we actually do not observe the final giant vortex in the $\alpha\rightarrow 0$ limit, due to the problem of degeneracy of the ground state as explained in Section II. However, the presented $\kappa$ values and the vortex positions for small $\alpha$, for which the ground state is still nondegenerate, clearly indicate the giant-vortex Laughlin asymptotics with seven vortices at the position of the electron. The close correspondence found between the PCF calculated for $z_a=0$ and the vortex behavior is quite remarkable. For $L$ above the value for the maximum density droplet, the charge density develops a minimum at the center of the quantum dot and the depth of the minimum increases with $L$. Moreover, by fixing the position of one of the electrons at the origin one includes only those Slater determinants in which the zero angular momentum Fock-Darwin state appears. The angular momenta of the two remaining orbitals must sum up to $l_a+l_b=L$ (let $l_a<l_b$).
From the asymptotic behavior of the single-electron orbitals at the origin \[$z^l$\] one should expect that the obtained $\kappa$ value is related to the lowest of all $l_a$. Due to the applied fitting procedure one actually obtains a $\kappa\geq l_a$. For instance, the value of 6 obtained in the Laughlin limit for $L=9$ indicates that the Slater determinants corresponding to angular momenta (0,1,8) and (0,2,7) do not contribute to the wave function, while (0,3,6) does. Expanding the Jastrow factor, it is straightforward to check that the Laughlin wave function contains admixtures of the (0,3,6) and (0,4,5) basis functions, but not of the (0,1,8) or the (0,2,7) determinants.

  ----------------- ------ ------------ ----------- ----------- ---------------- ---------
   $E_{ni}$\[meV\]    $K$   $E$\[meV\]   $x_l/l_0$   $x_r/l_0$   $\alpha^*/l_0$   $c/l_0$
                15     12     15.35272      -1.206      -0.813            0.440         -
                17     61     15.33831      -1.246      -0.779            0.450    0.0042
                19    173     15.33754      -1.243      -0.779            0.421    0.0073
                21    392     15.33732      -1.238      -0.783            0.415     0.011
                23    761     15.33722      -1.231      -0.786            0.411     0.014
                25   1346     15.33719      -1.226      -0.789            0.409     0.016
                27   2213     15.33717      -1.222      -0.792            0.410     0.017
                29   3453     15.33717      -1.220      -0.794            0.413     0.017
                31   5158     15.33716      -1.223      -0.795            0.416     0.016
  ----------------- ------ ------------ ----------- ----------- ---------------- ---------

  : Convergence of the results for $L=12$ beyond the LLL approximation. $E_{ni}$ is the maximum energy of the noninteracting Slater determinants used for the construction of the basis set ($B=0$). The second column lists the number of basis elements ($K$). $E$ is the energy estimate for $\alpha=1.48l_0$. $x_l$ and $x_r$ are the positions of the bound vortices to the left and right of the electron fixed at the point $(-l_0, 0)$ as in Fig. 1(b) for $\alpha=1.48 l_0$. $\alpha^*$ is the screening length for which the distance between the vortices aligned in the horizontal and vertical directions is the same \[see Fig. 8 (c)\]. This distance ($c$) is listed in the last column. The first row of the Table gives the results of the LLL approximation. The value of $\alpha^*$ in the first row corresponds to the giant vortex.

Beyond the lowest Landau level approximation
--------------------------------------------

In order to verify the calculated vortex structure in the neighborhood of the fixed electron we have performed exact calculations with a basis including higher Landau levels for $L=12$. The basis was constructed in the following way. From all the Slater determinants built of the non-interacting Fock-Darwin states we picked only those for which the energy at $B=0$ (see the discussion of the wave function scalability with the magnetic field given in Section II) does not exceed a fixed energy value $E_{ni}$. The number of basis elements $K$ as a function of $E_{ni}$ is listed in Table I together with the energy estimates obtained for an interacting system at $\alpha=1.48 l_0$. The first row of the Table corresponds to the LLL approximation. We obtain convergence of the energy estimate up to six significant digits. The fourth and fifth columns of the Table give the positions of the vortices attached to the electron localized at the point $(-l_0,0)$, with the second electron pinned at $(l_0,0)$ as in Fig. 1. The convergence of the position of the vortices is slower than that of the energy. Beyond the LLL approximation, for $\alpha=1.48 l_0$ and for $\alpha$ up to the Coulomb limit, the distances between the electrons and the vortices are slightly larger than in the LLL approximation.
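The basis-construction rule just described is simple enough to reproduce; the sketch below (our own check, under the assumption that the $B=0$ Fock-Darwin energies are $E_{n,l}=\hbar\omega(2n+|l|+1)$ with $\hbar\omega=1$ meV) counts the spin-polarized three-electron Slater determinants with total angular momentum $L=12$ whose non-interacting energy does not exceed $E_{ni}$, and recovers the $K$ column of Table I.

```python
from itertools import combinations

hw = 1.0            # hbar*omega in meV
L_target = 12       # total angular momentum of the state of Table I

def count_basis(E_ni, n_max=10, l_max=30):
    """Brute-force count of 3-electron determinants of B = 0 Fock-Darwin orbitals (n, l)
    with sum(l) = L_target and total non-interacting energy <= E_ni (in meV)."""
    orbitals = [(n, l) for n in range(n_max + 1) for l in range(-l_max, l_max + 1)
                if hw * (2 * n + abs(l) + 1) <= E_ni]
    count = 0
    for det in combinations(orbitals, 3):          # distinct orbitals (spin-polarized system)
        if sum(l for _, l in det) != L_target:
            continue
        if sum(hw * (2 * n + abs(l) + 1) for n, l in det) <= E_ni:
            count += 1
    return count

for E_ni in (15, 17, 19, 21):
    print(E_ni, count_basis(E_ni))   # Table I lists K = 12, 61, 173, 392
```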
The positions of the vortices obtained with the most precise calculations are shown by the blue curves in Fig. 1(b). Beyond the LLL approximation the wave function is nonanalytic and the exact number of nodes in the whole complex plane is not known a priori. However, we have found that within the range plotted in Fig. 1(b) the extra nodes in the exact calculations appear only within the region where the LLL predicts the formation of the intermediate giant vortex. Fig. \[av\] shows the contour plots of the logarithm of the absolute value of the reduced wave function when the two electrons are pinned at $(\pm l_0,0)$, for the range of $\alpha$ in which the bound vortices flip their positions from the $x$ axis to the $x=\pm l_0$ lines. Instead of the formation of the intermediate giant vortex, a state consisting of single separate vortices is formed. When the vortices approach the electron along the $x$ axis \[Fig. \[av\](a)\], the node of the wave function associated with the electron elongates in the perpendicular direction and finally splits into an antivortex localized at the electron position and two vortices localized on the $x=\pm l_0$ lines \[Fig. \[av\](b)\], placed symmetrically with respect to the electron position. For a certain screening length $\alpha=\alpha^*$ the distances between the vortices localized on the $x$ axis and those localized on the $x=\pm l_0$ lines are equal \[Fig. \[av\](c)\]. With decreasing $\alpha$ the vortices localized on the $x$ axis approach the pinned electron \[Fig. \[av\](d)\] and annihilate with the antivortex localized therein. Eventually, we are left with a single vortex at the electron position and two vortices localized on the $x=\pm l_0$ lines \[Fig. \[av\](e)\], as in the LLL for $\alpha$ values between the intermediate and the final giant vortices. This mechanism of the flip of the orientation of the vortices is found for all the wave functions calculated beyond the LLL with the basis constructed according to the strategy explained above. Values of $\alpha^*$ are listed in Table I. The corresponding distances between the pairs of vortices ($c$) are given in the last column of the Table. The distance $c$ initially increases with the size of the variational basis and finally saturates near $0.016 l_0$. Fig. 9 presents a zoom of Fig. 1(b) for the range of $\alpha$ corresponding to the intermediate and final giant vortices. The blue curves are for the exact calculations. After the flip of the vortex orientation the results of the LLL and the exact calculations are nearly equal. The contribution of the higher LLs becomes negligible when the electron-electron interaction is switched off.

Five electrons
--------------

The mechanism presented above for the flip of the vortex orientation is reproduced for a higher number of electrons. To illustrate this we focused on the five-electron system at $L=35$, i.e., a non-Laughlin state corresponding to a ground state of the magic angular momentum sequence with filling factor $\nu<1/3$. This state is the counterpart of the $L=12$ state for three electrons discussed in the context of Fig. 1(b). The calculations were performed in the LLL approximation. The plots of the logarithm of the absolute value of the reduced wave function are given in Fig. 10 for four electrons fixed at the corners of a square $(\pm l_0,\pm l_0)$. Fig. 10(a) shows the case of the Coulomb potential, and Figs. 10(b-d) the case of the screened Coulomb interaction for $\alpha=0.0889l_0$, $0.0643 l_0$ and $0.0222l_0$. In Figs.
10(b-d) we present the vortices near the electron localized at $(l_0,l_0)$. The vortices attached to the fixed electrons approach them along the diagonals of the square and form a giant vortex for $\alpha=0.0643 l_0$ \[see Fig. 10(c)\]. For smaller values of $\alpha$ the line along which the attached vortices are aligned is rotated by $90^\circ$ with respect to Fig. 10(b) and is now perpendicular to the corresponding diagonal of the square \[see Fig. 10(d)\].

Summary and Conclusions
=======================

We have investigated the dependence of the vortex structure of a three-electron quantum dot on the range of the inter-electron potential. The Yukawa interaction potential can be changed continuously from the Coulomb limit to the contact potential (Laughlin) limit. The evolution towards the Laughlin liquid appears through the formation of intermediate three-fold giant vortices at which the vortices flip their orientation with respect to the electron to which they are bound. In our discussion we relied on the reduced wave function where two electrons are pinned, and found that the screening lengths for which the giant vortices are formed do not depend on the choice of the positions of the pinned electrons. Hence, for $N=3$ the giant vortices can only be created by manipulating the screening length and not the positions of the fixed electrons. For $N>3$ electrons the exact vortex structure in the reduced wave function depends on the shape of the polygon formed by the $N-1$ fixed electrons. But the binding of the vortices to the fixed electrons for large $L$ is independent of the exact locations of the electrons. It is the angular position of the bound vortices which is altered when we move the other fixed electrons. Nevertheless, for $N>3$ we find that the evolution to the Laughlin limit is also non-monotonic and is accompanied by flips of the vortex orientation and the formation of the intermediate composite fermion states. We found that the LLL approximation predicts the vortex positions quite accurately in the whole range of the screening length except for $\alpha$ values where the vortices closely approach the fixed electrons. For a certain value of the screening length we observe a flip of the vortex orientation. In general we found that this flip can be realized in four different ways: by symmetry breaking, discontinuously, through a giant vortex, or by the formation of antivortices. In the LLL approximation giant vortices (similar to the ones assumed in the Laughlin state) are observed at the orientation flip, even though vortices are expected to exhibit a repulsive behavior at close distances.[@wstep] In the LLL approximation an antivortex cannot appear because the number of zeros of the reduced wave function is fixed. When higher Landau levels are included extra vortices and an antivortex appear and annihilate, preventing the formation of the giant vortex. The presented study of the pair-correlation function shows that the precise positions of the vortices with respect to the electrons are important for the physics of electron-electron correlations. The number of bound vortices in the close neighborhood of the electron is translated into an asymptotic power-law form for the pair correlation function around the pinned electron position. For the giant vortices, i.e., for the intermediate composite vortex states, the electron-electron correlations acquire properties similar to the ones described by the Laughlin wave function.
[**Acknowledgments**]{} This work was supported by the Flemish Science Foundation (FWO-Vl) and the Belgian Science Policy. T.S. was supported by the Marie Curie training project HPMT-CT-2001-00394 and B.S. by the EC Marie Curie IEF project MEIF-CT-2004-500157. [00]{} D.C. Tsui, H.L. Störmer, and A.C. Gossard, Phys. Rev. Lett. [**48**]{}, 1559 (1982). R.B. Laughlin, Phys. Rev. Lett. [**50**]{}, 1395 (1983). J.K. Jain, Phys. Rev. Lett. [**63**]{}, 199 (1989). K.L. Graham, S.S. Mandal, and J.K. Jain, Phys. Rev. B [**67**]{}, 235302 (2003). M.B. Tavernier, E. Anisimovas, and F.M. Peeters, Phys. Rev. B [**70**]{}, 155321 (2004). H. Saarikoski, A. Harju, M. J. Puska, and R. M. Nieminen, Phys. Rev. Lett. [**93**]{}, 116802 (2004). H. Saarikoski, S.M. Reimann, E. Räsänen, A. Harju, and M. J. Puska, Phys. Rev. B [**71**]{}, 035421 (2005). A. Harju, S. Siljamäki, and R. M. Nieminen, Phys. Rev. Lett. [**88**]{}, 226804 (2002). M. Toreblad, M. Borgh, M. Koskinen, M. Manninen, and S. M. Reimann, Phys. Rev. Lett. [**93**]{}, 090407 (2004). M. Manninen, S. M. Reimann, M. Koskinen, Y. Yu, and M. Toreblad, Phys. Rev. Lett. [**94**]{}, 106405 (2005). C. Yannouleas and U. Landman, Phys. Rev. B [**68**]{}, 035326 (2003). G.S. Jeon, C.C. Chang, and J.K. Jain, J. Phys.: Condens. Matter [**16**]{}, L271 (2004). C.C. Chang, G.S. Jeon, and J.K. Jain, Phys. Rev. Lett. [**94**]{}, 016809 (2005). S.A. Trugman and S. Kivelson, Phys. Rev. B [**31**]{}, 5280 (1985). F.D.M. Haldane and E.H. Rezayi, Phys. Rev. Lett. [**54**]{}, 237 (1985). T. Ando, A. B. Fowler, and F. Stern, Rev. Mod. Phys. [**54**]{}, 437 (1982). N. A. Bruce and P. A. Maksym, Phys. Rev. B [**61**]{}, 4718 (2000). M.B. Tavernier, E. Anisimovas, F.M. Peeters, B. Szafran, J. Adamowski, and S. Bednarek, Phys. Rev. B [**68**]{}, 205305 (2003). S.M. Reimann and M. Manninen, Rev. Mod. Phys. [**74**]{}, 1283 (2003). S.A. Mikhailov and N.A. Savostianova, Phys. Rev. B [**66**]{}, 033307 (2002). For $B\neq0$ the oscillator length $l_0$ should be replaced by $l=\sqrt{\hbar/m^*\omega_e}$, with the effective frequency $\omega_e^2=\omega^2+\omega_c^2/4$. S.A. Mikhailov, Phys. Rev. B [**65**]{}, 115312 (2002).
---
abstract: 'Carrier aggregation, which allows users to aggregate several component carriers to obtain up to 100 MHz of bandwidth, is one of the central features envisioned for next generation cellular networks. While this feature will enable support for higher data rates and improve quality of service, it may also be employed as an effective interference mitigation technique, especially in multi-tier heterogeneous networks. Bearing in mind that the aggregated component carriers may belong to different frequency bands and, hence, have varying propagation profiles, we argue that it is not necessary, indeed even harmful, to transmit at maximum power on all carriers, at all times. Rather, by using game theory, we design a distributed algorithm that lets eNodeBs and micro base stations dynamically adjust the downlink transmit power for the different component carriers. We compare our scheme to different power strategies combined with popular interference mitigation techniques, in a typical large-scale scenario, and show that our solution significantly outperforms the other strategies in terms of global network utility, power consumption and user throughput.'
author:
-
-
title: |
    Downlink Transmit Power Setting\
    in LTE HetNets with Carrier Aggregation
---

\[sec:intro\]Introduction
=========================

The exponential increase in mobile data traffic in recent years has become a serious challenge for today’s cellular communication networks. To tackle this challenge, one of the strategies foreseen in the LTE-Advanced (LTE-A) specifications, among others, is the deployment of Heterogeneous Networks (HetNets). HetNets are seen as a potential cost-efficient approach to effectively meet the challenge, by introducing smaller cells, i.e., micro, pico and femtocells, nested within the traditional macrocell. This approach promises to improve both the capacity and the coverage of current cellular networks. However, it also introduces several technical challenges, the most prominent being the interference between different architectural layers sharing the same spectrum resources. Carrier aggregation is another expected feature of future networks, which aims at guaranteeing higher data rates for end users so as to meet the IMT-Advanced requirements. It enables the concurrent use of several LTE component carriers with, potentially, different bandwidths and belonging to different frequency bands. Downlink transmissions over each carrier will occur at maximum output power and each carrier will have an independent power budget [@3gpp-trca]. Thus, different component carriers may have very different coverage areas and impact in terms of interference, due to both their different transmit power level and propagation characteristics. Currently, three main approaches have been proposed to mitigate downlink interference in HetNets: per-tier assignment of carriers, Enhanced Inter Cell Interference Coordination (eICIC), which has been adopted in LTE-A systems, and downlink power control. Per-tier assignment of carriers simply implies that in HetNets with carrier aggregation support, each tier should be assigned a different component carrier so as to nullify inter-tier interference [@lp-abs]. eICIC includes techniques such as Cell Range Expansion to incentivise users to associate with micro base stations (BSs), and Almost Blank Subframes (ABS), i.e., subframes during which macro BSs mute their transmissions to alleviate the interference caused to microcells.
Algorithms to optimise biasing coefficients and ABS patterns in LTE HetNets have been studied in, e.g., [@eicic-alg]; however, they do not address carrier aggregation. Also, modifications to the eICIC techniques that allow macro BSs to transmit at reduced power during ABS subframes have been proposed in [@lp-abs]. In this paper we do not consider a solution within the framework of eICIC or its modifications, rather we use them as comparison benchmarks for the solutions we propose. We adopt instead the third approach, which consists in properly setting the downlink transmit power of BSs so as to avoid interference between different tiers. Indeed, macro BSs transmitting blindly at high power ensures large coverage and an acceptable level of service for all users under coverage, but it can also create significant harmful interference to microcell users. Interesting schemes adopting a similar approach have been proposed in [@coalitions_overlap; @hierarchical-competition; @discrete-eeff], which, however, did not consider carrier aggregation. In this paper we address the problem of downlink power setting in LTE HetNets with carrier aggregation support, when all BSs share the available radio resources. Carrier aggregation allows all carrier aggregation enabled users in the network to receive concurrently on two or more component carriers while they are under their coverage areas. The coverage area of each component carrier is determined by the carrier’s propagation characteristics, as well as its transmit power, therefore it is possible that some users may be under the coverage of one carrier and not others. We propose to leverage this diversity in the component carrier coverage areas to mitigate inter-tier interference in HetNets. In addition, by varying the carrier transmit power to alter their coverage, we enable a wide range of network configurations which reduce power consumption, provide high throughput and ensure a high level of coverage to network users. This type of configuration has also been envisioned by 3GPP [@3gpp-ca], however, unlike the current specifications, we aim at reaching such solutions dynamically and in response to real traffic demand. As envisioned in LTE-A systems, we consider that each component carrier at each BS has an independent power budget, and that BSs can choose the transmit power on each carrier from a set of discrete possible values. This implies that the problem we face is not to properly allocate the power among the different carriers to ensure the most efficient use of a power budget, as done in existing work. Rather, we address the problem of adequately choosing a power level from a range of choices to ensure optimal network performance. It is easy to see that the complexity of the problem increases exponentially with the number of cells, carriers and the granularity of the power levels available to the BSs. In addition, if one of the objectives is to maximise the network throughput, the problem becomes nonlinear since transmission data rates depend on the signal-to-interference-plus-noise ratio (SINR) experienced by the users. It follows that an optimal solution requiring a centralised approach would be both infeasible and unrealistic, given the large number of cells in the network and the flat network architecture of LTE-A.
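To give a feeling for the combinatorial size mentioned above, a short back-of-the-envelope computation (with assumed numbers of power levels, carriers, BSs per macrocell and macrocells) is enough:

```python
n_levels = 11        # selectable power fractions per carrier, including "off" (assumed)
n_carriers = 3       # aggregated component carriers (assumed)
bs_per_cell = 4      # one macro BS plus three micro BSs per macrocell (assumed)
n_cells = 20         # macrocells in the network (assumed)

per_cell = n_levels ** (bs_per_cell * n_carriers)
print(f"power configurations per macrocell: {float(per_cell):.2e}")   # ~3.1e12
print(f"joint network configurations: {float(per_cell) ** n_cells:.2e}")
```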
We therefore study the above problem through the lens of game theory, which is an excellent mathematical tool to obtain a multi-objective distributed solution in a scenario with entities (BSs) sharing the same pool of resources (available component carriers). We model each group of BSs in the coverage area of a macro BS as a team so that we can capture both (i) cooperation between a macro BS and the micro BSs with overlapping coverage areas, and (ii) the competitive interests of different macro BSs. The framework we provide, however, allows for a straightforward extension to teams which include several macro BSs. We prove that the game we model belongs to the class of [*pseudo-potential*]{} games, which are known to admit pure Nash Equilibria (NE) [@pa-potential]. This allows us to propose a distributed algorithm based on best-reply dynamics that enables the network to dynamically reach an NE representing the preferred solution in terms of throughput, user coverage and power consumption. As shown by simulation results, our scheme outperforms fixed transmit power strategies, even when advanced interference mitigation techniques such as eICIC are employed.

\[sec:rel-work\]Related work
============================

We focus our discussion on existing works on power control in cellular networks, since they are the most relevant to our study. Note that, while many papers have appeared in the literature on uplink power control, fewer exist on downlink power setting. Among these, [@coalitions_overlap] uses coalitional games to investigate power and resource allocation in HetNets where cooperation between players is allowed. Downlink power allocation in cellular networks is modeled in [@hierarchical-competition] as a Stackelberg game, with macro and femto BSs competing to maximise their individual capacities under power constraints. An energy efficient approach is instead proposed in [@hetnet-eff]. There, BSs do not select transmit power levels as we do in our work; rather, they can only choose between on and off states. Maximising energy efficiency is also the goal of [@yang-eeff], which, however, is limited to the study of resource allocation and downlink transmit power in a two-tier LTE single cell. A multi-cell network with inter-cell interference is considered in [@discrete-eeff], where energy efficiency is optimised by applying resource allocation and discrete transmit power levels. We remark that the above papers address HetNets but, unlike our work, they do not consider carrier aggregation support. Also, [@yang-eeff; @coalitions_overlap; @hierarchical-competition] formulate a resource allocation problem that aims at distributing the transmit power among the available resources under overall power constraints. In our work, instead, we do not formulate the problem as a downlink power allocation problem, rather as a power setting problem at carrier level, assuming [*each carrier has an independent power budget*]{}. Additionally, while most of the previous work [@hetnet-eff; @discrete-eeff; @yang-eeff] focuses on the HetNet interference problem only, using game theory concepts we jointly address interference mitigation, power consumption and user coverage by taking advantage of the diversity and flexibility provided by the availability of multiple component carriers. Finally, we propose a solution that enables the BSs to dynamically change their power strategies based on user distribution, propagation conditions and traffic patterns.
To our knowledge, the only existing work that investigates downlink power setting in LTE networks with carrier aggregation support is [@joint-ra-ca]. There, the authors formulate an optimisation problem that aims at maximising the system energy efficiency by optimising power allocation and user association. However, interference issues, which are one of the main challenges we address, are largely ignored in [@joint-ra-ca] as the authors consider a non-heterogeneous single cell scenario.

System model and assumptions\[sec:system\]
==========================================

We consider a two-tier LTE network composed of macro BSs controlling macrocells, and micro BSs controlling microcells. For simplicity, the user equipments (UEs) in the network area are all assumed to be carrier aggregation (CA) enabled. Note, however, that the extension to a higher number of tiers as well as to the case where there is a mix of CA-enabled and non CA-enabled UEs is straightforward. The network area is partitioned into a set of tiles, or zones, denoted by ${\mathcal{Z}}$. From the perspective of downlink power setting, all UEs within a tile $z \in {\mathcal{Z}}$ are assumed to experience the same propagation conditions from a specific BS. Also, the tile-BS association is determined by the mobile operator's network planning. In particular, following [@qualcomm], we will assume for ease of presentation that tiles (i.e., the UEs therein) are associated with the closest BS, although the extension to other, dynamic association schemes as well as to the case where a tile is served by multiple BSs can be easily obtained. All BSs share the same radio resources. In particular, a comprehensive set of component carriers (CC), indicated by ${\mathcal{C}}$, is available simultaneously at all BSs (BSs having at their disposal a subset of CCs is a sub-case of this scenario). Each CC is defined by a central frequency and a certain bandwidth. The central frequency affects the carrier’s coverage area, as the propagation conditions deteriorate greatly with increasing frequency. The level of transmit power irradiated by each BS on the available CCs can be updated periodically depending on the traffic and propagation conditions in the served tiles, or the update can be triggered by changes in such network parameters. The update time interval, however, is expected to be substantially longer than a resource block allocation period, e.g., on the order of hundreds of subframes. The BSs can choose from a discrete set of available power levels, including 0, which corresponds to switching off the CC. The possible power values are expressed as fractions of the maximum transmit power, i.e., $\boldsymbol{P}=\{0.1, 0.2,...,1\}$, where the maximum transmit power typically depends on the type of BS. As noted before, each CC at each BS has an independent power budget. In order to determine the downlink power setting, BSs can leverage the feedback they receive from their users on the channel quality that UEs experience. Also, we assume that each macro BS is connected to the set of micro BSs underlaid over its coverage area, via, e.g., optical fiber connections, which allows for swift communication between them. As a result, we assume that it is possible for the macro BS and the corresponding micro BSs to cooperate and exchange information in order to reach common decisions.
This is a reasonable assumption since it is expected that the architecture foreseen for future networks will allow BSs that are geographically close to share a common baseband [@ericsson]. Furthermore, it is fair to assume that neighbouring macro BSs can communicate with each other.

Game theory approach\[sec:game\]
================================

![\[fig:net-model\]Network model and teams. Team locations are denoted by $l_1, l_2, l_3$. Solid red lines represent team boundaries, while black solid lines represent coverage areas. Tiles are represented by grey squares.](scenario_wtiles "fig:"){width="0.5\columnwidth"}

As mentioned, the complexity of carrier power setting may be very high and impair an optimal, centralised solution in networks with many cells. We therefore adopt a game theoretic approach to the problem, which provides a low-complexity, distributed solution that is applicable in realistic scenarios. We formulate the problem of power setting in LTE HetNets with carrier aggregation as a competitive game between [*teams*]{} of BSs (see Fig. \[fig:net-model\]), where each team wants to maximise its own payoff. Indeed, given the network architecture at hand, a macro BS and the micro BSs within its coverage area have the common objective of providing the UEs located within the geographical area of the macrocell with a high data throughput. Thus, they may choose to cooperate with each other in order to improve their individual payoffs as well as contribute to the “public good” of the team. Cooperation between such BSs is beneficial especially since the inter-tier interference is most significant within the cell. At the same time, although increasing the transmit power of one BS may increase the SINR that its UEs experience, such an increase hurts the UEs being served by other BSs since all BSs share the same frequency spectrum. It follows that teams will compete with each other for the same resources, each aiming at maximising its own benefits. The game we model and its analysis are detailed below.

Game definition\[subsec:game-definition\]
-----------------------------------------

Let ${\mathcal{T}}=\{t_1,...,t_{T}\}$ be the set of teams in our network, where $T$ is the number of teams. Each team consists of a macro BS and the micro BSs whose coverage areas geographically overlap with that of the macro BS. Note that not only can team players exchange information between each other, but we can also assume that the macro BS plays the role of team leader, i.e., it makes the decisions for all team members in a way that maximises the overall team benefits. To generalise the formulation further, we will refer to the BSs forming a team $t$ as the [*locations*]{} of the team, ${\mathcal{L}}_t=\{l_1,l_2,...,l_L\}$ where, for simplicity of notation, the number of locations within a team is assumed to be constant and equal to $L$. Such a generalisation is particularly useful since the interference caused within the team depends also on the relative position between the different players. We indicate the set of tiles under the coverage area of a particular location $l$ by ${\mathcal{Z}}_l$, and their union, denoting the comprehensive set of tiles of the team, by ${\mathcal{Z}}_t$. Also, let us denote by $E_l$ the number of UEs under the coverage of location $l$, and by $E_t=\sum_{l\in{\mathcal{L}}_t}E_l$ the total number of UEs served by the team.
Each team, comprising a set of locations (BSs located at different positions within the macrocell), has to decide which transmit power level to use (out of the possible values in $\boldsymbol{P}$), at each one of those locations and for each of the available carriers ${\mathcal{C}}=\{c_1,c_2,...,c_C\}$. It follows that the strategy selected by a team $t$, $\boldsymbol{s^t}$, is an $L\times C$ matrix, where each $(l,c)$ entry indicates the power level set at location $l$ on carrier $c$. We now provide the definitions for the team utility and payoff, which are used in game theory to model the objectives of the players when choosing their strategy. Since network throughput is an important performance metric, it is natural that the utility of each team is defined as a function of the data rates it can serve to its UEs. The data rate a UE obtains is closely linked to the SINR it experiences, which depends on the transmit power chosen by the serving location (BS), the CC that is used and the transmit power levels chosen by neighbouring locations. Assuming that all UEs within the same tile experience the same amount of interference, for each team we can first define an interference matrix of size $|{\mathcal{Z}}_t| \times C$, denoted by $\boldsymbol{I^t}$. Each entry in the matrix indicates the interference experienced by UEs in tile $z$ on carrier $c$, which is caused by other teams: $$I^t_{z,c}(\boldsymbol{s^{-t}}) = \sum_{t'\in{\mathcal{T}}\wedge t'\neq t}\sum_{l'\in{\mathcal{L}}_{t'}}s^{t'}_{l',c}a_{l',z,c} \label{eq:interference}$$ where $\boldsymbol{s^{-t}}$ represents the strategies adopted by all teams other than $t$, $s^{t'}_{l',c}$ is the power level (the strategy) of team $t'$ for location $l'$ on carrier $c$ and $a_{l',z,c}$ is the factor of the attenuation ($0\leq a_{l',z,c}\leq1$) experienced by the signal transmitted from location $l'$ on $c$ when it reaches the UEs in tile $z$. The attenuation values are pre-calculated using the urban propagation models specified in [@itu]. The SINR at tile $z$, when served by location $l$ in team $t$, is: $$\gamma_{z,c}^t=\frac{s^{t}_{l,c}a_{l,z,c}}{N+\sum_{l'\in{\mathcal{L}}_t \wedge l'\neq l}a_{l',z,c}s^t_{l',c}+I^t_{z,c}} \label{eq:SINR}$$ where $N$ represents the average noise power level. Note that, besides $N$ and $I^t_{z,c}$, we have an additional term in the denominator, which stands for the intra-team interference and indicates the sum of all power received from the locations within the same team, other than location $l$. Then the utility of each team can be defined as a function of the individual tiles’ SINR values. In particular, the sigmoid-like function has been often used for this purpose in uplink power control [@pa-sigmoid]. We note that this function is suited to capture also the utility in downlink power setting, as it has features that closely resemble the realistic relationship between the SINR and the data rate. We therefore adopt the sigmoid function proposed in [@pa-sigmoid], as the utility function of each (tile, carrier) duplet in the team, and write the team utility as: $$u^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) = \sum_{l\in{\mathcal{L}}_t} \sum_{z\in{\mathcal{Z}}_l}\sum_{c\in{\mathcal{C}}}\frac{E_z}{E_t \left(1+e^{-\alpha(\gamma^t_{z,c}-\beta)} \right)} \,. \label{eq:team-utility-sigmoid}$$ The sigmoid function in Eq. (\[eq:team-utility-sigmoid\]) has two tuneable parameters, $\alpha$, which controls the steepness of the function, and $\beta$, which controls its centre. 
They can be tweaked to best meet the scenario of interest. In particular, the higher the $\alpha$, the closer the function resembles a step function, i.e., the utility becomes more discontinuous with the increase of the SINR. The higher the $\beta$, the larger the SINR for which a tile obtains a positive utility (see Sec. \[sec:peva\] for the setting of these parameters). Also, the individual utility of each tile $z$ in team $t$ is weighted by the fraction of UEs covered by the team in the tile ($E_z/E_t$) so as to give more weight to more populated tiles. This enables us to account for the user spatial distribution whenever this is not uniform over the network area. Next, we introduce a cost function to account for the interference and its detrimental effect, as well as for fairness in the service level to users. We define a first cost component that aims at penalising players who choose high power strategies, as: $\xi \sum_{l\in{\mathcal{L}}_t}\sum_{c\in{\mathcal{C}}}\bar{a}_{l,c}s^t_{l,c}$ where $\bar{a}_{l,c}$ is the link quality on carrier $c$ averaged over all tiles served by location $l$, and $\xi$ is the unit price per received power. This cost component increases with the increase in the chosen level of transmit power, however it also accounts for the propagation conditions of the users served by the location. In other words, locations that have to serve UEs experiencing poor channel quality will incur a lower cost, which ensures some level of fairness. Additionally, as clear by intuition and as shown in our technical report [@techpaper], the parameter $\xi$ can be optimally set so as to be inversely proportional to the average interference that the team experiences from other teams. This way the cost component will be smaller for a team that experiences high interference thus rightfully pushing the team to increase its transmit power. The second term of the cost function further provides fairness in the network by penalising those strategies that leave UEs without coverage. It is defined as $\delta e_t$, where $\delta$ is a unit price paid for each unserved user and $e_t$ is the fraction of UEs within the team area that experience SINR levels below a certain threshold. We remark that since a macro BS can communicate with the micro BSs in the macrocell, the team leader has knowledge of the UE density under the coverage of its team players. Thus, it can easily estimate the fraction of users, $e_t$, depending on the strategy chosen for each of its players ($\boldsymbol{s^t}$) as well as on all other teams’ strategies ($\boldsymbol{s^{-t}}$). The total cost function is then given by: $$\begin{aligned} \pi^t(\boldsymbol{s^t},\boldsymbol{s^{-t}})= \xi\sum_{l\in{\mathcal{L}}_t}\sum_{c\in{\mathcal{C}}}\bar{a}_{l,c}s^t_{l,c}+\delta e_t \label{eq:fullcost}\end{aligned}$$ where $\xi$ and $\delta$ indicate the weight that is assigned to each part of the cost function. Finally, we define the payoff of each team $t$ as the utility minus the cost paid: $$\begin{aligned} w^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) = u^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) -\pi^t(\boldsymbol{s^t},\boldsymbol{s^{-t}}) \,.\label{eq:teampayoff}\end{aligned}$$ In summary, we can formulate the problem as a competitive game $G=\{{\mathcal{T}},{\mathcal{S}},{\mathcal{W}}\}$, where ${\mathcal{T}}$ is the set of teams, ${\mathcal{S}}$ is the comprehensive set of strategies available to the teams, and ${\mathcal{W}}$ is the set of payoff functions. 
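As a companion to Eqs. (\[eq:interference\])–(\[eq:teampayoff\]), the following sketch renders the same quantities in code. The data layout (NumPy arrays indexed by location, tile and carrier) and all function names are our own choices, not the authors' implementation, and the unserved-UE fraction $e_t$ is simply passed in as a number rather than estimated.

```python
import numpy as np

def interference(other_strategies, other_attenuations, z, c):
    """Eq. (eq:interference): I^t_{z,c}, summed over the locations of all other teams.
    other_strategies[k] is the (L', C) power matrix of the k-th other team,
    other_attenuations[k] the matching (L', Z, C) array of a_{l',z,c} factors."""
    return sum(float(np.dot(s[:, c], a[:, z, c]))
               for s, a in zip(other_strategies, other_attenuations))

def sinr(s, a, serving_l, I_zc, z, c, noise):
    """Eq. (eq:SINR) for tile z on carrier c, served by location serving_l of team t."""
    signal = s[serving_l, c] * a[serving_l, z, c]
    intra = float(np.dot(s[:, c], a[:, z, c])) - signal   # other locations of the same team
    return signal / (noise + intra + I_zc)

def tile_utility(gamma, E_z, E_t, alpha, beta):
    """One (tile, carrier) term of the team utility, Eq. (eq:team-utility-sigmoid)."""
    return E_z / (E_t * (1.0 + np.exp(-alpha * (gamma - beta))))

def team_cost(s, a_bar, unserved_fraction, xi, delta):
    """Eq. (eq:fullcost): a_bar is the (L, C) array of average link qualities,
    unserved_fraction is the e_t value estimated by the team leader."""
    return xi * float(np.sum(a_bar * s)) + delta * unserved_fraction

def team_payoff(utility, cost):
    """Eq. (eq:teampayoff): w^t = u^t - pi^t."""
    return utility - cost
```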
The objective of each team is to choose a strategy that maximises its payoff. Because its payoff depends also on the strategies of the other teams, a team must make decisions accounting for the strategies, it estimates or knows, the other teams have selected. Thus, using game-theory terminology, we will refer to the strategy chosen by a team as best reply. Moreover, to reduce both power consumption and the interference towards other teams, a team will select its best reply among strategies that maximise its payoff, as follows.\ [*(i)*]{} Between strategies that are equivalent in terms of payoff, it will choose the one with the lowest total power, to reduce the overall power consumption.\ [*(ii)*]{} When indifferent between strategies with equal total power but assigned to different locations, it will select the strategy that assigns higher power levels to micro BSs that are closer to the centre of the cell, to minimise interference.\ [*(iii)*]{} When indifferent with respect to the two above criteria, it will choose the strategy that assigns higher power levels to higher frequency carriers, again, to minimise interference. Game analysis\[subsec:game-analysis\] ------------------------------------- To analyse the behaviour of the above-defined game, and discuss the existence of NEs, we rely on the definition of games of [*strategic complements/substitutes with aggregation*]{} as provided in [@pa-potential; @pa-strategic]. A game $\Gamma=\{{\mathcal{P}},{\mathcal{S}},{\mathcal{W}}\}$, where ${\mathcal{P}}$ is the set of players, and ${\mathcal{S}}$ and ${\mathcal{W}}$ are defined as above, is a game of [**[strategic substitutes]{}**]{} with aggregation if for each player $p\in {\mathcal{P}}$ there exists a best-reply function $\theta_p:\boldsymbol{S^{-p}} \to \boldsymbol{S^p}$ such that: $$\begin{aligned} 1)& \theta_p(I^p)\in \Theta(I^p)\label{eq:cond-1}\\ 2)& \theta_p\text{ is continuous in } \boldsymbol{S^{-p}}\label{eq:cond-2} \\ 3)& \theta_p(\hat{I}^p) \leq \theta_p(I^p), \forall \hat{I}^p>I^p \,.\label{eq:cond-3}\end{aligned}$$ $\Theta(I^p)$ is the set of best replies for player $p$ and $\boldsymbol{S^{-p}}$ is the Cartesian product of the strategy sets of all participating players other than $p$. $I^p$ is an additive function of all other players’ strategies, also referred to as the [*aggregator*]{} [@pa-strategic]: $$I^p(\boldsymbol{s^{-p}}) =\sum_{p'\in{\mathcal{P}}, p' \neq p} b_{p'}s_{p'}\label{eq:aggregator}$$ where $b_{p'}$ are scalar values. Condition 1) is fulfilled whenever the dependence of the payoff function on the other players’ strategies can be completely encompassed by the aggregator. Condition 2), also known as the [*continuity*]{} condition, implies that for each possible value of $I^p$, the best reply function $\theta_p$ provides unique best replies. Condition 3) implies that the best reply of the team decreases with the value of the aggregator. A game of [**[strategic complements]{}**]{} with aggregation is identical, except for condition 3), which changes into: $$\begin{aligned} \theta_p(\hat{I}^p) \leq \theta_p(I^p), \forall \hat{I}^p<I^p \,,\label{eq:cond-4}\end{aligned}$$ i.e., in the case of games of strategic complements, the best reply of the team increases with the value of the aggregator. Next, we show the following important result. [*Our competitive team-based game $G$ is a game of [**strategic complements/substitutes with aggregation**]{}.* ]{} For brevity, here we provide a sketch of the proof; the full proof can be found in [@techpaper]. 
Let us define the aggregator in our scenario as the interference experienced by a team. It is easy to see that, in the case of a single-player team and a single carrier, such aggregator satisfies condition (\[eq:aggregator\]), and that $G$ meets the conditions set out in Eqs. (\[eq:cond-1\])-(\[eq:cond-2\]) and in either Eq. (\[eq:cond-3\]) or Eq. (\[eq:cond-4\]). The extension to a multi-carrier game with multi-player teams, implies that the strategy chosen by the team is not a scalar value but a matrix. Likewise, the interference experienced by each team (i.e., the aggregator) is a matrix. Given that, and similarly to the scalar case, it can be verified that the team best reply, which is an $L\times C$ matrix, fulfils the above conditions. In particular, the continuity condition (i.e., the existence of unique best replies) is ensured for any value of the interference matrix by the list of preferences set out to reduce power consumption and inter-team interference. As a further remark to the above result, it is worth stressing that the cost introduced in Eq. (\[eq:fullcost\]) is an important function that determines whether the game is of strategic complements or substitutes. Indeed, if we consider the payoff to coincide with the utility function (i.e., $\xi=\delta= 0$), a team’s best reply will be to increase its transmit power as the interference grows, implying that the game is of strategic complements. This would lead to an NE in which all teams transmit at maximum power level, without consideration for the interference caused. Instead, imposing some $\xi>0$, the game will turn into a game of strategic substitutes. This is because the first term of the cost function is linear with the received power, and hence increasing with the chosen strategies. Therefore, the payoff function will start decreasing once the increase in the chosen transmit powers does not justify the price the team has to pay. Imposing some $\delta>0$ (i.e., activating the second cost component), the relationship between transmit power and cost becomes more complicated but it does not change the nature of the game. The fraction of unserved UEs within the team will be high for very low power strategies, then it will decrease as the transmit power is increased, and increase again as the strategies chosen cause high intra-team interference. In other words, the second cost component strengthens the trend in the payoff function imposed by the utility for increasing interference in presence of low power strategies. For those mid-level strategies that ensure good coverage, it does not affect the cost function. Instead, it resembles the behaviour of the first cost component for high power strategies, as it is still able to discriminate against high power strategies that may harm the system performance. Main results from [@pa-potential; @pa-strategic] and references therein show that games of strategic complements/substitutes with aggregation belong to the class of potential games, specifically to the subclass of [*[pseudo-potential games]{}*]{}. These games admit pure Nash Equilibria (NE), i.e., action profiles that are a consistent and stable prediction of the outcome of the game, in the sense that no player has incentive to unilaterally deviate from such strategies. Another important result that holds for such games with a discrete set of strategies is that, thanks to the continuity condition in Eq. (\[eq:cond-2\]), convergence to an NE is ensured by best reply dynamics [@pa-strategic; @pa-potential]. 
The power setting algorithm\[sec:algo\] ======================================= We now use the above model and results to build a distributed, low-complexity scheme that enables efficient downlink power setting on each CC. We first consider a single carrier and show how the system converges to the best game solution among the possible ones. We then extend the algorithm to the multiple-carrier case and discuss its complexity. Single-carrier scenario\[subsec:single-carrier\] ------------------------------------------------ Let us first focus on a single carrier and consider two possible borderline strategies that a team may adopt: the [*max-power*]{} strategy in which all locations transmit at the highest power level, and the [*min-power*]{} strategy in which all locations transmit at the lowest available power level greater than 0. Evaluating the utility values obtained for the two extreme strategies, both at the global and individual team level, it transpires that the [*min-power*]{} always outperforms the [*max-power*]{} in a HetNet scenario. Indeed, the inter-tier and inter-team interference seriously undermine the overall network performance in terms of global utility, expressed as the sum of all individual team utilities (see Eqs. (\[eq:SINR\])–(\[eq:team-utility-sigmoid\]) as well as the results in Fig. \[fig:ca-mamihist\] in Sec. \[sec:peva\]). With regard to the cost, as discussed in Sec. \[subsec:game-analysis\], the first component increases with the increase in the selected transmit power. The second component strengthens the trend imposed by the first cost component for the [*max-power*]{} strategy, and by the utility for the [*min-power*]{} strategy. This leads to the following important result. [*When multiple NEs exist, then the NE with the least overall power cost will be the preferred NE in terms of global payoff. This NE will always be reached if players start by setting their strategies to the lowest power level available.*]{} The sigmoid function is characterised by a jump reaching a saturation point. Since the power cost linearly increases with power, a team’s best reply will coincide with the lowest strategy that reaches saturation. Let us assume then that the game has two NEs, in one of which teams tend to choose higher power levels. Since all teams are playing their best replies, they are at utility saturation. Thus, playing higher power level does not ensure higher utility, however it increases the cost component, hence the payoff will be lower in the NE with the higher overall transmitted power. A longer, more formal proof can be found in [@techpaper]. We therefore devise the following procedure that should be executed by each team leader (macro BS), in order to update the BSs downlink power setting, either periodically or upon changes in the user traffic or propagation conditions. At a given update period, all teams initialise their transmit power to zero. Then, they sequentially run the Best-reply Power Setting (BPS) algorithm reported in Alg. \[alg:single-cc-br\]. We refer to the single execution of the BPS algorithm by any of the teams as an iteration. Note that the order in which teams play does not affect the convergence or the outcome of the game, since all teams start from the zero-power strategy. At each iteration, the leader of the team that is playing determines the strategy (i.e., the power level to be used at each BS in the team) that represents the best reply to the strategies selected so far by the other teams. 
The team leader then notifies this choice to the neighbouring team leaders that can be affected by it. BPS will be run by the teams until convergence is reached, which, as shown in Sec. \[sec:peva\], occurs very swiftly. Also, we remark that the strategies identified over the different iterations are not actually implemented by the BSs. Only the strategies representing the game outcome will be implemented by the BSs, which will set their downlink power accordingly for the current time period. In order to detail how the BPS algorithm (Alg. \[alg:single-cc-br\]) works, let us consider the generic $(i+1)$-th iteration and denote the team that is currently playing by $t$. The algorithm requires as input the carrier $c$ at the disposal of the BSs and the strategies selected so far by the other teams, $\boldsymbol{s^{-t}_c}(i)$. Additionally, it requires the cost component weights $\xi$ and $\delta$, the SINR threshold $\gamma_{min}$, used to qualify unserved users, and the utility function parameters $\alpha$ and $\beta$. This latter set of parameters is calculated offline and provided to the teams by the network operator. The algorithm loops over all possible strategies in the strategy set of team $t$, $\boldsymbol{S^t_c}$. For each possible strategy, $\boldsymbol{s}$, and each location $l$ within the team, it evaluates the interference experienced by the tiles within the location area (line \[line:scc-interference\]). This value is used to calculate the SINR and the utility (lines \[line:scc-sinr\]-\[line:scc-util\]), then the first cost component is updated (line \[line:power-cost\]). In line \[line:quality-cost1\], it is verified whether UEs in tile $z$ achieve the minimum SINR value. If not, the cost component $e_t$ is amended to include the affected UEs. The overall team utility for each potential strategy $\boldsymbol{s}$ is obtained by summing over the individual tile utilities weighted by the fraction of UEs present in each tile. We recall that such a weight factor ensures that the UE distribution affects the outcome of the game accordingly. Once the utility and cost are obtained, the team payoff corresponding to strategy $\boldsymbol{s}$ is calculated (line \[line:payoff\]). After this is done for all possible $\boldsymbol{s}$, the leader chooses the strategy $\boldsymbol{s^t}(i+1)$ that maximises the team payoff. Note that, according to our game model, the $\arg\max^{\star}$ function in line \[line:max\] operates as follows: in case the $\arg\max$ function returns more than one strategy, the leader applies the list of preferences reported in Sec. \[subsec:game-definition\] to choose the best strategy.

Alg. \[alg:single-cc-br\] (Best-reply Power Setting, run by the leader of team $t$):

- Input: $c$, $\boldsymbol{s^{-t}_c}(i)$, $\xi,\delta,\alpha,\beta,\gamma_{min}$ \[line:scc-input\]
- For each strategy $\boldsymbol{s}\in\boldsymbol{S^t_c}$: \[line:scc-str\]
    - Set $u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$, $w^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$, $\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$, $e_t$ to 0
    - For each location $l\in{\mathcal{L}}_t$ and each tile $z\in{\mathcal{Z}}_l$:
        - Compute $I^t_{z,c}$ by using Eq. (\[eq:interference\]) \[line:scc-interference\]
        - Compute $\gamma^t_{z,c}$ by using Eq. (\[eq:SINR\]) \[line:scc-sinr\]
        - $u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i)) \gets u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))+\frac{E_z}{E_t\left(1+e^{-\alpha(\gamma^t_{z,c}-\beta)}\right)}$ \[line:scc-util\]
        - $\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))\gets \pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))+\xi \bar{a}_{l,c}s_{l,c}$ \[line:power-cost\]
        - If $\gamma^t_{z,c}<\gamma_{min}$: \[line:quality-cost1\] $e_t\gets e_t+\frac{E_z}{E_t}$ \[line:quality-cost2\]
    - $\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))\gets \pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))+\delta e_t$ \[line:power-cost2\]
    - $w^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))\gets u^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))-\pi^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$ \[line:payoff\] \[line:scc-strend\]
- $\boldsymbol{s^{t}_c}(i+1)\gets \arg\max^{\star}_{\boldsymbol{s}}w^t(\boldsymbol{s},\boldsymbol{s^{-t}_c}(i))$ \[line:max\]

Multi-carrier scenario
----------------------

We now extend the previous procedure to the multi-carrier case. As mentioned before, the team leader has to decide on the power level to be used at each available carrier, at each location within the team. Thus the team strategy is no longer a vector, but an $L\times C$ matrix, each entry $(l,c)$ indicating the power level to be used for carrier $c$ at location $l$. A straightforward extension of Alg. \[alg:single-cc-br\] would imply that lines \[line:scc-str\]–\[line:scc-strend\] are executed for each element in the new extended strategy set. However, the new strategy set, depending on the number of carriers, may become too large and therefore make the algorithm impractical to use in realistic scenarios. Analysing the utility expression obtained in Eq. (\[eq:team-utility-sigmoid\]), we can note that, since the carriers are in different frequency bands and have separate power budgets (as foreseen in LTE-A), the utilities secured at each carrier are independent of each other. In other words, the utility a team will get at one of the carriers is not affected by the strategy chosen at another carrier. The same holds for the first cost component in Eq. (\[eq:fullcost\]). It is only the second cost component that couples the power setting at the different carriers. Indeed, in networks with carrier aggregation support, a UE can be considered unserved only if the SINR it experiences is below the threshold in all carriers. In order to obtain a practical and effective solution in the multi-carrier scenario, we take advantage of the partial independence between the carriers, and run Alg. \[alg:single-cc-br\] independently for each carrier, keeping the size of the strategy set the same as in the single-carrier scenario. Then, to account for the dependence exhibited by the second cost component, we set the order in which the per-carrier games are played, using the order of preferences listed in the game description. Since the teams prefer to use high-frequency carriers over low-frequency ones, due to their smaller interference impact, it is logical that the game is played starting from the highest-frequency carrier. It follows that low-frequency carriers will likely be used to ensure coverage to UEs not served otherwise. Importantly, our algorithm is still able to converge to an NE, since none of the teams will deviate from the strategies they chose at each carrier (a compact code rendering of the single-carrier best-reply step is sketched below).
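For readers who prefer code to pseudocode, here is a compact single-carrier sketch of the enumeration-and-$\arg\max^{\star}$ skeleton of Alg. \[alg:single-cc-br\]. The payoff evaluation is abstracted into a callable, only the first tie-breaking preference (lowest total power) is implemented, and all names are ours; this is a schematic rendering under those simplifications, not the authors' code.

```python
import itertools

def best_reply(power_levels, L, payoff_fn):
    """One BPS iteration for one team on a single carrier.

    power_levels : the discrete set P of per-location power fractions
    L            : number of locations in the team
    payoff_fn    : callable mapping a candidate strategy s (length-L tuple)
                   to w^t(s, s^{-t}), with the other teams' strategies fixed
    """
    best_w = float("-inf")
    candidates = []
    for s in itertools.product(power_levels, repeat=L):   # enumerate S^t_c
        w = payoff_fn(s)
        if w > best_w + 1e-12:
            best_w, candidates = w, [s]
        elif abs(w - best_w) <= 1e-12:
            candidates.append(s)
    # arg max*: among payoff-equivalent strategies, prefer the lowest total power.
    # (The remaining tie-breaks -- micro BSs closer to the cell centre, then
    #  higher-frequency carriers -- need extra metadata and are omitted here.)
    return min(candidates, key=sum)

# Sketch of the outer best-reply dynamics (not executable as written):
# all teams start from the zero-power strategy and sequentially update
#     strategies[t] = best_reply(P, L, lambda s: payoff(t, s, strategies))
# until no team changes its strategy.
```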
Also, since the game for the lowest frequency carrier is played last, the number of served UEs cannot be further improved without increasing the power level on the other carriers, which we already know is not a preferable move as it has not been selected earlier. Thus, although it does not search throughout the entire solution space as for the single-carrier scenario, the procedure is still able to converge to an NE that provides a close-to-optimum tradeoff among throughput, user coverage and power consumption. The results obtained in toy scenarios (see Sec. \[sec:peva\]) confirm that our scheme provides performance as good as that achieved by an exhaustive search in the strategy space. Complexity ---------- The complexity of the algorithm depends largely on the size of the strategy sets that are available to the teams, $\boldsymbol{S^t}$, since each team has to find the strategy which maximises its payoff value by searching throughout the entire set. The set size depends on the number of discrete power levels available to the BSs ($|\boldsymbol{P}|$), the number of locations in the team ($L$) and the number of CCs available at each location ($C$). In the single-carrier scenario, we have $|\boldsymbol{S^t}|=|\boldsymbol{P}|^L$, while in the multi-carrier scenario the size exponentially grows to $|\boldsymbol{S^t}|=|\boldsymbol{P}|^{LC}$, which is reduced to $|\boldsymbol{S^t}|=C|\boldsymbol{P}|^L$ by our approach. Performance evaluation\[sec:peva\] ================================== We consider the realistic two-tier LTE HetNet scenario that is used within 3GPP for evaluating LTE networks [@scenario]. The network is composed of 57 macrocells and 228 microcells. Macrocells are controlled by 19 three-sector macro BSs, while micro BSs are deployed over the coverage area so that there are 4 non-overlapping microcells per macrocell. The inter-site distance is set to 500 m. The overall network area is divided into 2,478 square tiles of equal size. The BSs are grouped into 57 five-player teams, each consisting of 1 macro BS and 4 micro BSs within its macrocell. There are about 34,400 UEs in the area, distributed non-uniformly with a user density around micro BSs that is three times higher than over the macro BS coverage area. All UEs are assumed to be CA enabled. BSs can use three CCs, each $10$ MHz wide, with the central frequencies: 2.6 GHz (CC1), 1.8 GHz (CC2) and 800 MHz (CC3). The signal attenuation and losses follow the ITU specification for urban environments [@itu], while the SINR values are mapped to throughput using the look-up table in [@sinr-map]. The maximum transmit powers for macro and micro BSs are set at $20$ W and $1$ W, respectively. The set of discrete power levels is given by $\boldsymbol{P}=\{0,0.1,0.2,...,1\}$, each representing a fraction of the maximum power. The game is played by all teams using the algorithm for the multiple-carrier scenario. The sigmoid function parameters are $\alpha=1$ and $\beta=1$, which were selected as the most appropriate to model the relationship between the selected strategy and final user rate. The SINR threshold is set at $\gamma_{min}=-10$ dB, based on [@sinr-map]. 
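To give a feeling for the complexity figures reported in the previous subsection under the scenario just described (eleven power levels, five locations per team, three CCs), here is a back-of-the-envelope comparison of the strategy-space sizes a team leader has to search; the numbers below are ours, derived directly from those parameters.

```python
P_size, L, C = 11, 5, 3                 # |P|, locations per team, component carriers
single_carrier = P_size ** L            # 161,051 candidate strategies per team and carrier
joint_multi_carrier = P_size ** (L * C) # ~4.2e15: an infeasible exhaustive joint search
per_carrier_bps = C * P_size ** L       # 483,153 with the proposed per-carrier BPS
print(single_carrier, joint_multi_carrier, per_carrier_bps)
```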
Using our results in [@techpaper], the value of the cost parameter is set as $\xi= \frac{k\alpha}{\bar{\mathbf{I}}}$, where $k$ is the weight factor used to indicate the importance we place on the first cost component and $\bar{\mathbf{I}}$ is an average value for interference calculated by the network operator, obtained by fixing the transmit power of all teams at half the maximum power. Unless otherwise specified, the weight factor $k$ is set to $0.25$ while $\delta=0.6$. These values were selected based on their effect on the performance metrics, as shown below in the simulation results. The performance of the algorithm is first compared to the optimum in a toy scenario. In the large-scale scenario described above, it is instead compared to that of four baseline strategies: the two fixed power strategies [*max-power*]{} and [*min-power*]{}, as well as to the [*max-power*]{} strategy coupled with eICIC technique, as usually applied in the literature and in practice, and with the Low Power - ABS (LP-ABS) technique [@lp-abs]. Traditional eICIC is applied with CRE for microcells set at $8$ dB and macro BS downlink transmissions muted in 25% of subframes (ABS). These values were chosen to represent the mid-range of those applied in the surveyed literature [@eicic-alg] LP-ABS uses a $6$ dB microcell biasing, ABS subframe ratio of $50\%$ and macro BS power reduction of $6$ dB during ABS, which were shown to perform best in [@lp-abs]. Note that for the strategy reached via the BPS algorithm and the two fixed power strategies, user association is distance-based and fixed, while for the power strategies coupled with eICIC, it is based on the strongest received pilot signal plus the bias, to properly model the CRE behaviour. In Fig. \[fig:ca-comparison\] we compare BPS in a multi-carrier setting with the optimal solution obtained via exhaustive search. Due to the problem complexity, the comparison is performed only for a toy scenario in which two teams compete, each consisting of one macro and one micro BS. The results, obtained by averaging the behaviour of ten different sets of teams, show that there is negative deviation in terms of payoff as expected, but BPS yields higher utility. Looking at the per-user throughput CDF curves, however, we note that the two strategies perform almost identically. In Fig. \[fig:ca-cc1-ne\], we look at a snapshot of the NE strategy reached via the BPS algorithm, in a game with 57 teams. The strategies chosen by the teams for each CC are differentiated using different shades, from white ([*zero*]{} power) to black ([*maximum*]{} power). Hexagons represent the macro BS, while circles represent micro BSs. The figure shows that CC1, i.e., the high frequency carrier, allows for higher transmit power to be used by both macro and micro BSs, due to its low interference impact. CC1 can be also used simultaneously by macro and micro BSs in the same team, which is not always the case for the other two CCs. CC2 and CC3 are used to complement each other to ensure overall coverage. Histograms of chosen strategies for macro and micro BSs, shown in Fig. \[fig:ca-mamihist1\], confirm these observations. Here note that CC1 is activated for most macro and micro BSs, however macro BSs often set low power levels for CC1, while most micro BSs set CC1 at maximum power level. On the contrary, CC2 is rarely activated for macro BSs, while CC3 (the low frequency CC) is the least utilised, and tends to be especially unfavored by micro BSs, due to its high interfering impact. 
These results validate the intuition that far-reaching low-frequency carriers are not appropriate for micro BSs; rather, they should be used only to ensure broader coverage for edge UEs.

![\[fig:ca-comparison\]Deviation from optimal strategy: utility, payoff and overall transmitted power (left) and CDF of the per-user throughput (right).](comparison_optimal100 "fig:"){width="23.00000%"} ![image](cdf_optimal "fig:"){width="23.00000%"}

![\[fig:ca-cc1-ne\]BPS strategies for a 57-team game for CC1 (top left), CC2 (top right) and CC3 (bottom). Darker shades represent higher power level, while the white color corresponds to the [*off*]{} state. Hexagons are macro BSs while circles are micro BSs.](ne_cc1_new2 "fig:"){width="18.00000%"} ![image](ne_cc2_new2 "fig:"){width="18.00000%"} ![image](ne_cc3_new2 "fig:"){width="18.00000%"}

![\[fig:ca-mamihist1\]BPS strategies for a 57-team game: chosen strategies by macro (left) and micro (right) BSs.](histma "fig:"){width="23.00000%"} ![image](hist_mi "fig:"){width="23.00000%"}

![image](glut_lp){width="28.00000%"} ![image](txpow_lp){width="28.00000%"} ![image](rate_cdf_lp){width="28.00000%"} (panels of Fig. \[fig:ca-mamihist\])

![image](pprice_eval){width="23.00000%"} ![image](qcost_eval){width="23.00000%"} ![image](cost_eval_txpow){width="23.00000%"} ![image](avnoit){width="23.00000%"} (panels of Fig. \[fig:ca-priceeval\])

Next, in the left and middle plots of Fig. \[fig:ca-mamihist\] we compare the performance of the strategy reached via our scheme (labelled by “BPS”) to the fixed baseline strategies, in terms of global utility and overall transmitted power, and for a varying number of teams. The strategy reached via the BPS mechanism outperforms all other solutions in terms of global utility, calculated as the sum of the individual team utilities. Also, the gap in performance grows with the number of teams. This gain in performance is achieved at much lower transmit power, which implies that the BPS strategy is very efficient. The overall transmit power of the BPS strategy, calculated as the sum of the selected transmit powers over all BSs and CCs in the network, closely approaches that of the [*min-power*]{} strategy and is much lower than the power consumption of all other schemes. Also, as anticipated in Sec. \[subsec:single-carrier\], the [*min-power*]{} strategy always outperforms the [*max-power*]{} strategy in terms of utility, regardless of the number of teams, while keeping the overall transmit power at the minimum level. The final comparison is performed in the rightmost plot of Fig. \[fig:ca-mamihist\], which depicts the cumulative distribution function (CDF) of the per-user throughput for the strategies under consideration. Overall, our solution outperforms all other schemes. This holds especially for the top $70\%$ of UEs.
eICIC and LP-ABS give slightly better results in ensuring a positive throughput to the worst UEs. However, BPS provides a very low fraction of UEs that are left unserved (about $2\%$), while transmitting at much lower overall power. Note also that the strategies with eICIC and LP-ABS are at a slight advantage since user association is performed based on the best downlink pilot signal, which, at least for downlink communication, is always better than the fixed distance-based user association scheme that we assumed for simplicity. In summary, it is clear that BPS is a very well-balanced strategy in terms of level of service: it provides slightly lower per-user throughput than eICIC and LP-ABS for the worst UEs, but much better throughput than all other strategies for the rest of the UEs, and it consumes very little power (almost the same as the [*min-power*]{} strategy). In Fig. \[fig:ca-priceeval\], we look at the behaviour of our algorithm. First, we evaluate the effect of $k$, i.e., the weight we assign to the cost of received power, on the global utility and the fraction of low SINR users, by varying its value from $0$ to $1$ and fixing $\delta=0.6$. We see that increasing $k$ is beneficial in terms of global utility (solid, blue line), but only up to some value (around 0.4). Beyond that, the global utility experiences a sharp drop, which signifies that, due to the high power price, BPS is more inclined to provide strategies that optimise power consumption rather than the utility. Also, $k$ has little effect on the fraction of unserved users (dashed, green line): just a small improvement can be noticed around $k=0.25$. Conversely, the cost parameter $\delta$ plays an instrumental role in ensuring that the number of UEs experiencing an SINR below the acceptable threshold is kept low, as can be seen by the dashed green line in the second plot of Fig. \[fig:ca-priceeval\] (here $k=0.25$). The third plot depicts the effect of $k$ (solid, blue line) on the overall transmitted power when $\delta=0.6$, and the effect of $\delta$ (dashed, green line) when $k=0.25$. Note that increasing $k$ leads BPS to converge to strategies with overall lower power, however, as observed before, this comes at the expense of the utility. As expected, the increase in $\delta$ does not lead to strategies with higher overall transmit power, which confirms our earlier statement that introducing the second cost component does not change the nature of the game. Finally, the rightmost plot presents the average number of iterations it takes to each team to converge to the final best strategy. Depending on the intra-team dynamics, teams may take a different time, however the game always converges quite fast (in about 8 iterations). Importantly, the average number of iterations required by each team does not grow with the number of teams. Conclusions\[sec:concl\] ======================== We proposed a novel solution for downlink power setting in HetNets with carrier aggregation, which aims to reduce interference and power consumption, and to provide high quality of service to users. Our approach leverages the different propagation conditions of the carriers and the different transmit power that macro and micro BSs can use for them. Through game theory, we framed the problem as a competitive game among teams of macro and micro BSs, and identified it as a game of strategic substitutes/complements with aggregation. 
We then introduced a distributed algorithm that enables the teams to reach a desirable NE in very few iterations. Simulation results, obtained in a realistic scenario, show that our solution greatly outperforms the existing strategies in terms of global performance while consuming little power. 3GPP Technical Report R 36.808 V10.1.0, 2013. B. Soret, H. Wang, K.I. Pedersen, C. Rosa, “Multicell Cooperation for LTE-Advanced Heterogeneous Network Scenarios,” [*IEEE Wireless Comm.,*]{} 2013. S. Deb, P. Monogioudis, J. Miernik, P. Seymour, “Algorithms for Enhanced Inter Cell Interference Coordination (eICIC) in LTE HetNets,” [*IEEE/ACM Trans. on Netw.,*]{} 2014. Z. Zhang, L. Song, Z. Han, W. Saad, “Coalitional Games with Overlapping Coalitions for Interference Management in Small Cell Networks,” [*IEEE Trans. on Wireless Comm.*]{}, 2014. S. Guruacharya, D. Niyato, E. Hossain, D.I. Kim, “Hierarchical Competition in Femtocell-Based Cellular Networks,” [*Globecom,*]{} 2010. H.-H. Nguyen, W.-J. Hwang, “Distributed Scheduling and Discrete Power Control for Energy Efficiency in Multi Cell Networks,” [*IEEE Comm. Lett.,*]{} in press. http://www.3gpp.org/technologies/keywords-acronyms/101-carrier-aggregation-explained T. Heikkinen, “A Potential Game Approach to Distributed Power Control and Scheduling,” [*Comp. Networks,*]{} 2006. E. Yaacoub, A. Imran, Z. Dawy, A. Abu-Dayya, “A Game Theoretic Framework for Energy Efficient Deployment and Operation of Heterogeneous LTE Networks,” [*CAMAD,*]{} 2013. K. Yang, S. Martin, T.A. Yahiya, J. Wu, “Energy-efficient Resource Allocation for Downlink in LTE Heterogeneous Networks,” [*VTC,*]{} 2014. G. Yu, Q. Chen, R. Yin, H. Zhang, G.Y. Li, “Joint Downlink and Uplink Resource Allocation for Energy-efficient Carrier Aggregation,” [*IEEE Trans. on Wireless Comm.,*]{} 2015. Qualcomm White Paper, “LTE Advanced: Heterogeneous Networks,” 2011. D. Bladsjo, M. Hogan, S. Ruffini, “Synchronization Aspects in LTE Small Cells,” [*IEEE Comm. Mag.,*]{} 2013. Report ITU-R M.2135-1, 2009. M. Xiao, N.B. Shroff, E.K.P. Chong, “A Utility-based Power-control Scheme in Wireless Cellular Systems,” [*IEEE/ACM Trans. on Netw.,*]{} 2003. Companion technical report: [<https://www.dropbox.com/s/ceivq2kpshwas4y/technical_report.pdf?dl=0>]{}. P. Dubey, O. Haimanko, A. Zapelchelnyuk, “Strategic Complements, Substitutes and Potential Games,” [*Games & Economic Behavior,*]{} 2004. 3GPP Technical Report 36.814, 2010. 3GPP Technical Report 36.942 V12.0.0, 2014.
--- abstract: 'Neutrinos are allowed to mix and to oscillate among their flavors. Muon and tau neutrinos in particular mix with the largest amplitude. The latest MINOS results claimed [@14] a possible difference between the neutrino and anti-neutrino masses, which would constitute a first violation of the widely assumed CPT symmetry. Atmospheric muon neutrinos are born isotropically at $E_{\nu_{\mu}}\simeq 20-80$ GeV; while up-going, they might be partially suppressed by mixing, in analogy with the historical SuperKamiokande muon neutrino disappearance into tau, leading to large-scale anisotropy signals. Here we show an independent muon rate foreseen in Deep Core, based on observed SK signals extrapolated to the DeepCore mass and its surroundings. Our rate prediction partially differs from previous ones. The $\nu_{\mu}$, $\bar{\nu_{\mu}}$ disappearance into $\nu_{\tau}$, $\bar{\nu_{\tau}}$ leads to a ${\mu}$, $\bar{\mu}$ anisotropy in vertical up-going muon tracks: in particular, along channels $3-5$ we expect a huge rate (tens of thousands of events) of neutral current events, charged current electron events and inclined crossing muons. Moreover, at channels $6-9$ we expect a severe suppression of the rate due to muon disappearance (in a CPT-conserved frame). Such an anisotropy might be partially tested by two-three string detection at $E_{\bar{\nu_{\mu}}}\geq 45$ GeV. A CPT violation may induce a more remarkable suppression of vertical up-going tracks because of a larger $\bar{\nu_{\mu}}$ reduction for $E_{\bar{\nu_{\mu}}}\geq 35$ GeV.' author: - | Fargion D.,$^{1,2}$ D’Armiento D.,$^2$\ $^1$INFN, Rome University 1, Italy\ $^2$Physics Department, Rome Univ. 1 Ple A.Moro 2, 00185, Rome, Italy\ title: | Deep Core muon neutrino rate and anisotropy\ by mixing\ and CPT violation --- Introduction ============ Neutrinos are indeed very complex particles. Their three light flavors mix in a complex way, described by a mixing matrix established only in the last few decades [@12],[@18],[@19]. Their presence may be recorded in kiloton-scale detectors or in larger ones (like the 22 kiloton Super-Kamiokande, SK). In the largest detectors, such as IceCube, the characteristic $\nu_{\mu}$, $\bar{\nu_{\mu}}$ energies are in the TeV range, where oscillations along our narrow Earth are negligible. However, the newly born Deep Core, being tuned to $\nu_{\mu}$, $\bar{\nu_{\mu}}$ energies of a few tens of GeV or even a few GeV, may hold memory of the $\nu_{\mu}$, $\bar{\nu_{\mu}}$ disappearance into tau. Several authors have foreseen the $\nu_{\mu}$, $\bar{\nu_{\mu}}$ disappearance in Deep Core [@09][@15]. We used their predictions to calibrate the influence of an eventual CPT violation on the future rate [@00]. Here we review these predictions and reconsider our preliminary estimate, based on the Super-Kamiokande one [@03a]; our estimates partially disagree with the previous results [@09][@15][@10] as well as with [@09a]. Deep Core is a new telescope, or better an event counter, blurred at low energies (below $E_{\nu_{\mu}}\leq 30$ GeV) because the muons trace tracks mostly projected along one string: an inner cone within $\sim 30^{o}$ may contain any neutrino arrival direction around the string axis azimuth angle. At higher energies, $E_{\nu_{\mu}}\geq 45$ GeV, the muon track, if inclined, may intersect two different strings, leading to a much more accurate (a few degrees) angular resolution. Therefore DeepCore may test different energy regions at different degrees of resolution.
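Before turning to the track topologies, it may help to recall the energy scale at which the disappearance discussed above is strongest. The short estimate below uses the standard two-flavour survival probability with the commonly quoted atmospheric oscillation parameters and a vertical Earth-crossing baseline; the numerical inputs are ours and are not taken from this paper.

```python
import numpy as np

dm2  = 2.4e-3     # |Delta m^2| in eV^2 (typical atmospheric value)
s22  = 1.0        # sin^2(2 theta) for nearly maximal nu_mu - nu_tau mixing
L_km = 12740.0    # vertical up-going baseline, roughly the Earth's diameter

def p_mumu(E_GeV):
    """Two-flavour nu_mu survival: P = 1 - sin^2(2 theta) sin^2(1.27 dm2 L / E)."""
    return 1.0 - s22 * np.sin(1.27 * dm2 * L_km / E_GeV) ** 2

for E in (10, 25, 40, 80, 1000):
    print(E, round(float(p_mumu(E)), 2))
# The first survival minimum falls near E = 1.27*dm2*L/(pi/2), i.e. about 25 GeV,
# so vertically up-going nu_mu of a few tens of GeV are strongly suppressed,
# while TeV neutrinos in IceCube are left essentially unaffected.
```

With these inputs the survival minimum sits near 25 GeV, which is the energy window in which the channel-dependent suppression discussed in the following is expected to show up.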
Moreover, most of the very inclined (with respect to the vertical) muons (below $E_{\nu_{\mu}}\leq 30$ GeV) may briefly intersect a single string, leading to a few (four-five) digital optical module (DOM) signals in a very short time cluster. This means that most of the inclined events may accumulate, as a noise, in the few-channel region, which is at the same time the deposit of ten (or tens of) GeV showers, mostly Neutral Current (NC) events by $\nu_{\mu}$, $\bar{\nu_{\mu}}$, $\nu_{\tau}$, $\bar{\nu_{\tau}}$, and Charged Current (CC) and NC events due to electrons, by $\nu_{e}$, $\bar{\nu_{e}}$. These few-channel signals may also record rare (nearly a thousand) tau appearance events. However, this noise will make it difficult to discover tau appearance, while muon disappearance remains observable. SuperKamiokande rate versus DeepCore ------------------------------------ The simplest way to estimate the Deep Core muon tracks and the possible tau appearance (or better, the muon disappearance) has been shown by the IceCube MC simulation [@10], as described in figure \[123\], with the additional expected influence of CPT violation [@00]. However, we show here an independent derivation of the expected Deep Core rate, based on the SuperKamiokande one [@01]. There are four main contributions to the SK up-going muons: fully contained (FC) events, born and decaying inside SK; partially contained (PC) events, born inside but escaping the SK volume; upward stopping muons, born outside but ending inside the detector; and upward through-going muons, born outside, crossing the detector and decaying in the external volume. Some care has to be taken with the last two categories, the upward stopping and upward through-going muons: the SK detector is deeply surrounded by mountain rock, while Deep Core is embedded in much less dense ice. Therefore, to calibrate the expected rate in DeepCore, we suppressed these two last rates by the density ratio ($\simeq 2.6$) and amplified them by the extrapolated volume ratio ($\frac{V_{DeepCore}}{22kT}$) at each energy range. Indeed, the Deep Core effective volume varies with the muon energy because of the photodetector thresholds and the muon Cherenkov luminosity. We considered here the preliminary Deep Core effective volume variability following the latest IceCube articles [@05],[@10],[@15],[@20]. Our result is described in the right side of figures \[123\],\[05\], in linear scale along the channel number. We assumed an average muon and anti-muon neutrino energy conversion, with track lengths projected along the string at a spread angle of $\theta\simeq 30^{o}$. The total event number derived by the simplest SK-DeepCore translation is huge: $N \simeq 97,800$. Most of these events are not vertical but inclined. Therefore, assuming a vertical beaming solid-cone suppression (also to reconcile with the total expected Deep Core rate), we selected only those events within a cone angle of $\pm 33^{o}$, obtaining a fraction ($1- \cos\theta\simeq 0.16$) of the total rate, now compatible with the preliminary Deep Core global expectation $N \simeq 16,000$. Rates and anisotropy -------------------- The calibrated muon rate in figure \[05\] shows, as a function of the grouped channel number, the rate that we foresee following SK within a narrow vertical cone along each string. These predictions do not overlap with the previous ones.
In particular, as mentioned, we foresee a huge rate of inclined events whose NC interactions produce showers observable in channels $3-4-5$: this very rough estimate is based on the NC and on the electron CC, NC showers; they amount to well above $20,000$ NC events (for $\nu_{\mu}$, $\nu_{\tau}$), with additional thousands of showers from CC and NC interactions of $\nu_{e}$ and their antiparticles. The inclined tracks of nearly horizontal muons will excite the vertical string with a characteristic arrival time pattern much shorter than that of any vertical shower event. Indeed, the time difference in arrival for a spherical shower along a string (each DOM at $7$ m separation) is nearly $\Delta t_{0}\simeq t_{0}= h/c = 23 ns$; by triangulation, any horizontal muon track and its Cherenkov cone will record a much shorter delay, $\Delta t_{0}\simeq t_{0}\cdot(1-\frac{1}{\cos(\theta_{C}) \cdot n_{ice}}) \simeq 0.08 t_{0} = 1.84 ns$. Therefore this cluster of events, nearly coincident in time, might be a key test to calibrate the muon event rate over a wide solid angle and possibly to measure the event rate at each channel. Conclusions =========== We have shown the rate of atmospheric muon neutrinos along the Deep Core string channels, with some comments on the expected anisotropy. The main results are: a) a very sharply peaked morphology in the muon rate, namely a huge noise rate in the low channel range $3-6$ (ten thousand or more events a year), followed by b) a deep minimum along channels $7-9$ due to muon disappearance (partially contrasted by an eventual CPT violation), whose rate may be below $2000$ events a year; and c) a global decay and suppression of the described events, Fig. (\[05\]), in the channel range $10-50$, all along each channel, because of muon disappearance and the additional CPT-violating anti-muon suppression: the vertical muon tracks cross the widest Earth baseline and suffer a larger disappearance, leading to a more anisotropic behavior with respect to the averaged SK rates. The present rate estimates differ from the best-known ones [@05],[@10],[@20]; however, the CPT violation influence foreseen in our previous paper [@00] plays the same role: it reduces the common muon survival, producing more anti-tau appearance from channels above $\simeq 13$. The strong modulation by CPT violation at low channel numbers ($3-6$) is quite remarkable, but it is nevertheless unusable because of the huge noise pollution from NC, electron CC, NC and nearly horizontal muon traces. [99]{} Abbasi R. *et al.* (IceCube Collaboration), arXiv:1010.3980v1. Aguilar J. A. for the IceCube Collaboration, arXiv:1010.6263. Akhmedov E., *Phys. Scripta* T121 (2005) 65–7; arXiv:hep-ph/0412029v2. Ashie Y., *Phys. Rev. D* 71 (2005) 112005; arXiv:hep-ex/0501064. Cabibbo N., *Unitary Symmetry and Leptonic Decays*, *Phys. Rev. Lett.* 10 (1963) 531–533. Cowen D., *Journal of Physics: Conference Series*, TeV Particle Astrophysics II Workshop 60 (2007) 227–230. Kodama K. *et al.* (DONUT Collaboration), *Observation of tau neutrino interactions*, *Phys. Lett.* B 504 (2001) 218; doi:10.1016/S0370-2693(01)00307-0. Fargion D., *Astrophys. J.* 570 (2002) 909–925; arXiv:astro-ph/9704205; Fargion D. *et al.*, *Astrophys. J.* 613 (2004) 1285–1301. Fargion D., D’Armiento D., Desiati P., Paggi P., arXiv:1012.3245. Giordano G., Mena O., Mocioiu I., arXiv:1004.3519v1. Gandhi R. *et al.*, *Earth matter effects at very long baselines and the neutrino mass hierarchy*, *Phys. Rev.* D 73 (2006). Grant D., Koskinen J., and Rott C. for the IceCube Collaboration, *Proc. of the 31st ICRC*, Lodz, Poland, 2009. Jeong Y. S. and Reno M. H., *Phys. Rev.* D 82 (2010) 033010.
Maki Z., Nakagawa M., and Sakata S., *Remarks on the Unified Model of Elementary Particles*, *Prog. Theor. Phys.* 28 (1962) 870; doi:10.1143/PTP.28.870. Mikheev S. P. and Smirnov A. Y., *Sov. J. Nucl. Phys.* 42 (1985) 913–917. MINOS Collaboration, website, http://www-numi.fnal.gov/PublicInfo/forscientists.html. Montaruli T., IceCube Collaboration, *Proc. of CRIS 2010 Conference*, Catania, Sep. 2010 Nakamura K. *et al.* (Particle Data Group), *J. Phys.* G 37, (2010) 075021. Perl M. L. *et al.*, *Phys. Rev. Lett.* 35 (1975) 1489. Pontecorvo B., *Mesonium and anti-mesonium*, *Zh. Eksp. Teor. Fiz.* 33 (1957) 549-551; reproduced and translated in Sov. Phys. JETP **6** (1957) 429. Pontecorvo B., *Neutrino Experiments and the Problem of Conservation of Leptonic Charge*, *Zh. Eksp. Teor. Fiz.* 53 (1967) 1717; Sov. Phys. JETP **26** (1968) 984. Schulz O. (IceCube Collaboration), *AIP Conf. Proc.* 1085 (2009) 783; C. Wiebusch and f. t. I. Collaboration,arXiv:0907.2263. Shinji M. and Seong C. P., arXiv:1009.1251v2 , 8 Sep 2010. Wiebusch C. for the IceCube Collaboration, *Proceedings of the 31st ICRC*, Lodz, Poland, July 2009. Wolfenstein L., *Phys. Rev.* D 17 (1978) 2369.
--- abstract: 'We provide a simple and short proof of the Karush-Kuhn-Tucker theorem with a finite number of equality and inequality constraints. The proof relies on an elementary linear algebra lemma and the local inverse theorem.' address: 'Department of Mathematics and Statistics, College of Science, King Faisal University, Al-Ahsa, Kingdom of Saudi Arabia' author: - Ramzi May date: ' June 21, 2020' title: 'A simple proof of the Karush-Kuhn-Tucker theorem with finite number of equality and inequality constraints' --- Introduction ============ Let $X$ be a normed real linear space. We denote by $X^{^{\prime }}$ the space of linear mappings from $X$ to $\mathbb{R}$ and by $X^{\ast }$ the dual space of $X$, i.e. the space of linear and continuous mappings from $X$ to $\mathbb{R}$. The infinite dimensional version of the famous Karush-Kuhn-Tucker theorem with a finite number of equality and inequality constraints reads as follows. \[Th\] Let $\Omega $ be an open subset of $X$ and $\{f_{i}: 0\leq i\leq n+m \}$ a family of continuously differentiable functions from $\Omega $ to $\mathbb{R}$, where $n\in \mathbb{N},m\in \mathbb{N}\cup \{0\}$. Let $x^{\ast }$ be a solution of the constrained minimization problem$$\left\{ \begin{array}{ll} \text{minimize} & f_{0}(x) \\ \text{subject to} & f_{i}(x)=0,1\leq i\leq n, \\ & f_{i}(x)\leq 0,n+1\leq i\leq n+m,\end{array}\right.$$such that the family $\{f_{i}^{\prime }(x^{\ast }): i\in J(x^{\ast })\}$ is linearly independent in $X^{\ast },$ where $$J(x^{\ast })=\{i: 1\leq i\leq n+m~\text{and}~ f_{i}(x^{\ast })=0\}.$$Then there exist $(\lambda _{1},\cdots ,\lambda _{n})\in \mathbb{R}^{n}$ and $(\mu _{1},\cdots ,\mu _{m})\in ([0,+\infty \lbrack )^{m}$ such that $$f_{0}^{\prime }(x^{\ast })+\sum_{i=1}^{n}\lambda _{i}f_{i}^{\prime }(x^{\ast })+\sum_{j=1}^{m}\mu _{j}f_{j+n}^{\prime }(x^{\ast })=0$$and$$\mu _{j}f_{j+n}(x^{\ast })=0,~\forall 1\leq j\leq m.$$ This famous theorem is a natural extension of the classical Lagrange multipliers theorem to the case of the minimization problem with a finite number of equality and inequality constraints. Its finite dimensional version was originally derived independently by Karush [@K] and by Kuhn and Tucker [@KT]. Since then, different proofs of the generalization of the Karush, Kuhn and Tucker (KKT) theorem to the infinite dimensional setting have been provided in many works (see, for instance, [@BS; @CZ; @GT; @R] and references therein). In three recent papers [@BTW0; @BT1; @BTW2], Brezhneva, Tretyakov, and Wright have given elementary and distinct proofs of the KKT theorem with, respectively, equality constraints; inequality constraints; and linear equality and nonlinear inequality constraints. In this short note, inspired essentially by the paper [@BTW2], we give a new, detailed and simple proof of the KKT theorem with a finite number of mixed equality and inequality constraints. Our proof relies essentially on a very simple but powerful lemma from linear algebra and the classical local inverse theorem in the finite dimensional setting. Proof of the Karush, Kuhn and Tucker Theorem ============================================ Before starting the proof of Theorem \[Th\], we introduce the following simple notations. Let $N\in \mathbb{N} $. 1. The canonical basis of $\mathbb{R}^N$ is the vector family $\{e_1,\cdots,e_N\}$ defined by: $e_1=(1,0,\cdots,0), e_2=(0,1,0,\cdots,0), \cdots, e_N=(0,\cdots,0,1)$. 2. $B_{\mathbb{R}^N}(0,r)$ is the open ball of $\mathbb{R}^N$ with center $0$ and radius $r>0$. 3.
$I_N$ is the identity matrix of size $(N,N)$. 4. $Id_{\mathbb{R}^{N}}$ is the identity mapping from $\mathbb{R}^{N}$ into itself. Next, we prove the following elementary linear algebra lemma. \[le\] Let $\{T_{i}: 1\leq i\leq n\}$ be a finite family of linearly independent elements of $X^{\prime }.$ Then there exists a family $\{v_{i}: 1\leq i\leq n\}$ of elements of $X$ such that $$T_{i}(v_{j})=\delta _{ij},~\forall 1\leq i,j\leq n \label{lin}$$where $\delta _{ij}$ is the Kronecker symbol. The family $\{v_{i}: 1\leq i\leq n\}$ will be called a quasi primal basis of $X$ associated to the family $\{T_{i}: 1\leq i\leq n\}.$ Define the linear mapping $T:X\rightarrow \mathbb{R}^{n},~T(v)=(T_{1}(v),\cdots ,T_{n}(v)).$ Let us prove that $T$ is surjective. Suppose that this is not true; then $T(X)$ is a proper linear subspace of $\mathbb{R}^{n}$, and hence there exists a vector $\alpha =(\alpha _{1},\cdots ,\alpha _{n})\in \mathbb{R}^{n}\backslash \{0\}$ orthogonal in $\mathbb{R}^{n}$ (with respect to the usual inner product) to the subspace $T(X)$, which implies that for every $v\in X,$$$\alpha _{1}T_{1}(v)+\cdots +\alpha _{n}T_{n}(v)=0.$$This contradicts the linear independence assumption on the family $\{T_{i}:~1\leq i\leq n\}.$ Therefore we conclude that the mapping $T$ is surjective. Consequently, for every $1\leq j\leq n,$ there exists $v_{j}\in X$ such that $T(v_{j})=e_{j}$, where $\{e_{1},\cdots ,e_{n}\}$ is the canonical basis of $\mathbb{R}^{n}.$ Clearly, the family $\{v_{i}: 1\leq i\leq n\}$ satisfies (\[lin\]). Now we are ready to prove the KKT Theorem. Let us first notice that, up to replacing $\Omega $ by the open subset $$\Omega ^{\ast }=\{x\in \Omega :f_{i}(x)<0,~ \forall n+1\leq i\leq n+m ~ \text{and} ~ i \notin J(x^{\ast })\}$$and setting $\mu _{j-n}=0$ for every $j\notin J(x^{\ast }),$ we can assume without loss of generality that $J(x^{\ast })=\{i: 1\leq i\leq n+m\} .$ Now we will first prove that the family $\{f_{i}^{\prime }(x^{\ast }): 0\leq i\leq n+m\}$ is linearly dependent in $X^{\ast }.$ We argue by contradiction.
According to Lemma \[le\], there exists a quasi primal basis $\{v_{i}: 0\leq i\leq n+m\}$ of $X$ associated to the family $\{f_{i}^{\prime }(x^{\ast }): 0\leq i\leq n+m\}.$ Since $x^{\ast }$ belongs to the open subset $\Omega $, there exists a real number $r_{0}>0$ such that the mapping defined for every $t=(t_{i})_{0\leq i\leq n+m}$ in $B_{\mathbb{R}^{m+n+1}}(0,r_{0})$ by $$\Phi (t)=(f_{0}(\sigma (t)),\cdots ,f_{m+n}(\sigma (t))),$$where $$\sigma (t)=x^{\ast }+\sum_{i=0}^{m+n}t_{i}v_{i},$$is continuously differentiable and its Jacobian matrix at $t=0$ is $$J_{\Phi }(0)=\left[ f_{i}^{\prime }(x^{\ast })(v_{j})\right] _{0\leq i,j\leq m+n}=I_{m+n+1}.$$Therefore, $\Phi ^{\prime }(0)=Id_{\mathbb{R}^{m+n+1}}$; hence, by applying the local inverse theorem, we deduce the existence of a real number $r_{1}\in ]0,r_{0}]$ such that $\Phi $ is a $C^{1}$ diffeomorphism from $U_{1}\equiv B_{\mathbb{R}^{m+n+1}}(0,r_{1})$ to an open neighbourhood $V_{1}$ of $\Phi (0)=(f_{0}(x^{\ast }),0,\cdots ,0)$ in $\mathbb{R}^{m+n+1}.$ For $\nu >0$ small enough, the vector $y_{\nu }\equiv (f_{0}(x^{\ast })-\nu ,0,\cdots ,0)$ belongs to $V_{1};$ let $t_{\nu }=\Phi ^{-1}(y_{\nu }).$ It is clear that the vector $x_{\nu }=\sigma (t_{\nu })$ belongs to $\Omega $ and satisfies $$\begin{aligned} f_{0}(x_{\nu }) &=&f_{0}(x^{\ast })-\nu , \\ f_{i}(x_{\nu }) &=&0,~\forall 1\leq i\leq n+m,\end{aligned}$$which contradicts the definition of $x^{\ast }.$ Thus, the family $\{f_{i}^{\prime }(x^{\ast }): 0\leq i\leq n+m\}$ is linearly dependent in $X^{\ast }.$ On the other hand, since $\{f_{i}^{\prime }(x^{\ast }): 1\leq i\leq n+m\}$ is linearly independent in $X^{\ast },$ we infer the existence of $(\lambda _{1},\cdots ,\lambda _{n},\mu _{1},\cdots ,\mu _{m})\in \mathbb{R}^{m+n}$ such that $$f_{0}^{\prime }(x^{\ast })+\sum_{i=1}^{n}\lambda _{i}f_{i}^{\prime }(x^{\ast })+\sum_{j=1}^{m}\mu _{j}f_{j+n}^{\prime }(x^{\ast })=0. \label{A1}$$It remains to prove that $\mu _{j}\geq 0$ for every $1\leq j \leq m$. According to Lemma \[le\], there exists $\{w_{1},\cdots ,w_{m+n}\}$, a quasi primal basis of $X$ associated to the family $\{f_{i}^{\prime }(x^{\ast }): 1\leq i\leq n+m\}.$ Proceeding as previously, we deduce that there exist $r>0$ and a neighbourhood $V$ of $0$ in $\mathbb{R}^{m+n}$ such that the mapping $\varphi :B_{\mathbb{R}^{m+n}}(0,r)\rightarrow V$ defined by $$\varphi (t)=(f_{1}(s(t)),\cdots ,f_{m+n}(s(t))),$$where$$s(t)=x^{\ast }+\sum_{i=1}^{m+n}t_{i}w_{i},\quad t=(t_{i})_{1\leq i\leq m+n},$$is a $C^{1}$ diffeomorphism. Let $1\leq j_{0}\leq m$ be a fixed integer. Since $V$ is an open neighbourhood of $0$ in $\mathbb{R}^{m+n},$ there exists $\varepsilon _{0}>0$ such that for every $\varepsilon \in ]-\varepsilon _{0},\varepsilon _{0}[,~-\varepsilon e_{j_{0}+n}\in V,$ where $(e_{1},\cdots ,e_{m+n})$ is the canonical basis of $\mathbb{R}^{m+n}.$ Hence, for every $\varepsilon \in ]-\varepsilon _{0},\varepsilon _{0}[,$ the vector $$\tilde{x}(\varepsilon )=s(\varphi ^{-1}(-\varepsilon e_{j_{0}+n}))$$belongs to $\Omega $ and satisfies $f_{j_{0}+n}(\tilde{x}(\varepsilon ))=-\varepsilon $ and $f_{i}(\tilde{x}(\varepsilon ))=0$ for every $i\in \{1,\cdots,m+n\}\backslash \{j_{0}+n\}.$ Hence, for every $\varepsilon \in ]0,\varepsilon _{0}[,$$$\frac{f_{0}(\tilde{x}(\varepsilon ))-f_{0}(\tilde{x}(0))}{\varepsilon }=\frac{f_{0}(\tilde{x}(\varepsilon ))-f_{0}(x^{\ast })}{\varepsilon }\geq 0.$$Letting $\varepsilon \rightarrow 0,$ we obtain$$f_{0}^{\prime }(x^{\ast })(\frac{d\tilde{x}}{d\varepsilon }(0))\geq 0.
\label{A2}$$For every $\varepsilon \in ]-\varepsilon _{0},\varepsilon _{0}[,$ define $$\tilde{t}(\varepsilon )=(\tilde{t}_{1}(\varepsilon ),\cdots ,\tilde{t}_{m+n}(\varepsilon ))=\varphi ^{-1}(-\varepsilon e_{j_{0}+n}).$$First, since $\varphi (\tilde{t}(\varepsilon ))=-\varepsilon e_{j_{0}+n}$ and $\varphi ^{\prime }(0)=Id_{\mathbb{R}^{m+n}},$ we have $\frac{d\tilde{t}}{d\varepsilon }(0)=-e_{j_{0}+n}.$ Using now the fact that$$\tilde{x}(\varepsilon )=s(\tilde{t}(\varepsilon ))=x^{\ast }+\sum_{i=1}^{m+n}\tilde{t}_{i}(\varepsilon )w_{i},$$we deduce that$$\frac{d\tilde{x}}{d\varepsilon }(0)=-w_{j_{0}+n}. \label{A3}$$Finally, combining (\[A1\]),(\[A2\]), and (\[A3\]) yields$$\mu _{j_{0}}=-f_{0}^{\prime }(x^{\ast })(w_{j_{0}+n})\geq 0,$$which completes the proof of the theorem. [99]{} Bazaraa MS, Shetty CM. Nonlinear Programming: Theory and Algorithms, Wiley and Sons, New York, 1979. Brezhneva OA, Tretyakov AA. Wright, S.E.: A short elementary proof of the Lagrange multiplier theorem. Optimization Letters 2012; 6: 1597-1601. doi.org/10.1007/s11590-011-0349-4 Brezhneva OA , Tretyakov AA. An elementary proof of the Karush-Kuhn-Tucker theorem in normed linear spaces for problems with a finite number of inequality constraints. Optimization 2011; 60 (5): 613-618. doi.org/10.1080/02331930903552473 Brezhneva OA, Tretyakov AA, Wright SE. A simple and elementary proof of the Karush- Kuhn-Tucker theorem for inequality constrained optimization. Optimization Letters 2009; 3: 7-10. doi.org/10.1007/s11590-008-0096-3 Chang EKP, Zak SH. An introduction to optimization. A Wiley Interscience Publication. Wiley and Sons, New York, 2001. Gould FJ, Tolle JW. Optimality conditions and constraint qualifications in Banach space. Journal of Optimization Theory and Applications 1975; 15: 667-684. doi.org/10.1007/BF00935506 Karush W. Minima of functions of several variables with inequalities as side conditions. Master thesis, University of Chicago, 1939. Kuhn HW, Tucker AW. Nonlinear programming, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Jerzy Neyman, ed., University of California Press, Berkeley. 1950; 481-492. Ritter K. Optimization theory in linear spaces. Part III: Mathematical programming in partially ordered Banach spaces. Math Ann. 1970; 184: 133-154. doi.org/10.1007/BF01350314
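As a purely illustrative companion to Theorem \[Th\] (it plays no role in the proof), the following short Python sketch checks the KKT conditions numerically for a toy problem in $\mathbb{R}^{2}$ with one equality and one active inequality constraint. The particular objective and constraints, the use of SciPy's SLSQP solver, and the least-squares recovery of the multipliers are illustrative assumptions.

```python
# Toy check of the KKT conditions: minimize x1^2 + x2^2 subject to
# f1(x) = x1 - x2 = 0 (equality) and f2(x) = 1 - x1 <= 0 (inequality).
# The minimizer is x* = (1, 1); we recover (lambda, mu) from the stationarity
# equation and verify mu >= 0 and complementary slackness.
import numpy as np
from scipy.optimize import minimize

f0 = lambda x: x[0] ** 2 + x[1] ** 2
res = minimize(
    f0, x0=[3.0, 0.0], method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda x: x[0] - x[1]},
                 # SciPy's "ineq" convention is fun(x) >= 0, i.e. -f2(x) >= 0 here
                 {"type": "ineq", "fun": lambda x: x[0] - 1.0}])
x = res.x                                # approximately (1, 1)

grad_f0 = 2 * x                          # gradient of the objective at x*
grad_f1 = np.array([1.0, -1.0])          # gradient of the equality constraint
grad_f2 = np.array([-1.0, 0.0])          # gradient of the inequality constraint
A = np.column_stack([grad_f1, grad_f2])
lam, mu = np.linalg.lstsq(A, -grad_f0, rcond=None)[0]
print(x, lam, mu)                        # x ~ (1, 1), lambda ~ 2, mu ~ 4 >= 0
print(mu * (1.0 - x[0]))                 # complementary slackness: ~ 0
```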
--- abstract: | Purpose: Segmentation of organs-at-risk (OARs) is a bottleneck in current radiation oncology pipelines and is often time consuming and labor intensive. In this paper, we propose an atlas-based semi-supervised registration algorithm to generate accurate segmentations of OARs for which there are ground truth contours and rough segmentations of all other OARs in the atlas. To the best of our knowledge, this is the first study to use learning-based registration methods for the segmentation of head and neck patients and demonstrate its utility in clinical applications. Methods and Materials: Our algorithm cascades rigid and deformable deformation blocks, and takes on an atlas image (M), set of atlas-space segmentations ($S_A$), and a patient image (F) as inputs, while outputting patient-space segmentations of all OARs defined on the atlas. We train our model on 475 CT images taken from public archives and Stanford RadOnc Clinic (SROC), validate on 5 CT images from SROC, and test our model on 20 CT images from SROC. Results: Our method outperforms current state of the art learning-based registration algorithms and achieves an overall dice score of 0.789 on our test set. Moreover, our method yields a performance comparable to manual segmentation and supervised segmentation, while solving a much more complex registration problem. Whereas supervised segmentation methods only automate the segmentation process for a select few number of OARs, we demonstrate that our methods can achieve similar performance for OARs of interest, while also providing segmentations for every other OAR on the provided atlas. Conclusions: Our proposed algorithm has significant clinical applications and could help reduce the bottleneck for segmentation of head and neck OARs. Further, our results demonstrate that semi-supervised diffeomorphic registration can be accurately applied to both registration and segmentation problems. author: - 'Charles Huang, Masoud Badiei, Hyunseok Seo, Ming Ma, Xiaokun Liang, Dante Capaldi, Michael Gensheimer, Lei Xing' bibliography: - 'atlas\_seg\_papers.bib' title: 'Atlas Based Segmentations via Semi-Supervised Diffeomorphic Registrations' --- Introduction ============ Background ---------- Timely detection and prompt treatment are crucial for modern cancer care to be effective[@DBLP:journals/corr/abs-1809-04430]. A recurring problem for many hospitals that hinders the administration of timely radiation therapy arises from the immense workload required for the radiation therapy pipeline[@DBLP:journals/corr/abs-1809-04430]. Automating the radiation therapy process, which includes the segmentation of both tumor volumes and organs-at-risk (OARs) in patients receiving treatment, drastically reduces the burden on physicians to contour large numbers of patient images in such a time-sensitive environment. As treatment planning at minimum requires the contouring of OARs surrounding a tumor volume, segmentation of OARs often accounts for the largest proportion of the overall segmentation task. Segmentation of these numerous OARs through automatic pipelines, thus, could have the potential to greatly reduce the physician workload and expedite treatment planning. Due to these reasons, developing automatic OAR segmentation tools has significant impact in the field of radiation therapy and could potentially save numerous lives by increasing patient turnover[@DBLP:journals/corr/abs-1809-04430]. 
As a result, automatic segmentation of OARs has spurred significant interest in medical and deep learning communities. Many recent works in automatic segmentation focus on the supervised paradigm of deep neural network models. For those models to be effective, they require numerous contours of OARs for training, which are manually generated by physicians. One major limitation to this approach for segmenting OARs is that clinical data for contours of OARs is often incomplete. It is common for physicians to only contour OARs near the tumor volume due to time constraints. Therefore, clinical data for contours of OARs often do not contain contours for all possible OARs. In addition to automated segmentation, there has also been great progress in developing automated dose prediction methods that produce voxel-wise dose predictions, often through employing an atlas or employing previously treated patients. The process of registering a currently treated patient’s CT scan to that of either an atlas or a previously treated patient is crucial for accurate voxel-wise dose predictions[@mcintosh2016voxel]. Similarly, cross-modality registration is a critical component in the segmentation of tumor volumes under conditions of poor contrast. When tumor volumes are difficult to delineate in CT scans, it is common for physicians to contour tumor volumes on positron emission tomography (PET) scans or magnetic resonance imaging (MRI) scans for better visibility. Contouring on these other non-CT modalities is also a critical component for evaluation of patient response to radiotherapy. Transferring contours of tumor volumes onto a CT scan then requires a robust cross-modality registration method[@Piert2018]. To address the challenges of segmenting OARs and to produce accurate registrations, we propose an automatic framework for atlas-based segmentation of head and neck OARs using a semi-supervised diffeomorphic registration to the atlas. The proposed framework generates contours of all OARs on the head and neck atlas as well as a registration of each individual patient to the atlas, thus being useful for both automated segmentation and automated treatment planning pipelines. Related Works ------------- ### Segmentation Methods There are numerous automatic segmentation software packages that are commercially available and two categories by which these segmentation methods can be distinguished: atlas-based segmentations and supervised CNN based segmentations. Conventional atlas-based segmentation methods made use of either rigid or deformable registration techniques to register an atlas to a patient. These methods typically solve the registration optimization problem by searching over the space of deformations. They then apply this deformation to contours made on the atlas to warp those contours into the patient-space. For segmentations from supervised CNN models, the general methodology is to train a U-net[@DBLP:journals/corr/RonnebergerFB15] to mimic ground truth contours of OARs that were provided by physicians. Although supervised CNN models provide the current state of the art segmentation performance, they do have some limitations compared to atlas-based methods for many OARs. For instance, supervised CNN models often rely on incomplete OAR datasets. Training a supervised model requires providing the model with ground truth contours. These OAR contours are often taken from clinical data or created manually by physicians specifically for the purpose of training these models. 
Due to the amount of labor and time required to manually contour every possible head and neck OAR, the datasets prepared for training these supervised models are often incomplete, as they do not contain contours of every possible OAR. A second limitation to these supervised CNN models is their sensitivity to visual artifacts in patient images. Distortions of the input CT image may arise from patient-based artifacts (i.e. implants, clothing, jewelry, motion, etc.), physics-based artifacts (i.e. beam hardening, aliasing, etc.), or reconstruction-based artifacts (i.e. ring artifacts, helical artifacts, etc.), which may degrade the performance of supervised models, particularly because these models rely on visual information in the image. In contrast, atlas-based segmentation models provide segmentations for all possible OARs (assuming those OARs are included in the atlas) and are more robust to artifacts, as they solve a registration problem instead of learning a function that outputs contours from CT image inputs. Nevertheless, conventional atlas based segmentation models yield a poor performance compared to neural networks for complex segmentation problems, particularly for the head and neck images which have numerous degrees of freedom (e.g. head/neck shape, head/neck rotation, head/neck bending, opening of the jaw, etc.). ### Conventional Registration Methods Volume registration can be characterized as the problem of aligning a moving image ($M$) with a fixed image ($F$). The transformation ($\phi$) that warps $M$ onto $F$ can be computed by solving an optimization problem where the target transformation minimizes a loss function. The optimization problem has the following form: $$\begin{aligned} \hat{\phi}&=\arg\min_\phi{L}(\phi,F,M)\nonumber\\ &=\arg\min_\phi{L_{similarity}}(F,M\circ\phi)+\lambda L_{regularization}(\phi)\end{aligned}$$ where $M\circ\phi$ is the warping of image $M$ by deformation field $\phi$, $L_{similarity}$ typically is the mean squared error or normalized cross correlation between images $F$ and $M\circ\phi$, $L_{regularization}$ typically is a spatial smoothness loss to preserve topography, and $\lambda$ is the regularization hyperparameter. Conventional registration methods solve the optimization problem by searching the space of deformations[@Sims2009; @Teguh2011; @HoangDuc2015; @Haq2019]. These methods can be categorized into elastic deformation models[@Thirion1998; @Bajcsy1989], deformations using b-splines[@Li2017; @Heinrich2013; @Nakano2017], statistical parametric mapping[@Ashburner2000], Demons[@Nithiananthan2009], and Markov random field based discrete optimization[@Li2017; @Glocker2008]. The allowable transformations can also be constrained to diffeomorphisms in order to preserve topology and maintain invertibility of the transformation[@Ashburner2007]. Diffeomorphic registration algorithms have seen considerable development over the years, resulting in publicly available tools such as ANTs[@Avants2008], Large Diffeomorphic Distance Metric Mapping (LDDMM)[@Cao2005; @Beg2005], diffeomorphic demons[@Vercauteren2009; @Pukala2016], and DARTEL[@Ashburner2007]. Variations of these algorithms have been adapted into commercial packages made available by vendors such as MIM, Varian, RaySearch, and Phillips[@Pukala2016]. 
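To make the registration objective above concrete, the following minimal NumPy/SciPy sketch evaluates such a loss for a toy 2D example; the images, the mean-squared-error similarity, the finite-difference smoothness penalty, and the value of $\lambda$ are illustrative assumptions rather than the exact choices used in any of the packages cited below.

```python
# Minimal sketch of L(phi, F, M) = L_similarity(F, M o phi) + lambda * L_regularization(phi)
# for a dense 2D displacement field phi of shape (2, H, W).
import numpy as np
from scipy.ndimage import map_coordinates

def warp(M, phi):
    """Warp image M by the displacement field phi: sample M at g + phi(g)."""
    H, W = M.shape
    grid = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"))
    return map_coordinates(M, grid + phi, order=1, mode="nearest")

def registration_loss(phi, F, M, lam=0.1):
    moved = warp(M, phi)
    l_sim = np.mean((F - moved) ** 2)             # mean-squared-error similarity
    grads = np.gradient(phi, axis=(1, 2))         # spatial derivatives of phi
    l_reg = sum(np.mean(g ** 2) for g in grads)   # smoothness regularizer
    return l_sim + lam * l_reg

rng = np.random.default_rng(0)
F = rng.random((64, 64))
M = np.roll(F, shift=2, axis=0)                   # M is F shifted by two pixels
phi = np.zeros((2, 64, 64))
phi[0] += 2.0                                     # the displacement undoing the shift
print(registration_loss(phi, F, M))               # small; residual comes only from the clipped boundary rows
```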
Under a probabilistic formulation, priors can also be specified on the deformation field[@Ashburner2007; @Simpson2012], and the underlying cost function can be minimized using an iterative optimization approach to find a deformation field distribution that resembles the prior. Our proposed method improves on a deep learning-based formulation proposed in Voxelmorph[@DBLP:journals/corr/abs-1809-05231; @DBLP:journals/corr/abs-1903-03545] and will be discussed in subsequent sections. We also provide further background on learning-based registration in Supplemental Materials sections A-C. Materials and Methods ===================== Problem Formulation ------------------- The goal of this paper is to find a deformation field that solves an atlas registration problem and then use the deformation field solution to warp atlas-space contours of OARs to the patient-space. The inputs to our registration problem are head and neck CT scans of individual patients (which we call the fixed image, $F$) , the Brouwer head and neck atlas (which we call the moving image, $M$)[@Brouwer2015], and OAR contours defined on the Brouwer head and neck atlas ($S_A$). Both $M$ and $F$ are certain intensity functions in $\mathbb{R}^3$, and the proposed model attempts to generate the moved image $M'$ such that $M'$ is similar to $F$. $$\begin{aligned} F\approx M'=M\circ\phi^{aff}\circ\phi^{diff_1}\circ\phi^{diff_2}\end{aligned}$$ Here, $\phi^{aff}$ and $\phi^{diff}$ denote the deformations for an affine transform and dense diffeomorphic transform, respectively. Under a generative model, $\phi^{diff}$ is parametrized by the latent variable $z$ that either defines the velocity field (in the case of probabilistic Voxelmorph[@DBLP:journals/corr/abs-1903-03545]) or a low-dimensional embedding (as in a variational autoencoder[@Krebs2019]). Clearly, the definition of “$\approx$” changes with the particular registration problem, and we further define “$\approx$” in terms of a training objective and evaluation metric in upcoming subsections. The proposed approach learns network weights to minimize the training objective in either an unsupervised or semi-supervised manner (i.e. unsupervised if no ground truth deformation fields or OAR segmentations are provided and semi-supervised if only the OAR segmentations are provided), and we begin by describing the network and its building blocks below. ![image](whole_net.png){width="\textwidth"} ![image](localization_net.png){width="\textwidth"} Network Overview ---------------- Our proposed network, presented in Figure \[whole\_net\], consists of a cascade of affine and dense diffeomorphic deformation blocks. The localization network, depicted in Figure \[localization\_net\], learns the deformation fields $\phi^{aff}$ and $\phi^{diff}$ given inputs $M$ and $F$. Warping of images is performed using a spatial transformer layer [@DBLP:journals/corr/JaderbergSZK15], which takes as input an image and a deformation field (see Figure 5 in the Supplemental Materials). Based on a training objective function, the network uses stochastic gradient descent methods to find the 12 parameters that specify an affine transform, as well as the voxel-wise velocity field. As head and neck registration typically involves large displacements, our model leverages a cascade of both affine and dense transforms. This cascade is made possible by constraining the transform to a diffeomorphism, requiring the transform to be smooth and invertible. 
To enforce smoothness, we incorporate gaussian smoothing of the learned velocity field directly into the network and add a KL divergence term between the approximate posterior and prior (described later in the section on losses, as well as in the Supplemental Materials in section C on diffeomorphic transforms). The network then uses scaling and squaring integration layers (with the default step size of 8) on the velocity field, as described in various implementations of Voxelmorph[@DBLP:journals/corr/abs-1809-05231; @DBLP:journals/corr/abs-1903-03545], to constrain the transformation to a diffeomorphism. In general, the network takes the moving image $M$, first warps $M$ with an affine displacement field, and then warps the affine transformed moving image with the dense displacement field cascade to get the warped image $M'$. The overall warping can be described with the following equation: $$\begin{aligned} M'=M\circ\phi^{aff}\circ\phi^{diff_1}\circ\phi^{diff_2}\end{aligned}$$ Then, to warp contours of the OARs (which we will call S from here on out) from the atlas-space to the patient space, we leverage the cascading property of diffeomorphisms as follows: $$\begin{aligned} S_A'=S_A\circ\phi^{aff}\circ\phi^{diff_1}\circ\phi^{diff_2}\end{aligned}$$ We selected 8 integration steps for scaling and squaring to satisfy the trade-off between voxel folding and computation time (i.e. increasing the number of integration steps reduces the number of folding voxels but increases the computation time)[@DBLP:journals/corr/abs-1809-05231; @DBLP:journals/corr/abs-1903-03545]. Localization Network -------------------- The proposed localization network utilizes a U-net architecture for the dense transformations and a traditional CNN for the affine transformation. As we want to incorporate as many dense transformations into the cascade as memory permits, we must compromise by limiting the localization network sizes, which provides the added benefit of preventing overfitting. In comparison to the localization networks of previously proposed frameworks like Voxelmorph and Microsoft’s Volume Tweening Network (VTN)[@DBLP:journals/corr/abs-1902-05020], our localization network incorporates dense blocks of convolutional layers, which we found to improve training convergence and testing performance (Figure \[localization\_net\]). The implementation details of the localization network, such as convolution filter dimensions, number of convolution filters, feature sizes, etc., are shown in Figure \[localization\_net\]. Each localization network is tasked with extracting deformation field parameters that are used in the deformation blocks to warp the moving image. The model was implemented using Keras[@chollet2015keras] with a Tensorflow[@Abadi:2016:TSL:3026877.3026899] backend. Objective Function ------------------ Based on our assumptions, we formulate the objective function to be minimized through a deep learning approach. Let the localization network be parametrized by $\theta$, we can then minimize the following loss using stochastic gradient descent methods: $$\begin{aligned} L(M,F,S_A,S_A';\theta)=&L_{recon-diff}\nonumber\\ &+L_{recon-affine}\nonumber\\ &+L_{segmentation-sim}\nonumber\\ &+D_{KL}(q_{diff_1}(z_{diff_1}| F;M)||p(z))\nonumber\\ &+D_{KL}(q_{diff_2}(z_{diff_2}| F;M)||p(z))\end{aligned}$$ To improve readability, we choose to only include the overall objective function here. 
A more detailed explanation of the overall objective function and each component of Equation 5 can be found in Supplemental Materials section D. Results ======= Experimental Setup and Evaluation --------------------------------- ![image](boxplot.png){width="\textwidth"} The dataset used in our experiment consists of 500 CT scans of head and neck patients taken from the National Cancer Institute’s Quantitative Imaging Network dataset[@Fedorov2016], McGill’s head and neck PET-CT dataset[@Vallieres2017], Ibragimov’s head and neck OAR dataset[@ibragimov2017], and Stanford Radiation Oncology Clinic (SROC) data. Scanner details, acquisition dates, age, and sex varied across the datasets used. All scans were reoriented to a standard orientation, cropped to a head and neck window above the fourth thoracic vertebrae, down sampled to a size of 128x128x128, thresholded to a soft tissue interval between -170 and 230 HU[@Hoang2010], and normalized to between 0 and 1. Training of our registration model involves matching the input CT images and OAR contours between each patient and an atlas. While OAR contours on the patient CT images are unnecessary for training in an unsupervised manner, we incorporate them into our objective function following typical semi-supervised training protocol. As we only had access to OAR contours for the 40 SROC patients, the remainder 460 patient images were unlabeled. In order to mitigate the discrepancy between the number of labeled and unlabeled images in our training set, we generated pseudo-labels of the 460 originally unlabeled images using a separately trained supervised CNN. The set of ground truth contours for our data consists of 8 OARs, including the mandible (M), left optic nerve (lON), right optic nerve (rON), left parotid (lP), right parotid (rP), spinal cord (SC), left submandibular gland (lSG), and right submandibular gland (rSG). Our experiment used the Brouwer head and neck atlas [@Brouwer2015], which defines a set of 36 OARs that encompass the 8 OARs mentioned above. The dataset was split into a training set of 475 scans, a validation set of 5 scans, and a testing set of 20 scans. In order to ensure that the generated pseudo-labels do not confound our results, all 5 validation scans and all 20 test set scans contained segmentations that were manually contoured by physicians as part of the radio therapy pipeline (i.e. SROC data). To evaluate the performance of the proposed model, we calculate the segmentation overlap—dice score coefficient—between the warped atlas segmentations ($S_A'$) and the segmentations annotated on each patient ($S_F$). As our test set only has ground truth contours for 8 OARs, our evaluation pertains only to those 8 OARs, but all 36 OARs can be warped to the patient space (as shown in Figure \[whole\_net\] and Figure \[example\_seg\]). Comparison to Other Methods --------------------------- For all comparisons, we used a learning rate of $10^{-5}$, a batch size of 1 (due to memory constraints) and train all models until convergence. Table 2 in the Supplemental Materials summarizes the number of network parameters and values for regularization parameters used. There have been numerous works that already compare the performance of unsupervised learning based models to conventional non-learning based registration models (i.e. 
SyN, Elastix, etc.), and these works show that the performance of learning based models is comparable with or exceeds the performance of conventional models[@DBLP:journals/corr/abs-1809-05231; @DBLP:journals/corr/abs-1903-03545; @DBLP:journals/corr/abs-1902-05020; @Krebs2019]. For clarity, we choose to compare our proposed model performance to implementations of the current state of the art learning-based models like Voxelmorph, VTN, and VAE-like networks. We decompose these other models into 4 baselines (an Info VAE, a non-generative cascaded model with 1 affine and 1 dense block, a non-generative cascaded model with 1 affine and 2 dense blocks, and a generative cascaded model with 1 affine and 2 dense blocks). For atlas-based segmentation of head and neck patients, our methods outperform other state of the art registration methods, with key results summarized in Figure \[boxplot\]. Compared to the other learning-based algorithms we tested, our method achieves the best performance on this dataset for OAR dice score. Examining the results in Figure \[boxplot\] reveals that the non-generative models tend to overfit to the training data, which we mitigate in our proposed network with the incorporation of gaussian smoothing and regularization terms. For our task, it appears that VAE-like networks tend to underfit the training data. Our initial comparisons used a VAE-like model like Krebs et al.[@Krebs2019], but due to poor convergence we choose not to include those comparisons in our results. We instead develop an Info VAE network[@DBLP:journals/corr/ZhaoSE17b] in order to mitigate underfitting, but even that does not fully resolve the underfitting issue. ![image](figure_example_segs.png){width="\textwidth"} Discussion ========== Registration of head and neck patients often involves large deformations due to the complexity of different body geometry, position, rotation, and bending angle. Cases that require these large deformations can be better fit by breaking down the overall deformation into a cascade of smaller, more manageable ones. As our framework cascades deformations (i.e. Equation 3 and 4), it maintains a diffeomorphic property if each individual deformation in the cascade remains diffeomorphic[@Ashburner2007], which we can enforce by assuming a stationary velocity field and integrating that velocity field using a scaling and squaring method (see section C in the Supplementary Materials). As depicted in Figure \[boxplot\], incorporating more deformation blocks into the cascade allowed for improved training-time registration performance, which can lead to overfitting as is the case with Baselines 1 and 2. Using more deformation blocks in the cascade improves training efficiency, because it allows the network to perform a coarse-to-fine alignment with each alignment involving smaller displacements than if the network had only used one deformation block. Our proposed method uses a variational approach while leveraging multiple dense deformation blocks in a cascade. The variational regularization terms, along with a built-in gaussian smoothing of the velocity field, help to reduce overfitting for our proposed method. Moreover, our method utilizes an improved localization network composed of dense convolution blocks. Along with the semi-supervised pseudo-labelling of our training data, these improvements contribute to the improved performance of our proposed methods as compared to other state of the art learning-based registration algorithms. 
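For reference, here is a minimal sketch of the Dice overlap used in the evaluation described above; the label volumes, the label values 1-8, and the array shape are hypothetical stand-ins for the warped atlas segmentations ($S_A'$) and the manual patient segmentations ($S_F$).

```python
import numpy as np

def dice_score(seg_a, seg_b, label):
    """Dice overlap for one OAR label between two integer label volumes."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else np.nan

# Hypothetical usage: warped atlas labels vs. manual patient labels,
# averaged over the 8 evaluated OARs (labels 1..8 here are an assumption).
warped_atlas = np.random.default_rng(1).integers(0, 9, size=(128, 128, 128))
patient_seg  = np.random.default_rng(2).integers(0, 9, size=(128, 128, 128))
scores = [dice_score(warped_atlas, patient_seg, lab) for lab in range(1, 9)]
print(np.nanmean(scores))
```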
![image](table2.png){width="\textwidth"} \[table2\] As with any method intended for clinical use, it is natural to question performance under less than ideal situations. Other segmentation methods, such as supervised deep learning ones, may underperform in the presence of large image artifacts. In head and neck data, the presence of metal artifacts from dental implants, for instance, can obscure surrounding OARs and degrade the performance of segmentation methods applied to those images. Under similar conditions presented in Figure 4a-b, we can appreciate the robustness of our proposed methods to these image artifacts. There are a few potential limitations to our methods. As our current study only uses the Brouwer atlas, performance is largely capped by the similarity between the Brouwer atlas and patient images. In edge cases where there are large differences between the atlas and patient, it may be better to first merge multiple atlases or retrain a model using a single, more representative atlas. Our comparisons use ground truth OAR contours acquired from routine clinical workflow. While this does improve the relevance of our results to clinical practice, it also introduces biases that may not be as present had the ground truth contours come from multiple expert raters following a specific atlas. To further determine the usefulness of our proposed algorithm for clinical applications, we compare it to the current state of the art supervised learning segmentation algorithms and traditional multi-atlas-based auto segmentation algorithms (multi-ABAS). Though we cannot feasibly test all of these algorithms on our particular dataset, we would like to follow the precedent of previous works and present a rough comparison1. Table 1 demonstrates that the performance of our algorithm compares very favorably against traditional multi-ABAS algorithms and matches the performance of current state of the art supervised segmentation algorithms, making our algorithm highly relevant to the clinic. Conclusions =========== Our results demonstrate the clinical applicability of atlas-based segmentation through semi-supervised diffeomorphic registration. We show that our algorithm exceeds the performance of other learning-based registration algorithms and traditional atlas-based auto segmentation algorithms while providing comparable performance to that of current state of the art supervised segmentation algorithms. This work presents the approach behind learning-based registration frameworks and can be further extended to other clinically relevant registration problems (i.e. multimodal registration, atlas-based dose prediction, etc.) or atlas-based segmentation of other regions of the body (i.e. lungs, prostate, etc.). Supplemental Materials {#supplemental-materials .unnumbered} ====================== Deep Learning Based Registration Methods ---------------------------------------- Conceptually, registration methods that use deep learning require methods for feature extraction and spatial transformation of images. Feature extractors are tasked with transforming high dimensional inputs into meaningful low-dimensional features, and since the inputs to the registration model are images (i.e. intensity matrices M and F), various CNN based architectures are typically used for feature extraction. These extracted features can then provide useful information on how best to warp the moving image to the fixed image. 
The mechanism typically used for warping images is some variant of a spatial transformer[@DBLP:journals/corr/JaderbergSZK15], though there is a mechanism for aligning images using CNNs to perform patch-wise matching that does not require a spatial transformer[@Dalca2016]. These patch-wise methods, however, are computationally prohibitive. ![Fig. 5: visualization of the spatial transformer block that takes in the deformation field and image being transformed and outputs the transformed image.[]{data-label="st"}](spatial_transformer.png){width="50.00000%"} Spatial warping in deep learning registration is typically accomplished through a Spatial Transformer layer[@DBLP:journals/corr/JaderbergSZK15], which takes an image and some transformation parameters as inputs and generates a warped version of the input image. The spatial transformer layer in our proposed framework performs the following steps:\ 1. warp every voxel $g$ to a new off grid location $g'$ such that $g'=g+\phi(g)$ where $\phi(g)$ is a voxel-wise shift at voxel $g$ 2. compute a linear interpolation of the image at the new location $g'$ The first category of deep learning-based registration methods involves training a neural network to map a pair of input images to a ground truth deformation field. These supervised registration methods rely on ground truth deformations that are usually obtained from conventional registration methods. Due to the reliance on a ground truth deformation field, the utility of training these supervised models may be more limited[@Krebs2017; @Sokooti2017]. Based on the notion of learning deformation fields to perform registration tasks, several works have used neural networks to learn deformation fields in an unsupervised, end-to-end fashion[@DBLP:journals/corr/abs-1809-05231; @DBLP:journals/corr/abs-1903-03545; @DBLP:journals/corr/abs-1902-05020; @Krebs2019]. Instead of learning from ground truth deformation fields, these unsupervised approaches learn deformation fields that minimize a registration objective function. Similar to conventional registration methods, deep learning-based methods can constrain the deformation field to be diffeomorphic. Further, recent work by Dalca et al. uses a variational inference approach that tries to minimize the Kullback–Leibler (KL) divergence between their predicted deformation field distribution (posterior) and a gaussian deformation prior[@DBLP:journals/corr/abs-1903-03545]. Other works may not use variational inference but still constrain the learned deformation to a diffeomorphism and regularize on the deformation field directly[@Krebs2019]. Generative Model ---------------- Under a generative model formulation, the network learns mean $\mu_z$ and $\log\Sigma_z$ of the velocity field distribution $z$ instead of the velocity field directly. The velocity field distribution is sampled from the predicted mean and $\log\Sigma_z$ using the reparameterization trick: $$\begin{aligned} z=\mu_z+\epsilon\sqrt{e^{log{\Sigma}}}\text{, where} \epsilon\sim\mathcal{N}(0,I)\end{aligned}$$ There are also attempts in literature to use a VAE-like approach where the latent variable $z$ represents a low dimensional embedding instead of the velocity field distribution[@Krebs2019]. 
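A minimal NumPy sketch of the reparameterization trick shown above follows; the field shape and the constant log-variance are illustrative assumptions. The point of the trick is that the sample becomes a deterministic function of $(\mu_z,\log\Sigma_z)$ and independent noise $\epsilon$, which is what allows backpropagation through the sampling step in the actual network.

```python
# z = mu + eps * sqrt(exp(log Sigma)), with eps ~ N(0, I)
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros((3, 32, 32, 32))                 # predicted mean of the velocity field (toy shape)
log_sigma = np.full((3, 32, 32, 32), -4.0)     # predicted log-variance (toy value)
eps = rng.standard_normal(mu.shape)            # auxiliary noise, independent of the network outputs
z = mu + eps * np.sqrt(np.exp(log_sigma))      # sampled voxel-wise velocity field
print(z.std())                                 # ~ exp(-2) = 0.135, as expected
```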
A key assumption made, as done in reference[@DBLP:journals/corr/abs-1903-03545], is to model the prior stationary velocity field distribution as a multivariate gaussian: $$\begin{aligned} p(z)=\mathcal{N}(0,\Sigma_z)\end{aligned}$$ where $z$ is the latent variable of voxel wise velocities that parametrize the warping function $\phi_z$, $\mathcal{N}(\mu,\Sigma)$ is the multivariate gaussian with mean $\mu$ and covariance $\Sigma$, and $p$ is the prior probability. In the case of the VAE-like approach, $$\begin{aligned} p(z)=\mathcal{N}(0,I)\end{aligned}$$ The assumption that the posterior distribution can be approximated with a multivariate gaussian typically applies after the moving image M has already been warped by an affine transform. Diffeomorphic Transforms ------------------------ The diffeomorphism $\phi$ is specified by integrating the following ODE using the scaling and squaring method: $$\begin{aligned} \frac{\partial\phi}{\partial t}=v^{(t)}(\phi^{(t)})\end{aligned}$$ Diffeomorphisms are generated by initializing $\phi$ to the identity ($\phi^{(0)}=Ig$) and integrating over unit time to compute $\phi^{(1)}$ [@Ashburner2007; @DBLP:journals/corr/abs-1903-03545]. Following our generative model approach, we define the velocity field $v$ as the latent variable $z$, but the diffeomorphism $\phi$ can be specified generally for a velocity field without taking a generative model approach. We will subsequently notate $\phi_z$ as being parameterized by the latent variable $z$. The integration is then computed using the scaling and squaring method where a large number of small deformations is used to maintain accuracy. The scaling and squaring approach assumes that the number of time steps is a power of two and computes $$\begin{aligned} {\phi_z}^{(1)}=exp(z)\end{aligned}$$ We then can derive the recurrence as follows: $$\begin{aligned} {\phi_z}^{(1)}&={\phi_z}^{(1/2)}\circ{\phi_z}^{(1/2)}\nonumber\\ {\phi_z}^{(1/2)}&={\phi_z}^{(1/4)}\circ{\phi_z}^{(1/4)}\nonumber\\ {\phi_z}^{(1/4)}&={\phi_z}^{(1/8)}\circ{\phi_z}^{(1/8)}\nonumber\\ {\phi_z}^{(1/2^{t-1})}&={\phi_z}^{(1/2^t)}\circ{\phi_z}^{(1/2^t)}\nonumber\\ {\phi_z}^{(1/2^t)}&=g+\frac{z}{2^t}\end{aligned}$$ In practice, we set $t$ to be large so that each deformation is small. Equation 10 then ensures the mapping is diffeomorphic based on the intuition that the Jacobian of a deformation that conforms to an exponential is always positive. ![image](table1.png){width="\textwidth"} \[table1\] Under a probabilistic formulation, the aim is then to estimate the posterior probability of $z$ given the observed images ($p(z|F;M)$) and find the most probable estimate of the values for $z$, which is known as the maximum a posteriori (MAP) estimate. 
If we approximate the likelihood $p(F| z;M)$ as a multivariate gaussian, $$\begin{aligned} p(F| z;M)=\mathcal{N}(M',\Sigma_F)\end{aligned}$$ A variational learning approach can then be followed, where we minimize the Evidence Lower Bound (ELBO) loss: $$\begin{aligned} \mathcal{L}&_{ELBO}\nonumber\\&=\mathbb{E}_{q(z|F;M)}\big[log\frac{p(F,z;M)}{q(z|F;M)}\big]\nonumber\\ &=\log{p(F;M)}-D_{KL}(q(z|F;M)||p(z|F;M))\nonumber\\ &=-\mathbb{E}_{q(z|F;M)}[\log p(F|z;M)]\nonumber\\ &\hspace{10pt}+D_{KL}(q(z|F;M)||p(z))\nonumber\\ &\hspace{10pt}+\log p(F;M)\nonumber\\ &=-\mathbb{E}_{q(z|F;M)}[\log p(F|z;M)]\nonumber\\ &\hspace{10pt}+\frac{1}{2}[\text{tr}(\lambda D\Sigma_{q}-\log\Sigma_{q})+\mu^T_{q}\Sigma_{p(z)}\mu_{q}]\nonumber\\ &\hspace{10pt}+\log p(F;M) \end{aligned}$$ The equation above follows from a derivation in the recent Voxelmorph paper[@DBLP:journals/corr/abs-1903-03545]. For our objective function, we define $$\begin{aligned} -\mathbb{E}_{q(z|F;M)}[\log p(F|z;M)]& \approx L_{recon-diff}\nonumber\\ &+L_{recon-aff}\nonumber\\ &+L_{segmentation-sim} \end{aligned}$$ which is further expanded on in the section below on the objective function. In the VAE approach[@Krebs2019], the main difference is that the prior is assumed to be a unit gaussian, so the KL divergence term in the above equation is simpler to compute: $$\begin{aligned} \mathcal{L}_{ELBO} &=-\mathbb{E}_{q(z|F;M)}[\log p(F|z;M)]\nonumber\\ &\hspace{10pt}+D_{KL}(q(z|F;M)||p(z))\nonumber\\ &\hspace{10pt}+\log p(F;M)\nonumber\\ &=-\mathbb{E}_{q(z|F;M)}[\log p(F|z;M)]\nonumber\\ &\hspace{10pt}+\frac{1}{2}\sum_{i=1}^{\Omega}{(\sigma_i^2+\mu_i^2-\log\sigma_i^2)}\nonumber\\ &\hspace{10pt}+\log p(F;M) \end{aligned}$$ As we found vanilla VAE implementations to underfit the training data set, we decided to implement an Info VAE instead. In the Info VAE, we replace the KL divergence term with a maximum mean discrepancy loss where $k(\cdot,\cdot)$ is any positive definite kernel[@DBLP:journals/corr/ZhaoSE17b]: $$\begin{aligned} \mathcal{L}_{MMD}&=\mathbb{E}_{q(z|F;M),q(z'|F;M)}[k(z,z')]\nonumber\\ &\hspace{10pt}+\mathbb{E}_{p(z),p(z')}[k(z,z')]\nonumber\\ &\hspace{10pt}-2\mathbb{E}_{q(z|F;M),p(z')}[k(z,z')] \end{aligned}$$ Objective Function (cont.) -------------------------- For convenience, we reproduce Equation 5 (the objective function) below: $$\begin{aligned} \mathcal{L}(M,F,S_A,S_A';\theta)=&\mathcal{L}_{recon-diff}\\ &+\mathcal{L}_{recon-affine}\\ &+\mathcal{L}_{segmentation-sim}\\ &+D_{KL}(q_{diff_1}(z_{diff_1}| F;M)||p(z))\\ &+D_{KL}(q_{diff_2}(z_{diff_2}| F;M)||p(z))\end{aligned}$$ For all reconstruction losses ($\mathcal{L}_{recon}$), we compute the mutual information as follows: $$\begin{aligned} I(X,Y;\theta)=\sum_{x\in X}\sum_{y\in Y}{p(x,y)\log\frac{p(x,y)}{p(x)p(y)}}\end{aligned}$$ In practice, we compute the mutual information using 32 bins where the standard deviation is calculated as half the width of each bin. We then define each component of the overall loss function below. 
The first component ($\mathcal{L}_{recon-diff}$) captures the similarity between the fixed image and the final moved image: $$\begin{aligned} \mathcal{L}&_{recon-diff}(M,F,\phi^{aff},\phi^{diff_1},\phi^{diff_2};\theta)\nonumber\\ &=I(M{\circ\phi}^{aff}\circ\phi^{diff_1},F;\theta)\nonumber\\ &\hspace{10pt}+I(M\circ\phi^{aff}\circ\phi^{diff_1}\circ\phi^{diff_2},F;\theta)\end{aligned}$$ The second component ($\mathcal{L}_{recon-affine}$) captures the similarity between the fixed image and the moving image warped using an affine transform: $$\begin{aligned} \mathcal{L}&_{recon-affine}(M,F,\phi^{aff};\theta)\nonumber\\ &=I(M\circ\phi^{aff},F;\theta)\end{aligned}$$ The third component ($\mathcal{L}_{segmentation-sim}$) captures the similarity between segmentations $S_F$ and $S_A'$. Recall that $S_A'$ is the set of warped atlas segmentations (where we use 8 out of 36 OARs to generate the segmentation mask) and $S_F$ is the 8 OAR segmentation of $M$ either manually contoured or generated by deploying a supervised CNN on image $F$—we describe this process in detail in the experimental setup. We define this component using the MSE as follows: $$\begin{aligned} \mathcal{L}_{segmentation-sim}(S_F,S_A';\theta)=\frac{1}{2\Omega}\sum_{\Omega}[S_F-S_A']^2\end{aligned}$$ The fourth component $D_{KL}(q(z|F;M)||p(z))$ represents the KL divergence between the approximate posterior $q(z|F;M)$ and our assumed multivariate gaussian prior $p(z)$. The derivation for our overall loss function is shown in the Diffeomorphic Transforms section of our Supplemental Materials, and part of the derivation is reproduced here: $$\begin{aligned} D_{KL}(q(z|F;M)||p(z))=\frac{1}{2}[\text{tr}(\lambda D\Sigma_{q}-\log\Sigma_{q})+\mu^T_{q}\Sigma_{p(z)}\mu_{q}]\end{aligned}$$ Finally, we note one last component ($\mathcal{L}_{smooth}$) that is not a part of our proposed method but acts as a regularization term for non-generative models (which we use in our baseline comparisons). $\mathcal{L}_{smooth}$ is a gradient loss that ensures smoothness of the displacement field $\phi^{diff}$: $$\begin{aligned} \mathcal{L}_{smooth}(\phi^{diff};\theta)=\sum_\Omega ||\nabla\phi^{diff}(g)||^2\end{aligned}$$ where we approximate $\frac{\partial\phi^{diff}(g)}{\partial x_i}\approx\phi^{diff}(g+e_i)-\phi^{diff}(g)$ with $e_{1,2,3}$ forming the natural basis for a 3D image. Finally, we summarize hyperparameter values used in our experiments in Table 2.
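As a concrete companion to the scaling-and-squaring recurrence described in the Diffeomorphic Transforms section above, here is a minimal 2D NumPy/SciPy sketch that integrates a stationary velocity field into the displacement of $\phi^{(1)}$; the toy grid size, the smooth sinusoidal velocity field, the linear interpolation used for composition, and the choice of $t=8$ steps are illustrative assumptions.

```python
# Scaling and squaring: start from phi^(1/2^t)(g) = g + z(g)/2^t and square t times.
# Fields are represented by their displacements u, with phi(g) = g + u(g).
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u):
    """Displacement of phi o phi, given the displacement u of phi."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in u.shape[1:]], indexing="ij"))
    coords = grid + u
    # (phi o phi)(g) = phi(g) + u(phi(g))  =>  u'(g) = u(g) + u(g + u(g))
    u_at_phi = np.stack([map_coordinates(u[d], coords, order=1, mode="nearest")
                         for d in range(u.shape[0])])
    return u + u_at_phi

def scaling_and_squaring(z, t=8):
    u = z / 2.0 ** t              # phi^(1/2^t)(g) = g + z(g)/2^t
    for _ in range(t):            # square t times to reach phi^(1)
        u = compose(u)
    return u

X, Y = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
z = np.stack([2.0 * np.sin(Y / 6.0), 2.0 * np.cos(X / 6.0)])   # smooth toy velocity field
u1 = scaling_and_squaring(z, t=8)
print(u1.shape, float(np.abs(u1).max()))   # displacement field of phi^(1)
```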
--- abstract: 'Neutrino masses and mixing are generated in a supersymmetric standard model when R-parity is violated in bilinear mass terms. The mixing matrix among the neutrinos takes a restrictive form if the lepton flavor universality holds in the R-parity violating soft masses. It turns out that only the small angle MSW solution to the solar neutrino problem is consistent with the result of the CHOOZ experiment and the atmospheric neutrino data.' address: | Department of Physics, Tohoku University\ Sendai 980-8578, Japan author: - 'Fumihiro Takayama[^1] and Masahiro Yamaguchi[^2]' date: May 2000 title: 'Neutrino Masses and Mixing from Bilinear R-Parity Violation [^3] ' --- Introduction ============ R-parity violation in a supersymmetric standard model provides an intriguing mechanism to generate neutrino masses. The R-parity is a $Z_2$ discrete parity which distinguishes (R-parity odd) superparticles from (R-parity even) ordinary particles. Though one often assumes the R-parity conservation in model building, it can break without conflicting phenomenological problems such as very fast proton decays. If the R-parity violating terms also violate lepton number conservation, the neutrinos acquire masses either at tree level or at loop levels [@TY-HallSuzuki]. One of the appealing points of this scenario is that it does not require existence of exotic particles such as heavy right-handed neutrinos apart from the superpartners which are already included in the minimal supersymmetric standard model. The R-parity violation may also give novel collider signatures. If the R-parity were conserved, superparticles would be produced in pairs, and also the lightest superparticle (LSP) would be stable and escape detection, resulting in a missing energy as a supersymmetry signal. If the R-parity is broken, on the other hand, the lightest superparticle may decay to ordinary particles inside a detector. Detailed study of the final states may reveal properties of the R-violating interaction. Thus in this scenario, useful information on the neutrino masses may also be inferred from collider experiments. Here we would like to consider the case of bilinear R-parity violation in which R-parity is broken only in bilinear mass terms. This is particularly interesting because it can be embedded into a grand unified theory (GUT) where quarks and leptons belong to one and the same representation of a GUT group. We will argue that, when the lepton flavor universality holds in the soft supersymmetry breaking masses, the neutrino mixing matrix is in a special form which is parameterized only by two angles. Thus the resulting pattern of the neutrino oscillations should be restricted. In fact we find that when we combine the CHOOZ experiment result with the $\nu_{\mu}$-$\nu_{\tau}$ oscillation solution to the atmospheric neutrino, the only allowed solar neutrino solution is the small angle MSW [@TY-TakayamaYamaguchi]. The results presented here are essentially given in Ref. [@TY-TakayamaYamaguchi] already, but we slightly generalize the previous ones. Namely here we discuss the case where the lepton flavor universality among soft supersymmetry breaking masses is assumed, whereas in the previous paper [@TY-TakayamaYamaguchi] the lepton-Higgs universality was considered. See Ref. [@TY-TakayamaYamaguchi] and references therein for more details. Bilinear R-parity Violation =========================== We first explain the model we are considering. 
The particle contents of the model are those of the minimal supersymmetric standard model (MSSM). We shall assume R-parity breaking bilinear terms in superpotential $$\begin{aligned} W = \mu H_D H_U +\mu_i L_i H_U +Y^L_i L_i H_D E^c_i +Y^D_i Q_iH_D D^c_i +Y^U_{ij} Q_i H_U U^c_j. \label{eq:superpotential} \end{aligned}$$ Here $H_D$, $H_U$ are two Higgs doublets, $L_i$ a $SU(2)_L$ doublet lepton, $E^c_i$ is a singlet lepton, $Q_i$ a doublet quark, $U^c_i$, $D^c_i$ a singlet quark of up and down type, respectively. Suffices $i, j$ stand for generations. The soft SUSY breaking terms in the scalar potential are $$\begin{aligned} V^{\mathrm{soft}}& =&B H_D H_U+B_i \tilde{L}_i H_U +m_{H_D}^2H_DH_D^{\dagger}+m_{H_U}^2H_UH_U^{\dagger} \nonumber \\ & & +m_{HL_i}^2\tilde{L}_iH_D^{\dagger}+m_{L_{ij}}^2 \tilde{L}_i\tilde{L}_j^{\dagger} +\cdots.\end{aligned}$$ where we have written only bilinear terms explicitly. Here we assume the following lepton flavor universality in the soft masses: $$\begin{aligned} B_i \propto \mu_i,~ m^2_{L_{ij}} \propto \delta_{ij},~ m_{HL_i}^2 \propto \mu_i \label{eq:lH-universality}.\end{aligned}$$ This universality suffers from radiative corrections and we assume that Eq. (\[eq:lH-universality\]) holds at an energy scale where these soft masses are given as the boundary conditions of the renormalization group equations. In this model, the R-parity violating terms are parameterized by $$\begin{aligned} s_3&\equiv&\sin \theta_3= \frac{\sqrt{\mu_1^2+\mu_2^2+\mu_3^2}}{\sqrt{\mu_1^2+\mu_2^2+\mu_3^2+\mu^2}}, \nonumber\\ s_2&\equiv&\sin \theta_2= \frac{\sqrt{\mu_1^2+\mu_2^2}}{\sqrt{\mu_1^2+\mu_2^2+\mu_3^2}}, \nonumber\\ s_1&\equiv&\sin \theta_1=\frac{\mu_1}{\sqrt{ \mu_1^2+\mu_2^2}}\end{aligned}$$ Here, for simplicity, we have taken $\mu$ and $\mu_i$ to be real. $s_3$ represents the magnitude of the R-parity violation, while the other two parameters characterize the mixing of the neutrinos. MNS Mixing Matrix ================= Let us now compute the neutrino masses and their mixing. To do this, it is convenient to use the Lagrangian whose renormalization point is at the electroweak scale. This may be obtained by the use of the renormalization group. Using this technique it is easy to see that one combination of the sneutrino fields $s_1 s_2 \tilde{\nu}_{e}+c_1s_2 \tilde{\nu}_{\mu} +c_2 \tilde{\nu}_{\tau}$ dominantly develops a non-vanishing vacuum expectation value (VEV). The VEV induces a mixing between the neutrino and neutralinos, generating a mass for the neutrino at the tree level. The neutinos also acquire masses at the one-loop level. An analysis of Ref [@TY-TakayamaYamaguchi] shows that the mixing matrix of the neutrino sector, the MNS matrix [@TY-MNS], becomes $$U_{i \alpha}= \left( \begin{array}{@{\,}ccc@{\,}} U_{\tau 3} & U_{\tau 2} & U_{\tau 1} \\ U_{\mu 3} & U_{\mu 2} & U_{\mu 1} \\ U_{e 3} & U_{e 2} & U_{e 1} \\ \end{array} \right) = \left( \begin{array}{@{\,}ccc@{\,}} c_{\theta} & -s_{\theta} & 0 \\ c_1s_{\theta} & c_1c_{\theta} & -s_1 \\ s_1s_{\theta} & s_1c_{\theta} & c_1 \\ \end{array} \right),$$ where $\theta$ is approximately equal to $\theta_2$ with some small correction. Here $i$ and $\alpha$ denote the weak current basis and the mass eigen basis, respectively. The MNS matrix obtained here contains only two angles, while a general $3\times 3$ rotation matrix will have 3 angles. This is essential in the following analysis. Results ======= We are now at the position to give our results. 
Here we assume the hierarchical mass structure, namely $m_3 \gg m_2 \gg m_1$ so that $\Delta m_{32}^2 \simeq \Delta m_{31}^2 \gg \Delta m_{21}^2$, which are naturally realized in our model. The atmospheric neutrino can be explained by $\nu_{\tau}$-$\nu_{\mu}$ oscillation. The transition probability is proportional to $4|U_{\mu3}|^2|U_{\tau3}|^2=4c_1^2s_{\theta}^2c_{\theta}^2$ and it must be close to unity to accord with the superKamiokande data [@TY-atm]. On the other hand, the CHOOZ experiment [@TY-CHOOZ] gives a bound on $4|U_{e3}|^2=4s_1^2 s_{\theta}^2$ to be smaller than 0.2. These two constraints imply that the angle $s_{1}$ must be small. Therefore the solar neutrino [@TY-solar] must be explained by the small angle MSW solution, since $\nu_{\mu}$-$\nu_{e}$ oscillation involves the small $s_{1}$ in its transition probability. In fact the large angle solutions require a large $s_1$, in contradition with the other experimental results. This non-trivial relation comes from the fact that the MNS matrix is characterized by the two angles. If one relaxed the lepton flavor universality among the soft masses, one would get a more general mixing matrix. However, one should carefully choose the soft masses not to conflict with the severe bounds on the lepton flavor violating processes. Next we would like to discuss the neutrino masses. The atmospheric neutrino requires $\Delta m_{\rm{atm}}^2 \simeq (2-6) \times 10^{-3} \mbox{eV}^2$ and the small angle MSW to the solar neutrino indicates $\Delta m^2_{\rm{SMSW}} \simeq (0.4-1)\times 10^{-5} \mbox{eV}^2$. Since in our scenario the heaviest neutrino mass is obtained at the tree level while the next one is generated at the one loop, the neutrino masses tend to have large hierarchy. The less hierarchical structure suggested by the experiments requires a mild fine tuning of the tree level VEV of the neutrino to suppress the tree-level mass. This can easily be achieved in many ways, one of which is the universality between the leptons and the Higgs in the soft masses and the use of the alignment. Conclusions =========== To summarize, we have considered the case of the bilinear R-parity violation with lepton flavor universality among the soft supersymmetry breaking masses. This generates the neutrino masses and mixing. The mixing matrix of the neutrinos has a very special pattern. This leads us to conclude that the large mixing angle solutions to the solar neutrino problem are ruled out when the CHOOZ result and the atmospheric neutrino data are combined together. Furthermore the relatively less hierarchical structure of the neutrino masses in this case are obtained if the soft SUSY breaking masses are suitably tuned to give small VEV for sneutrinos. It is interesting to mention that neutrino oscillation experiments, [*e.g.*]{} SuperKamiokande, SNO[@TY-SNO], and KamLAND[@TY-KamLAND], as well as collider experiments in future will provide (critical) tests to our scenario. [99]{} L.J. Hall and M. Suzuki, Nucl. Phys. B231 (1984 )419. F. Takayama and M. Yamaguchi, Phys. Lett. B476 (2000) 116. Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys. 28 (1962) 870. Super-Kamiokande Collaboration, Y. Fukuda et al, Phys. Lett. B436 (1998) 133; Phys. Rev. Lett. 81 (1998) 1158; Phys. Rev. Lett. 81 (1998) 1562. CHOOZ Collaboration, M. Apollonio et al, Phys. Lett. B466 (1999) 415. See [*e.g.*]{} Y. Suzuki, talk given at International Symposium on Lepton and Photon Interactions at High Energies (Lepton-Photon’99), Stanford University, (August, 1999). 
SNO experiment; http://snodaq.phy.queensu.ca/SNO/sno.html KamLAND experiment;\ http://www.awa.tohoku.ac.jp/html/KamLAND/index.html [^1]: e-mail: [email protected] [^2]: e-mail: [email protected] [^3]: To appear in conference proceedings “Neutrino Oscillations and their Origin”, Fuji-Yoshida, Japan, Feb. 11-13, 2000
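To make the exclusion argument above explicit, the following short numerical scan evaluates the relevant oscillation amplitudes from the two-angle MNS matrix derived earlier. The requirement that the atmospheric amplitude exceed $0.9$ is an illustrative stand-in for "close to unity", while the bound $4|U_{e3}|^{2}<0.2$ is the CHOOZ constraint quoted in the text.

```python
# Scan (s1, theta) using U_e3 = s1*s_th, U_mu3 = c1*s_th, U_tau3 = c_th,
# U_e1 = c1, U_e2 = s1*c_th, and check how much nu_e mixing survives the
# atmospheric and CHOOZ constraints.
import numpy as np

s1 = np.linspace(0.0, 1.0, 401)[:, None]        # sin(theta_1)
th = np.linspace(0.0, np.pi / 2, 401)[None, :]  # mixing angle theta
c1, s_th, c_th = np.sqrt(1 - s1**2), np.sin(th), np.cos(th)

atm   = 4 * (c1 * s_th)**2 * c_th**2            # nu_mu - nu_tau amplitude, must be ~ 1
chooz = 4 * (s1 * s_th)**2                      # bounded above by 0.2 (CHOOZ)
solar = 4 * c1**2 * (s1 * c_th)**2              # 4|U_e1|^2|U_e2|^2, the nu_e mixing strength

allowed = (atm > 0.9) & (chooz < 0.2)
print(solar[allowed].max())   # roughly 0.18: far from the near-maximal mixing of the large angle solutions
```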
--- abstract: 'The critical behavior for intermittency is studied in two coupled one-dimensional (1D) maps. We find two fixed maps of an approximate renormalization operator in the space of coupled maps. Each fixed map has a common relavant eigenvaule associated with the scaling of the control parameter of the uncoupled one-dimensional map. However, the relevant “coupling eigenvalue” associated with coupling perturbation varies depending on the fixed maps. These renormalization results are also confirmed for a linearly-coupled case.' address: | Department of Physics\ Kangwon National University\ Chunchon, Kangwon-Do 200-701, Korea author: - 'Sang-Yoon Kim' title: Renormalization analysis of intermittency in two coupled maps --- A route to chaos via intermittency in the one-dimensional (1D) map is associated with a saddle-node bifurcation [@MP]. Intermittency just preceding a saddle-node bifurcation to a periodic attractor is characterized by the occurrence of intermittent alternations between regular behavior and chaotic behavior. Scaling relations for the average duration of regular behavior in the presence of noise have been first established [@EH] by considering a Langevin equation describing the map near the intermittency threshold and using Fokker-Plank techniques. The same scaling results for intermittency have been later found [@HH] by employing the same renormalization-group equation [@Feigenbaum] for period doubling with a mere change of boundary conditions appropriate to a saddle-node bifurcation. Recently, universal scaling results of period doubling for the 1D map have been generalized to the coupled 1D maps [@Kapral; @Kuznet; @Aranson; @Kim1; @Kim2], which are used to simulate spatially extended systems with effectively many degrees of freedom [@Kaneko]. It has been found that the critical scaling behaviors of period doubling for the coupled 1D maps are much richer than those for the uncoupled 1D map [@Kim1; @Kim2]. These results for the abstract system of the coupled 1D maps are also confirmed in the real system of the coupled oscillators [@Kim3]. Similarly, the scaling results of the higher period $p$-tuplings $(p=,3,4,...)$ in the 1D map are also generalized to the coupled 1D maps [@Kim4]. Here we are interested in another route to chaos via intermittency in coupled 1D maps. Using a renormalization method, we extend the scaling results of intermittency for the 1D map to two coupled 1D maps. Consider a map $T$ consisting of two identical 1D maps coupled symmetrically, $$T: \left \{ \begin{array}{l} x_{n+1}=f(x_n)+g(x_n,y_n), \\ y_{n+1}=f(y_n)+g(y_n,x_n), \end{array} \right. \label{eq:TCM}$$ where the subscript $n$ denotes a discrete time, $f(x)$ is a 1D map with a quadratic maximum, and $g(x,y)$ is a coupling function obeying a condition, $$g(x,x)=0\;\;{\rm for\;\;any}\;\;x. \label{eq:CC}$$ The two-coupled 1D map (\[eq:TCM\]) is called a symmetric map because it has an exchange symmetry such that $${\sigma}^{-1}T{\sigma}({\bf z})=T({\bf z})\;\; {\rm for\;\;all\;\;}{\bf z}, \label{eq:ES}$$ where ${\bf z}=(x,y)$, $\sigma$ is an exchange operator acting on ${\bf z}$ such that $\sigma {\bf z}=(y,x)$, and ${\sigma}^{-1}$ is its inverse. The set of all fixed points of $\sigma$ forms a synchronization line $y=x$ in the state space. It follows from Eq. (\[eq:ES\]) that the exchange operator $\sigma$ commutes with the symmetric map $T$, i.e., $\sigma T = T \sigma$. Thus the synchronization line becomes invariant under $T$. 
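Before turning to the renormalization analysis, a minimal numerical sketch of the map $T$ in Eq. (\[eq:TCM\]) may help fix ideas. Here the local saddle-node normal form $f(x)=x+x^{2}+\epsilon$ and the linear coupling $g(x,y)=\frac{c}{2}(y-x)$, which satisfies the condition (\[eq:CC\]), are illustrative choices; only the laminar passage time through the narrow channel is measured, and reinjection is ignored.

```python
# Iterate the symmetrically coupled map x' = f(x) + g(x,y), y' = f(y) + g(y,x)
# just past the saddle-node bifurcation and count the laminar passage time,
# which grows roughly like pi/sqrt(eps).
import numpy as np

def f(x, eps):
    return x + x * x + eps            # 1D map just past the saddle-node bifurcation

def passage_time(eps, c=0.01, x0=-0.1, y0=-0.1001, nmax=10**6):
    x, y = x0, y0
    for n in range(nmax):
        if x > 0.1:                   # left the narrow (laminar) channel
            return n
        x, y = (f(x, eps) + 0.5 * c * (y - x),
                f(y, eps) + 0.5 * c * (x - y))
    return nmax

for eps in (1e-4, 1e-5, 1e-6):
    print(eps, passage_time(eps))     # roughly pi/sqrt(eps) iterations each
```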
An orbit is called an (in-phase) synchronous orbit if it lies on the invariant synchronization line, i.e., it satisfies $$x_n=y_n\;\;{\rm for\;\;all\;\;}n. \label{IO}$$ Otherwise, it is called an (out-of-phase) asynchronous orbit. Let us introduce new coordinates $X$ and $Y$, $$X={\frac{{x+y} }{2}},\;\;\;Y={\frac{{x-y} }{2}}. \label{eq:NC}$$ Then the map (\[eq:TCM\]) becomes $$\begin{aligned} X_{n+1} &=& F(X_n,Y_n) \nonumber \\ &=& {\frac{1 }{2}}\; [f(X_n+Y_n)+f(X_n-Y_n)] \nonumber \\ &&+{\frac{1 }{2}}\; [g(X_n+Y_n,X_n-Y_n)+g(X_n-Y_n,X_n+Y_n)], \nonumber \\ && \label{eq:NTCM} \\ Y_{n+1} &=& G(X_n,Y_n) \nonumber \\ &=& {\frac{1 }{2}}\; [f(X_n+Y_n)-f(X_n-Y_n)] \nonumber \\ &&+{\frac{1 }{2}}\; [g(X_n+Y_n,X_n-Y_n)-g(X_n-Y_n,X_n+Y_n)]. \nonumber\end{aligned}$$ This map is invariant under the reflection $Y \rightarrow -Y$, and hence the invariant synchronization line becomes $Y=0$. Then the synchronous orbit of the old map (\[eq:TCM\]) becomes the orbit of this new map with $Y=0$. Furthermore, the $X$-coordinate of the synchronous orbit satisfies the uncoupled 1D map, i.e., $X_{n+1}=f(X_n)$, because the coupling function $g$ obeys the condition (\[eq:CC\]). Stability of a synchronous orbit of period $p$ is determined from the Jacobian matrix $M$ of $T^p$, which is given by the $p$-fold product of the linearized map $DT$ of the map (\[eq:NTCM\]) along the orbit $$\begin{aligned} M &=& {\prod_{n=1}^{p}} DT(X_n,0) \nonumber \\ &=& {\prod_{n=1}^{p}} \left ( \begin{array}{cc} f^{\prime}(X_n) & 0 \\ 0 & f^{\prime}(X_n)-2G(X_n) \end{array} \right ), \label{eq:JM}\end{aligned}$$ where $f^{\prime}(X)=df(X)/dX$ and $G(X)= \partial g(X,Y)/ \partial Y |_{Y=X}$. The eigenvalues of $M$, called the Floquet (stability) multipliers of the orbit, are $$\lambda_1 = {\prod_{n=1}^{p}} f^{\prime}(X_n),\;\; \lambda_2 = {\prod_{n=1}^{p}} [f^{\prime}(X_n) - 2G(X_n)]. \label{eq:SM}$$ Note that $\lambda_1$ is just the Floquet multiplier for the case of the uncoupled 1D map and the coupling affects only $\lambda_2$. Consider a synchronous saddle-node bifurcation to a synchronous periodic orbit. The synchronous periodic orbit is stable when both Floquet multipliers lie inside the unit circle, i.e., $|\lambda_j| < 1$ for $j=1$ and $2$. Thus its stable region in the parameter plane is bounded by four bifurcation lines, i.e., those curves determined by the equations $\lambda_j=\pm1$ $(j=1,2)$. When a Floquet multiplier $\lambda_j$ increases through $1$, the stable synchronous periodic orbit loses its stability via saddle-node or pitchfork bifurcation. On the other hand, when a Floquet multiplier $\lambda_j$ decreases through $-1$, it becomes unstable via period-doubling bifurcation. (For more details on bifurcations, refer to Ref. [@Guckenheimer].) Here we are interested in intermittency just preceding the synchronous saddle-node bifurcation. Employing an approximate renormalization operator [@Kim2; @Greene; @Mao; @Lahiri] which includes a truncation, we generalize the 1D scaling results for intermittency to the case of two coupled 1D maps. We thus find two fixed maps of the approximate renormalization operator. They have a common relevant eigenvalue associated with the scaling of the control parameter of the uncoupled 1D map. However, the relevant “coupling eigenvalue” associated with coupling perturbation varies depending on the fixed maps. Truncating the map (\[eq:NTCM\]) at its quadratic terms, we have $$T_{{\bf P}}: \left \{ \begin{array}{l} X_{n+1}=A+BX_n + C X_n^2 + F Y_n^2 \\ Y_{n+1}=D Y_n + E X_n Y_n \end{array} \right.
, \label{eq:TM}$$ which is a six-parameter family of coupled maps. Other terms do not appear because $F(X,Y)$ and $G(X,Y)$ in Eq. (\[eq:NTCM\]) are even and odd in $Y$, respectively. Here ${\bf P}$ represents the six parameters, i.e., ${\bf P} = (A,B,C,D,E,F)$. The construction of Eq. (\[eq:TM\]) corresponds to a truncation of the infinite dimensional space of coupled maps to a six-dimensional space. The parameters $A,\;B,\;C,\;D,\;E,\;$ and $F$ can be regarded as the coordinates of the truncated space. We look for fixed points of the renormalization operator ${\cal R}$ in the truncated six-dimensional space of coupled maps, $${\cal R} (T) = \Lambda T^2 \Lambda^{-1}. \label{eq:RO}$$ Here the rescaling operator $\Lambda$ is given by $$\Lambda = \left ( \begin{array}{cc} \alpha & 0 \\ 0 & \alpha \end{array} \right ), \label{eq:SO}$$ where $\alpha$ is a rescaling factor. The operation ${\cal R}$ in the truncated space can be represented by a transformation of parameters, i.e., a map from ${\bf P} \equiv (A,B,C,D,E,F)$ to ${\bf P^{\prime}} \equiv (A^{\prime},B^{\prime},C^{\prime},D^{\prime},E^{\prime},F^{\prime}),$ \[eq:PT\] $$\begin{aligned} A^{\prime}&=& \alpha A (1+B+AC), \label{eq:RGTA} \\ B^{\prime}&=& B (B+2AC), \label{eq:RGTB} \\ C^{\prime}&=& {\frac{C }{\alpha}} (B+B^2+2AC), \label{eq:RGTC} \\ D^{\prime}&=& D (D+AE), \label{eq:RGTD} \\ E^{\prime}&=& {\frac{E }{\alpha}} (BD+D+AE), \label{eq:RGTE} \\ F^{\prime}&=& {\frac{F }{\alpha}} (2AC+B+D^2) . \label{eq:RGTF}\end{aligned}$$ The fixed point ${\bf P}^* =(A^*,B^*,C^*,D^*,E^*,F^*)$ of this map can be determined by solving ${\bf P}^{\prime}={\bf P}$. We thus find two solutions associated with a saddle-node bifurcation, as will be seen below. The map (\[eq:TM\]) with a solution ${\bf P^*}$ $(T_{{\bf P^*}})$ is the fixed map of the renormalization transformation ${\cal R}$; for brevity $T_{{\bf P^*}}$ will be denoted as $T^*$. For a saddle-node bifurcation at $x=0$, the 1D map $f(x)$ satisfies $$f(0)=0,\;\;\;f^{\prime}(0)=1. \label{eq:bc}$$ Hence the function $F(X,Y)$ in Eq. (\[eq:NTCM\]) obeys $$\left. F(0,0)=0,\;\;\;{\frac{{\partial F} }{{\partial X}}} \right|_{(0,0)}=1. \label{eq:NBC}$$ We first note that Eqs. (\[eq:RGTA\])-(\[eq:RGTC\]) are only for $A,\;B,\;C,$ and $\alpha$. We find one solution for $A^*,\;B^*,\;C^*,$ and $\alpha$ satisfying the conditions (\[eq:NBC\]), $$\alpha=2,\;\;A^*=0,\;\;B^*=1,\;\;C^*:{\rm arbitrary\;number}.$$ Substituting the values of $A^*,\;B^*$ and $\alpha$ into Eqs. (\[eq:RGTD\])-(\[eq:RGTF\]), we have two solutions for $D^*, \;E^*,$ and $F^*$, \[eq:FP\] $$\begin{aligned} D^*&=&0,\;\;E^*=0,\;\;F^*=0, \\ D^*&=&1,\;\;E^*:{\rm arbitrary\;number},\;\;F^*: {\rm arbitrary\;number}.\end{aligned}$$ These two solutions are associated with intermittency in the coupled 1D maps, as will be seen below. Hereafter we will call each map from the top as the $I$ and $E$ map, respectively, as listed in Table \[FP\]. Consider an infinitesimal perturbation $\epsilon \, \delta {\bf P}$ to a fixed point ${\bf P}^*$ of the transformation of parameters (\[eq:RGTA\])-(\[eq:RGTF\]). Linearizing the transformation at ${\bf P}^*$, we obtain the equation for the evolution of $\delta {\bf P}$, $$\delta {\bf P}^{\prime}= J \delta {\bf P},$$ where $J$ is the Jacobian matrix of the transformation at ${\bf P}^*$. Since the $6 \times 6$ Jacobian matrix $J$ decomposes into smaller blocks, one can easily obtain its eigenvalues. Two of them are $$\left. 
\lambda_1 = {\frac{{\partial C^{\prime}} }{{\partial C}}} \right|_{{\bf P^*}} =1, \;\;\; \left. \lambda_2 = {\frac{{\partial F^{\prime}} }{{\partial F}}} \right|_{{\bf P^*}} = {\frac{{1+D^{*2}}} {2}}.$$ Here $\lambda_1$ is an eigenvalue associated with scale change in $X$, and hence $C^*$ is arbitrary. The eigenvalue $\lambda_2$ is also associated with scale change in $Y$ in the case $D^*=1$; this case corresponds to the $E$ map. Thus $F^*$ for this case becomes arbitrary. However, in the case $D^*=0$ corresponding to the $I$ map, $\lambda_2$ becomes an irrelevant eigenvalue. Note that the $I$ map is invariant under a scale change in $Y$ because $F^*=0$. The remaining four eigenvalues are those of the following $2 \times 2$ blocks, $$\begin{aligned} \left. M_1 = {\frac{ {\partial (A^{\prime},B^{\prime})} } {{\partial (A,B)} }} \right|_{{\bf P^*}} = \left( \begin{array}{cc} 4 & 0 \\ 2\,C^* & 2 \end{array} \right), \\ \left. M_2 = {\frac{ {\partial (D^{\prime},E^{\prime})} } {{\partial (D,E)} }} \right|_{{\bf P^*}} = \left( \begin{array}{cc} 2\,D^* & 0 \\ E^* & D^* \end{array} \right).\end{aligned}$$ The two eigenvalues of $M_i$ $(i=1,2)$ are called $\delta_i$ and $\delta_i^{\prime}$, and they are listed in Table \[EV\]. The two $I$ and $E$ maps have common eigenvalues of $M_1$. They are $\delta_1=4$ and $\delta_1^{\prime}=2$, which are just the relevant eigenvalues [@HH] for the case of uncoupled 1D maps. Here the largest relevant eigenvalue $\delta_1$ is associated with scaling of the control parameter of the 1D map near the intermittency threshold. The eigenvalues $\delta_2$ and $\delta_2^{\prime}$ of $M_2$ are associated with coupling perturbations. These eigenvalues will be referred to as “coupling eigenvalues” (CE’s). The submatrix $M_2$ for the $I$ map becomes a null matrix, and hence there exist no CE’s. On the other hand, the $E$ map has a relevant CE $\delta_2=2$ and a marginal CE $\delta_2^{\prime}=1$. Here the relevant CE $\delta_2$ is associated with scaling of the coupling parameter, while the marginal one $\delta_2^{\prime}$ is associated with the arbitrary constant $E^*$. We also obtain the Floquet multipliers $\lambda_1^*$ and $\lambda_2^*$ of the fixed point $(0,0)$ of the fixed map $T^*$ of the renormalization transformation ${\cal R}$. They are given by $$\lambda_1^*=1,\;\;\;\lambda_2^*=D^*. \label{eq:CSM}$$ The $I$ and $E$ maps have a common Floquet multiplier $\lambda_1^*$, which is just that for the 1D case. However, the second Floquet multiplier $\lambda_2^*$ affected by coupling depends on the fixed maps; $\lambda_2^*=0$ $(1)$ for the $I$ $(E)$ map. In order to confirm the above renormalization results, we also study the intermittency for the linearly-coupled case. The critical set (set of critical points) for the intermittency consists of critical line segments. It is found that the $I$ map with no relevant CE’s governs the critical behavior at interior points of each critical line segment, while the $E$ map with one relevant CE $\delta_2$ $(=2)$ governs the critical behavior at both ends. We choose $f(x)=1-a x^2$ as the uncoupled 1D map in Eq. (\[eq:TCM\]) and consider a linear coupling case $g(x,y)=c(y-x)$. Here $c$ is a coupling parameter. Three critical line segments are found on a synchronous saddle-node bifurcation line $a=a_c$ $(=1.75)$, above which a pair of synchronous orbits with period 3 appears. The critical behaviors near the three critical line segments are the same.
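The fixed points and eigenvalues quoted above are easy to reproduce numerically. The sketch below (our own illustrative check, not part of the paper) implements the parameter transformation (\[eq:RGTA\])–(\[eq:RGTF\]) with $\alpha=2$, verifies that the $I$ and $E$ maps are fixed points, and obtains the eigenvalues of the $6\times6$ Jacobian by finite differences; the arbitrary constants $C^*$, $E^*$, $F^*$ are set to sample values.

```python
import numpy as np

def rg_map(P, alpha=2.0):
    """Parameter transformation P -> P' of Eqs. (RGTA)-(RGTF); P = (A, B, C, D, E, F)."""
    A, B, C, D, E, F = P
    return np.array([
        alpha * A * (1 + B + A * C),
        B * (B + 2 * A * C),
        (C / alpha) * (B + B**2 + 2 * A * C),
        D * (D + A * E),
        (E / alpha) * (B * D + D + A * E),
        (F / alpha) * (2 * A * C + B + D**2)])

def jacobian(P, eps=1e-6):
    """Finite-difference Jacobian of the parameter map at P."""
    J = np.zeros((6, 6))
    for j in range(6):
        dP = np.zeros(6); dP[j] = eps
        J[:, j] = (rg_map(P + dP) - rg_map(P - dP)) / (2 * eps)
    return J

# I map: D* = E* = F* = 0;  E map: D* = 1 with E*, F* arbitrary (sample values used here).
fixed_points = {"I map": np.array([0.0, 1.0, 0.7, 0.0, 0.0, 0.0]),
                "E map": np.array([0.0, 1.0, 0.7, 1.0, 0.4, 0.3])}

for name, P in fixed_points.items():
    residual = np.linalg.norm(rg_map(P) - P)                       # should vanish
    eigs = np.sort(np.linalg.eigvals(jacobian(P)).real)[::-1]
    print(f"{name}: |P' - P| = {residual:.1e},  eigenvalues = {np.round(eigs, 6)}")
# Both maps show the 1D eigenvalues 4 and 2; the E map additionally carries the
# coupling eigenvalues 2 and 1, which are absent (replaced by zeros) for the I map.
```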
As an example, consider a critical line segment including the zero-coupling point $c=0$ as one end point. Figure \[PD\] shows a phase diagram near this critical line segment denoted by a solid line. This diagram is obtained from the calculation of two Lyapunov exponents. In the case of a synchronous orbit, its Lyapunov exponents are given by $$\sigma_\| (a) = {\lim_{m \rightarrow \infty}}\, {\frac{1 }{m}} \sum_{n=0}^{m-1} \ln|f^{\prime}(x_n)|,\;\; \sigma_\bot (a,c) = {\lim_{m \rightarrow \infty}}\, {\frac{1 }{m}} \sum_{n=0}^{m-1} \ln|f^{\prime}(x_n)-2c|. \label{eq:LE}$$ Here $\sigma_\|$ $(\sigma_\bot)$ denotes the mean exponential rate of divergence of nearby orbits along (across) the synchronization line $y=x$. Hereafter, $\sigma_\|$ and $\sigma_\bot$ will be referred to as tangential and transversal Lyapunov exponents, respectively. Note also that $\sigma_\|$ is just the Lyapunov exponent for the 1D case, and the coupling affects only $\sigma_\bot$. The data points on the $\sigma_\bot=0$ curve are denoted by solid circles in Fig. \[PD\]. A synchronous orbit on the synchronization line $y=x$ becomes a synchronous attractor with $\sigma_\bot <0$ inside the $\sigma_\bot=0$ curve. The type of this synchronous attractor is determined according to the sign of $\sigma_\|$. A synchronous period-3 orbit with $\sigma_\| < 0$ becomes a synchronous periodic attractor above the critical line segment, while there exists a synchronous chaotic attractor with $\sigma_\| >0$ below the critical line segment. These periodic and chaotic regions are denoted by P and C in the diagram, respectively. There exists a synchronous period-3 attractor with $\sigma_\| =0$ on the critical line segment between these two regions. The motion on the synchronous chaotic attractor in the region C just below the critical line segment is characterized by the occurrence of intermittent alternations between regular behavior and chaotic behavior on the synchronization line. This is just the intermittency occurring in the uncoupled 1D map, because the motion on the synchronization line is the same as that for the uncoupled 1D case. Thus, a transition from a regular behavior to an intermittent chaotic behavior, which is essentially the same as that for the 1D case, occurs near the critical line segment joining two end points $c_l = -0.109045 \cdots$ and $c_r=0$ on the synchronous saddle-node bifurcation line $a=a_c(=1.75)$. Consider a “1D-like” intermittent transition to chaos near an interior point with $c_l < c < c_r$ of the critical line segment. We fix the value of $c$ at some interior point and vary the control parameter $\epsilon$ $(\equiv a_c -a)$. For $\epsilon <0$, there exists a synchronous period-3 attractor on the synchronization line. However, as $\epsilon$ is increased from zero, an intermittent synchronous chaotic attractor appears. Like the 1D case [@HH], the scaling relations of the mean duration $\bar l$ of regular behavior and the tangential Lyapunov exponent $\sigma_\|$ for an intermittent chaotic orbit on the synchronization line are obtained from the leading relevant eigenvalue $\delta_1$ $(=4)$ of the $I$ map, as will be seen below. We first note that the $I$ map is essentially a 1D map with zero Jacobian determinant (see Table \[FP\]). Since there exist no relevant CE's associated with coupling perturbation, it has only relevant eigenvalues $\delta_1$ and $\delta_1^{\prime}$ like the 1D case. The $I$ map is therefore associated with the critical behavior at interior points of the critical line segments.
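A direct numerical check of Eq. (\[eq:LE\]) is straightforward. The sketch below (illustrative parameter values, not the paper's data) computes the tangential and transversal Lyapunov exponents of a synchronous orbit for $f(x)=1-ax^2$, $g(x,y)=c(y-x)$, and also estimates the exponent of the scaling relation $\sigma_\|(\epsilon)\sim\epsilon^{\mu}$ discussed next; the fitted slope should come out close to $\mu=0.5$.

```python
# Illustrative check of Eq. (LE) and of the scaling sigma_par ~ eps^0.5; parameter
# values are our own choices, and narrow periodic windows may distort individual points.
import numpy as np
from math import log

def sync_lyapunov(a, c, n_iter=200000, n_transient=20000, x0=0.5):
    """Tangential and transversal Lyapunov exponents of a synchronous orbit; the orbit
    itself obeys the uncoupled 1D map f(x) = 1 - a*x^2, so only x needs to be iterated."""
    x = x0
    for _ in range(n_transient):
        x = 1.0 - a * x * x
    s_par = s_perp = 0.0
    for _ in range(n_iter):
        x = 1.0 - a * x * x
        dfx = -2.0 * a * x
        s_par += log(abs(dfx))
        s_perp += log(abs(dfx - 2.0 * c))
    return s_par / n_iter, s_perp / n_iter

a_c = 1.75
print("a=1.76 (above a_c), c=-0.05:", sync_lyapunov(1.76, -0.05))  # expect sigma_par < 0 (periodic)
print("a=1.74 (below a_c), c=-0.05:", sync_lyapunov(1.74, -0.05))  # expect sigma_par > 0 (intermittent)

# Scaling of the tangential exponent just below the threshold:
eps = np.array([0.002, 0.005, 0.01, 0.02, 0.04])
sig = np.array([sync_lyapunov(a_c - e, -0.05)[0] for e in eps])
slope = np.polyfit(np.log(eps), np.log(sig), 1)[0]
print("fitted exponent mu ~", round(slope, 2))   # expected to be close to 0.5
```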
A map with non-zero $\epsilon$ near a critical interior point is transformed to a new map of the same form, but with a new parameter $\epsilon ^{\prime}$ under a renormalization transformation ${\cal R}$. Here the control parameter scales as $$\epsilon ^{\prime}= \delta_1 \, \epsilon \,=\, 2^2 \epsilon.$$ Then the mean duration $\bar l$ and the tangential Lyapunov exponent $\sigma_\|$ satisfy the homogeneity relations, $${\bar l} (\epsilon ^{\prime}) = {\frac{1 }{2}} {\bar l} (\epsilon), \;\;\; {\sigma_\|} (\epsilon ^{\prime}) = 2 {\sigma_\|}(\epsilon),$$ which lead to the scaling relations, $${\bar l} (\epsilon) \sim \epsilon ^{- \mu},\;\;\; {\sigma_\|} (\epsilon) \sim \epsilon ^{\mu}, \label{eq:SR}$$ with exponent $$\mu = {\log 2} / \log {\delta_1} = 0.5.$$ The above 1D-like intermittent transition to chaos ends at both ends of the critical line segment. We fix the value of the control parameter $a=a_c$ $(=1.75)$ and study the critical behavior near both ends $c_l$ and $c_r$ by varying the coupling parameter $c$. Inside the critical line segment $(c_l < c < c_r)$, a synchronous period-3 attractor with $\sigma_\bot <0$ exists on the synchronization line, and hence the coupling tends to synchronize the interacting systems. However, as the coupling parameter $c$ passes through both ends, the transversal Lyapunov exponent $\sigma_\bot$ of the synchronous periodic orbit grows continuously from zero, and hence the coupling leads to desynchronization of the interacting systems. The synchronous orbit of period 3 is therefore no longer an attractor outside the critical line segments, and a new asynchronous attractor appears. The critical behaviors near both ends are the same. As an example, consider the case of the zero-coupling point $c_r=0$. Figure \[CLexp\] shows the plot of $\sigma_\bot$ versus $c$ for $a=a_c$. Note that $\sigma_\bot$ increases linearly with respect to $c$. Hence a transition from a synchronous to an asynchronous state occurs at the zero-coupling end point. The scaling relation of $\sigma_\bot (c)$ for $a=a_c$ is obtained from the relevant CE $\delta_2$ $(=2)$ of the $E$ map as follows. Consider a map with non-zero $c$ near the zero-coupling point. It is then transformed to a map of the same form, but with a renormalized parameter $c^{\prime}$ under a renormalization transformation ${\cal R}$. Here the coupling parameter obeys a scaling law, $$c^{\prime}= \delta_2 c = 2 c.$$ Then the transversal Lyapunov exponent $\sigma_\bot$ satisfies the homogeneity relation, $$\sigma_\bot (c^{\prime}) = 2 \sigma_\bot (c).$$ This leads to the scaling relation, $$\sigma_\bot (c) \sim c^{\nu},$$ with exponent $$\nu = {\log 2} / {\log \delta_2} =1.$$ Like the case of the $I$ map, the scaling behavior of $\sigma_\| (\epsilon)$ for $c=c_l$ or $c_r$ is obtained from the relevant eigenvalue $\delta_1$ $(=4)$ of the $E$ map, and hence it also satisfies the scaling relation (\[eq:SR\]). The critical behaviors of both exponents $\sigma_\|$ and $\sigma_\bot$ near an end point are thus determined from two relevant eigenvalues $\delta_1$ and $\delta_2$ of the $E$ map. An extended version of this work including the results of a renormalization analysis without truncation, the results for the many-coupled cases and so on will be given elsewhere [@Kim5]. This work was supported by the Korea Research Foundation under Project No. 1997-001-D00099. P. Manneville and Y. Pomeau, Phys. Lett. A [**75**]{}, 1 (1979); Physica D [**1**]{}, 219 (1980); Y. Pomeau and P. Manneville, Commun. Math. Phys.
[**74**]{}, 189 (1980). J.-P. Eckmann, L. Thomas and P. Wittwer, J. Phys. A [**14**]{}, 3153 (1981); J. E. Hirsch, B. A. Huberman and D. J. Scalapino, Phys. Rev. A [**25**]{}, 519 (1982). J. E. Hirsch, M. Nauenberg, and D. J. Scalapino, Phys. Lett. A [**87**]{}, 391 (1982); B. Hu and J. Rudnick, Phys. Rev. Lett. [**48**]{}, 1645 (1982). M. J. Feigenbaum, J. Stat. Phys. [**19**]{}, 25 (1978); [**21**]{}, 669 (1979). I. Waller and R. Kapral, Phys. Rev. A [**30**]{}, 2047 (1984); R. Kapral, Phys. Rev. A [**31**]{}, 3868 (1985). S. P. Kuznetsov, Radiophys. Quantum Electron. [**28**]{}, 681 (1985); S. P. Kuznetsov and A. S. Pikovsky, Physica D [**19**]{}, 384 (1986); H. Kook, F. H. Ling, and G. Schmidt, Phys. Rev. A [**43**]{}, 2700 (1991). I. S. Aranson, A. V. Gaponov-Grekhov and M. I. Rabinovich, Physica D [**33**]{}, 1 (1988). S.-Y. Kim and H. Kook, Phys. Rev. A [**46**]{}, R4467 (1992); Phys. Rev. E [**48**]{}, 785 (1993); S.-Y. Kim, Phys. Rev. E [**49**]{}, 1745 (1994). S.-Y. Kim and H. Kook, Phys. Lett. A [**178**]{}, 258 (1993). K. Kaneko, in [*Theory and applications of coupled map lattices*]{}, edited by K. Kaneko (John Wiley & Sons, New York, 1992), p. 1, and references cited therein. S.-Y. Kim and K. Lee, Phys. Rev. E [**54**]{}, 1237 (1996). S.-Y. Kim, Phys. Rev. E [**52**]{}, 1206 (1995); Phys. Rev. E [**54**]{}, 3393 (1996). J. Guckenheimer and P. Holmes, [*Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields*]{} (Springer-Verlag, New York, 1983), Sec. 3.5. J. Greene, R. S. MacKay, F. Vivaldi and M. J. Feigenbaum, Physica D [**3**]{}, 468 (1981). J.-M. Mao and J. Greene, Phys. Rev. A [**35**]{}, 3911 (1987). A. Lahiri, Phys. Rev. A [**45**]{}, 757 (1992). S.-Y. Kim (unpublished).

----------- ---------- ------- ------- ----------- ------- ----------- -----------
fixed map   $\alpha$   $A^*$   $B^*$   $C^*$       $D^*$   $E^*$       $F^*$
$I$ map     2          0       1       arbitrary   0       0           0
$E$ map     2          0       1       arbitrary   1       arbitrary   arbitrary
----------- ---------- ------- ------- ----------- ------- ----------- -----------

  : Fixed point ${\bf P}^*$ of the renormalization transformation ${\cal R}$ and the rescaling factor $\alpha$. []{data-label="FP"}

----------- ------------ --------------------- ------------- ---------------------
fixed map   $\delta_1$   $\delta^{\prime}_1$   $\delta_2$    $\delta^{\prime}_2$
$I$ map     4            2                     nonexistent   nonexistent
$E$ map     4            2                     2             1
----------- ------------ --------------------- ------------- ---------------------

  : Some eigenvalues $\delta_1, \delta^{\prime}_1, \delta_2,$ and $\delta^{\prime}_2$ of a fixed map $T^*$ of the renormalization operator are shown.[]{data-label="EV"}
--- author: - 'Daniel A. Bobylev' - 'Daria A. Smirnova' - 'Maxim A. Gorlach' bibliography: - 'TopologicalLib.bib' title: | Photonic topological states mediated by staggered bianisotropy.\ Supplementary Materials ---   In these Supplementary Materials, we provide further details on our theoretical model and numerical simulations of a single disk scattering spectrum. Bloch Hamiltonian {#sec:Bloch} ================= We start our analysis here from Eq. (9) of the article main text and choose the unit cell to be inversion-symmetric as depicted in Fig. 1(b) in the main text. Additionally, we consider a fixed polarization identifying $\phi^{(\pm)}$ with $m_x\pm i\,p_y$. The periodic part of the wave function is defined as $\ket{u_k}=\left(\phi_1^{(+)},\phi_1^{(-)},\phi_2^{(+)},\phi_2^{(-)},\phi_3^{(+)},\phi_3^{(-)},\phi_4^{(+)},\phi_4^{(-)}\right)^T$. The resulting $8\times 8$ Bloch Hamiltonian reads: $$\label{BlochHamiltonian} \hat{H}(k)= \begin{pmatrix} \mu & 0 & 1 & 3 & 0 & 0 & e^{-ik} & 3\,e^{-ik}\\ 0 & -\mu & 3 & 1 & 0 & 0 & 3\,e^{-ik} & e^{-ik}\\ 1 & 3 & -\mu & 0 & 1 & 3 & 0 & 0\\ 3 & 1 & 0 & \mu &3 & 1 & 0 & 0\\ 0 & 0 & 1 & 3 & -\mu & 0 & 1 & 3\\ 0 & 0 & 3 & 1 & 0 & \mu & 3 & 1\\ e^{ik} & 3\,e^{ik} & 0 & 0 & 1 & 3 & \mu & 0\\ 3\,e^{ik} & e^{ik} & 0 & 0 & 3 & 1 & 0 & -\mu \end{pmatrix}\:.$$ Thus, the designed array has 8 bands corresponding to the dipole excitations with a given polarization $(p_y, m_x)$. Pairwise degeneracy of the bands at the edge of Brillouin zone {#sec:Degeneracy} ============================================================== In our calculations we observe that each pair of the bulk bands becomes degenerate at the edge of Brillouin zone for $k=\pm K\equiv\pm\pi/(4\,a)$ \[see Fig. 3(a) of the article main text\]. As we show in this section, this property is related to the symmetry of the system under mirror reflection in $Oxy$ plane accompanied by the unit cell shift by half a period. Reflection in $Oxy$ plane preserves the magnitude of electric dipole moments which are polar vectors, and changes the sign of magnetic dipole moments which are axial vectors: $\tilde{m}_{nx}=-m_{nx}$, $\tilde{p}_{ny}=p_{ny}$. Therefore, the components $\phi_n^{(\pm)}$ transform as $\tilde{\phi}_n^{(+)}=-\phi_n^{(-)}$ and $\tilde{\phi}_n^{(-)}=-\phi_n^{(+)}$. Hence, for $k=\pm K$ this symmetry operation transforms the wave function as $\ket{\tilde{\psi}}=S\,\ket{\psi}$ with the transformation matrix $$\label{DegeneracyMatrix} S=\begin{pmatrix} 0 & 0 & -\sigma_x & 0\\ 0 & 0 & 0 & -\sigma_x\\ \sigma_x & 0 & 0 & 0\\ 0 & \sigma_x & 0 & 0 \end{pmatrix}\:,$$ where $\sigma_x$ is $2\times 2$ Pauli matrix. It is straightforward to check that $S\,\hat{H}(K)-\hat{H}(K)\,S=0$, i.e. matrix Eq.  commutes with the Hamiltonian for $k=K$. Hence, any eigenstate of the Bloch Hamiltonian $\ket{\psi_1(K)}$ is degenerate with another eigenstate $S\,\ket{\psi_1(K)}$ corresponding to the same Bloch wave number. At the same time, $S^2=-I$, i.e. double multiplication by $S$ yields the initial eigenstate. Therefore, the symmetry described by the matrix $S$ explains pairwise degeneracy of the Bloch bands at the edge of Brillouin zone. It should be stressed, that the degeneracy of the Bloch bands at the edge of Brillouin zone holds not only in our simplified theoretical model, but also in full-wave simulations, see Fig. 3(b) of the article main text. 
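These statements are easy to verify numerically. The following sketch (our own illustrative check, with $\mu$ set to an arbitrary value) builds the $8\times8$ Bloch Hamiltonian given above, assuming the Bloch phase is measured per unit cell so that the zone edge $\pm K=\pm\pi/(4a)$ corresponds to a phase of $\pi$; it then constructs $S$ and confirms that $S^2=-I$, that $S$ commutes with $\hat H(K)$ at the zone edge, and that the eight bands are pairwise degenerate there.

```python
# Numerical sketch (not from the paper); mu is arbitrary, and k is the Bloch phase per
# unit cell so that the Brillouin-zone edge +-pi/(4a) corresponds to k = pi.
import numpy as np

def bloch_hamiltonian(k, mu):
    """8x8 Bloch Hamiltonian of the staggered-bianisotropy array."""
    e, ec = np.exp(1j * k), np.exp(-1j * k)
    return np.array([
        [ mu,   0,   1,  3,   0,   0,  ec,   3*ec],
        [ 0,  -mu,   3,  1,   0,   0,  3*ec, ec  ],
        [ 1,    3, -mu,  0,   1,   3,  0,    0   ],
        [ 3,    1,   0,  mu,  3,   1,  0,    0   ],
        [ 0,    0,   1,  3, -mu,   0,  1,    3   ],
        [ 0,    0,   3,  1,   0,  mu,  3,    1   ],
        [ e,  3*e,   0,  0,   1,   3,  mu,   0   ],
        [ 3*e,  e,   0,  0,   3,   1,  0,   -mu  ]], dtype=complex)

sx = np.array([[0, 1], [1, 0]])
Z = np.zeros((2, 2))
S = np.block([[Z, Z, -sx, Z],
              [Z, Z,  Z, -sx],
              [sx, Z, Z,  Z],
              [Z, sx, Z,  Z]]).astype(complex)

mu, k_edge = 2.0, np.pi
H = bloch_hamiltonian(k_edge, mu)
print("||S^2 + I||            =", np.linalg.norm(S @ S + np.eye(8)))     # vanishes
print("||[S, H(K)]|| at edge  =", np.linalg.norm(S @ H - H @ S))          # vanishes at k = pi
print("bands at the BZ edge   =", np.round(np.linalg.eigvalsh(H), 6))     # pairwise degenerate
```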
Chiral symmetry {#sec:ChiralSymmetry} =============== To prove chiral symmetry of our system, we construct the operator $$P=\begin{pmatrix} -\sigma_x & 0 & 0 & 0\\ 0 & \sigma_x & 0 & 0\\ 0 & 0 & -\sigma_x & 0\\ 0 & 0 & 0 & \sigma_x \end{pmatrix}\:.$$ It is straightforward to check that this operator anticommutes with the Bloch Hamiltonian, i.e. $P\,\hat{H}(k)+\hat{H}(k)\,P=0$. Therefore, our system possesses chiral symmetry. Next we calculate the eigenvectors of chiral symmetry operator and construct a basis out of them. Performing a unitary transformation $$U=\frac{1}{\sqrt{2}}\, \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1\\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$ we obtain the Hamiltonian written in off-diagonal form $$\hat{H}(k)=\begin{pmatrix} 0 & \hat{Q}(k)\\ Q^\dag(k) & 0 \end{pmatrix} \:,$$ where a single $4\times 4$ block is given by $$\hat{Q}(k)=\begin{pmatrix} -\mu & -2 & 0 & -2\,e^{ik}\\ 4 & \mu & 4 & 0\\ 0 & -2 & \mu & -2\\ 4\,e^{-ik} & 0 & 4 & -\mu \end{pmatrix} \:.$$ The determinant of this block $\text{det}\,\hat{Q}(k)=\mu^4-128\,\cos k+128$ remains real and positive for all $k$. Thus, winding number for our system is zero, which means that there are no zero-energy edge states [@Ryu]. However, this does not mean that there are no topological states at all. In fact, our system provides an example of the situation when winding number is zero, but the topological states are present. Derivation of the effective Hamiltonian {#sec:EffectiveHamiltonian} ======================================= Bloch Hamiltonian has eight bands four of which have energy around $+\mu$, and the remaining four have energy around $-\mu$. In this section, we derive the effective Hamiltonian for the group of the four bands centered near energy $+\mu$. To apply the degenerate perturbation theory [@Bir], we take the limit of strong bianisotropy assuming $\mu\gg 1$. Applying the unitary transformation $$U_1=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ \end{pmatrix}\:,$$ to the Hamiltonian Eq. , we arrive to the result $$\hat{H}_1(k)=\begin{pmatrix} \mu & 3 & 0 & e^{-ik} & 0 & 1 & 0 & 3\,e^{-ik}\\ 3 & \mu & 1 & 0 & 1 & 0 & 3 & 0\\ 0 & 1 & \mu & 3 & 0 & 3 & 0 & 1\\ e^{ik} & 0 & 3 & \mu & 3\,e^{ik} & 0 & 1 & 0\\ 0 & 1 & 0 & 3\,e^{-ik} & -\mu & 3 & 0 & e^{-ik}\\ 1 & 0 & 3 & 0 & 3 & -\mu & 1 & 0\\ 0 & 3 & 0 & 1 & 0 & 1 & -\mu & 3\\ 3\,e^{ik} & 0 & 1 & 0 & e^{ik} & 0 & 3 & -\mu \end{pmatrix} \:.$$ We consider the terms proportional to $\mu$ as the leading-order ones, while the rest of the terms are considered as perturbation. At this point, we apply the degenerate perturbation theory [@Bir]: $$H_{mm'}^{\rm{eff}} = H_{mm'} + \frac{1}{2} \sum_{s} \left[ \frac{1}{E_m^{(0)}-E_s^{(0)}} + \frac{1}{E_{m'}^{(0)}-E_s^{(0)}} \right] H'_{ms} H'_{sm'},$$ where $m,m'=1\dots 4$ and $s=5\dots 8$. 
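As a cross-check of the chiral-symmetry algebra, the short numerical sketch below (again our own illustration, reusing `bloch_hamiltonian` from the previous snippet and an arbitrary $\mu$) verifies that $P$ anticommutes with $\hat H(k)$ for every $k$ and that the determinant of the off-diagonal block reproduces the closed form $\det\hat Q(k)=\mu^4-128\cos k+128$ quoted above.

```python
# Assumes bloch_hamiltonian() from the previous snippet is in scope; mu is arbitrary.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
P = np.kron(np.diag([-1.0, 1.0, -1.0, 1.0]), sx)      # block-diagonal (-sx, sx, -sx, sx)

def Q_block(k, mu):
    """Off-diagonal 4x4 block of the chirally rotated Hamiltonian."""
    e, ec = np.exp(1j * k), np.exp(-1j * k)
    return np.array([[-mu,  -2,  0, -2*e],
                     [  4,  mu,  4,  0  ],
                     [  0,  -2, mu, -2  ],
                     [4*ec,  0,  4, -mu ]], dtype=complex)

mu = 1.3
for k in np.linspace(-np.pi, np.pi, 7):
    H = bloch_hamiltonian(k, mu)
    anti = np.linalg.norm(P @ H + H @ P)               # should vanish for all k
    detQ = np.linalg.det(Q_block(k, mu)).real
    closed = mu**4 - 128 * np.cos(k) + 128             # closed form quoted in the text
    print(f"k={k:+.3f}  ||PH+HP||={anti:.2e}  det Q={detQ:+10.3f}  formula={closed:+10.3f}")
```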
As a result, we derive the following effective Hamiltonian: $$\label{EffectiveHamiltonian} \hat{H}^{\rm{eff}}=\left(\mu+\frac{5}{\mu}\right)\hat{I}+\begin{pmatrix} 0 & 3 & t'\,(1+e^{-ik}) & e^{-ik}\\ 3 & 0 & 1 & t'\,(1+e^{-ik})\\ t'\,(1+e^{ik}) & 1 & 0 & 3\\ e^{ik} & t'\,(1+e^{ik}) & 3 & 0\\ \end{pmatrix}\:.$$ This Hamiltonian describes modified SSH model with the eigenfrequency of all sites equal to $\mu+5/\mu$, nearest neighbor hoppings equal to $1$ and $3$, and next-nearest-neighbor coupling $t'=3/(2\,\mu)$ depicted in Fig. \[fig:EffHamiltonian\](a). Thus, finite bianisotropy induces effective next-nearest-neighbor hopping, and as a result of that chiral symmetry of the effective Hamiltonian is broken. The energies of the bands in descending order read: $$\begin{gathered} {\varepsilon}_1=\mu+\frac{5}{\mu}+\frac{3}{\mu}\,\cos\,\frac{k}{2}+\sqrt{10+6\,\cos\,\frac{k}{2}}\:,\\ {\varepsilon}_2=\mu+\frac{5}{\mu}-\frac{3}{\mu}\,\cos\,\frac{k}{2}+\sqrt{10-6\,\cos\,\frac{k}{2}}\:,\\ {\varepsilon}_3=\mu+\frac{5}{\mu}+\frac{3}{\mu}\,\cos\,\frac{k}{2}-\sqrt{10+6\,\cos\,\frac{k}{2}}\:,\\ {\varepsilon}_4=\mu+\frac{5}{\mu}-\frac{3}{\mu}\,\cos\,\frac{k}{2}-\sqrt{10-6\,\cos\,\frac{k}{2}}\:.\end{gathered}$$ Comparing these results with the predictions of the full model Eq. , we observe quite good agreement in the region of $\mu\geq 7$ \[Fig. \[fig:EffHamiltonian\](b)\]. Still, even at such $\mu$ breaking of chiral symmetry of the effective Hamiltonian is clearly observable. ![(a) One-dimensional tight-binding model corresponding to the Bloch Hamiltonian Eq. . Effective next-nearest-neighbor coupling $t'=3/(2\,\mu)$ arises. (b) Calculated dispersion for the group of the four upper bands for $\mu=7$. Red solid and blue dashed lines correspond to the eigenstates of full Bloch Hamiltonian Eq.  and effective Hamiltonian Eq. , respectively. (c) Spectral position of higher-energy edge state versus dimensionless bianisotropy parameter $\mu$. Red solid and blue dashed lines correspond to the predictions of the full model \[Eq. (9) of the article main text\] and result of the first-order perturbation theory, respectively.[]{data-label="fig:EffHamiltonian"}](FigS1.pdf){width="0.7\linewidth"} To discuss the topological properties of the arising edge states, we present the derived Hamiltonian Eq.  in extended band representation, keeping just two sites in the unit cell, since the intrinsic periodicity of the effective model Fig. \[fig:EffHamiltonian\](a) is just $2\,a$. This yields $2\times 2$ effective Bloch Hamiltonian $$\label{EffHam2} \hat{H}^{\rm{ext}}=\left(\mu+\frac{5}{\mu}+\frac{3}{\mu}\,\cos k\right)\hat{I}+ \begin{pmatrix} 0 & 1+3\,e^{-ik}\\ 1+3\,e^{ik} & 0 \end{pmatrix}\:.$$ Note that the only difference from the canonical SSH model is in the nonzero diagonal entries which provide just the $k$-dependent shift of energy bands but do not modify the structure of the eigenstates. Hence, the Zak phase is still quantized and equal to $\pi$ once the unit cell with the weak link inside is chosen. Next we derive the asymptotics for the topological edge state arising in our model. In the case of a semi-infinite array the Hamiltonian of the effective model reads: $$\hat{H}=\begin{pmatrix} 0 & 1 & t' & 0 & 0 & \dots\\ 1 & 0 & 3 & t' & 0 & \dots\\ t' & 3 & 0 & 1 & t' & \dots\\ 0 & t' & 1 & 0 & 3 & \dots\\ 0 & 0 & t' & 3 & 0 & \dots\\ & & \dots & \ldots \end{pmatrix}$$ We treat the part proportional to $t'$ as a perturbation $\hat{V}$ and apply the first-order perturbation theory. 
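A quick numerical comparison in the spirit of Fig. \[fig:EffHamiltonian\](b) can be made as follows (an illustrative sketch that reuses `bloch_hamiltonian` from above and assumes the same per-unit-cell convention for $k$). It diagonalizes the truncated $4\times4$ effective Hamiltonian with $t'=3/(2\mu)$, checks it against the closed-form band energies ${\varepsilon}_{1\dots4}$, and compares the result with the four upper bands of the full $8\times8$ model at $\mu=7$.

```python
# Assumes bloch_hamiltonian() from the earlier snippet is in scope.
import numpy as np

def H_eff(k, mu):
    """Truncated 4x4 effective Hamiltonian for the four upper bands, with t' = 3/(2 mu)."""
    tp, e, ec = 3.0 / (2.0 * mu), np.exp(1j * k), np.exp(-1j * k)
    M = np.array([[0, 3, tp * (1 + ec), ec],
                  [3, 0, 1, tp * (1 + ec)],
                  [tp * (1 + e), 1, 0, 3],
                  [e, tp * (1 + e), 3, 0]], dtype=complex)
    return (mu + 5.0 / mu) * np.eye(4) + M

def eps_closed_form(k, mu):
    c, s = np.cos(k / 2), mu + 5.0 / mu
    return np.sort([s + 3/mu*c + np.sqrt(10 + 6*c), s - 3/mu*c + np.sqrt(10 - 6*c),
                    s + 3/mu*c - np.sqrt(10 + 6*c), s - 3/mu*c - np.sqrt(10 - 6*c)])

mu, err_eff, err_full = 7.0, 0.0, 0.0
for k in np.linspace(-np.pi, np.pi, 41):
    eff = np.sort(np.linalg.eigvalsh(H_eff(k, mu)))
    full_upper = np.sort(np.linalg.eigvalsh(bloch_hamiltonian(k, mu)))[4:]   # four upper bands
    err_eff = max(err_eff, np.max(np.abs(eff - eps_closed_form(k, mu))))
    err_full = max(err_full, np.max(np.abs(eff - full_upper)))
print("max |numerics - closed form| for the effective model:", err_eff)   # ~ machine precision
print("max |effective - full 8x8| at mu = 7:", err_full)                  # small for large mu
```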
The unperturbed wave function describing the edge state in the SSH model is given by $$\ket{\psi_0}=\frac{\sqrt{8}}{3}\,\left(1,0,-\frac{1}{3},0,\frac{1}{9},0,-\frac{1}{27},0,\dots\right)^T\:,$$ while the first-order approximation for the energy of this state is $${\varepsilon}_{\rm{edge}}=\mu+\frac{5}{\mu}+\left\langle\psi_0\left|\hat{V}\right|\psi_0\right\rangle\:.$$ Straightforward calculation yields $${\varepsilon}_{\rm{edge}}=\mu+\frac{4}{\mu}\:.$$ The derived asymptotics agrees very well with the result of the full model, as shown in Fig. \[fig:EffHamiltonian\](c), starting from $\mu\geq 2$. Hence, the perturbative treatment developed here highlights, on the one hand, important features distinguishing our system from the SSH model and, on the other hand, proves the topological origin of the observed edge state. Multipolar expansion of the scattered fields for a single disk {#sec:Multipole} ============================================================== ![Scattering spectrum (a) for the original non-perturbed disk with diameter and height $D_0=27.5$ mm, $H_0=11.0$ mm, respectively; (b,c) for the bianisotropic ceramic disk with diameter and height $D = 29.1$ mm, $H = 11.607$ mm. Diameter and depth of the hole are $d = D/2=14.55$ mm and $h = H/4$. Top (b) and bottom (c) illuminations are considered. Permittivity of ceramics in both cases is equal to ${\varepsilon}=39$. Contributions of electric and magnetic in-plane dipole moments are shown by red dashed and blue dot-dashed lines, respectively. Frequency splitting between the peaks caused by bianisotropy is around 300 MHz.[]{data-label="fig:Multipole"}](FigS2.pdf){width="0.45\linewidth"} To assess the validity of the discrete dipole model in our case, we simulate scattering of a plane wave on a single disk. We start from an unperturbed (i.e., symmetric) disk with the aspect ratio chosen in such a way that electric and magnetic dipole resonances for in-plane dipoles overlap \[Fig. \[fig:Multipole\](a)\]. As expected, the scattering spectrum features a single peak. A multipolar expansion [@Corbaton] indicates that electric and magnetic dipole resonances coincide, while the magnitude of electric and magnetic polarizabilities is different. Breaking mirror symmetry of the disk, we introduce bianisotropy, which results in splitting of the central scattering peak into two side peaks. Each of them corresponds to a hybrid mode of the disk involving, in the general case, both electric and magnetic dipole moments. While the overall scattering cross section of the disk is the same for top and bottom illumination directions \[Fig. \[fig:Multipole\](b,c)\], the multipolar composition of the scattered fields does depend on illumination direction [@Alaee-Rockstuhl]. In both cases, however, the electric dipole contribution to the higher-frequency peak has a strongly asymmetric profile.
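Returning to the edge-state asymptotics derived above, the result ${\varepsilon}_{\rm{edge}}=\mu+4/\mu$ is easy to test on a finite chain. The sketch below (our own illustrative setup, with an assumed chain length of 60 sites) diagonalizes the effective tight-binding model of Fig. \[fig:EffHamiltonian\](a), with on-site energy $\mu+5/\mu$, alternating hoppings 1 (weak, at the edge) and 3, and next-nearest-neighbour hopping $t'=3/(2\mu)$; the in-gap level closest to $\mu+4/\mu$ should approach that value as $\mu$ grows.

```python
# Illustrative finite-size check; N = 60 and the listed mu values are arbitrary choices.
import numpy as np

def effective_chain(mu, N=60):
    """Finite chain of the effective model: on-site mu + 5/mu, hoppings 1,3,1,3,...
    (weak link first), and next-nearest-neighbour hopping t' = 3/(2 mu)."""
    tp = 3.0 / (2.0 * mu)
    H = np.diag(np.full(N, mu + 5.0 / mu))
    for n in range(N - 1):
        H[n, n + 1] = H[n + 1, n] = 1.0 if n % 2 == 0 else 3.0
    for n in range(N - 2):
        H[n, n + 2] = H[n + 2, n] = tp
    return H

for mu in (3.0, 5.0, 9.0):
    evals = np.linalg.eigvalsh(effective_chain(mu))
    target = mu + 4.0 / mu
    edge = evals[np.argmin(np.abs(evals - target))]     # in-gap level nearest the prediction
    print(f"mu={mu}: chain edge level = {edge:.4f},  mu + 4/mu = {target:.4f}")
# The agreement with the first-order asymptotics improves as mu increases.
```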
--- abstract: 'Motivated by the gravity/fluid correspondence, we introduce a new method for characterizing nonlinear gravitational interactions. Namely we map the nonlinear perturbative form of the Einstein equation to the equations of motion of a collection of nonlinearly-coupled harmonic oscillators. These oscillators correspond to the quasinormal or normal modes of the background spacetime. We demonstrate the mechanics and the utility of this formalism within the context of perturbed asymptotically anti-de Sitter black brane spacetimes. We confirm in this case that the boundary fluid dynamics are equivalent to those of the hydrodynamic quasinormal modes of the bulk spacetime. We expect this formalism to remain valid in more general spacetimes, including those without a fluid dual. In other words, although borne out of the gravity/fluid correspondence, the formalism is fully independent and it has a much wider range of applicability. In particular, as this formalism inspires an especially transparent physical intuition, we expect its introduction to simplify the often highly technical analytical exploration of nonlinear gravitational dynamics.' author: - Huan Yang - Fan Zhang - 'Stephen R. Green' - Luis Lehner bibliography: - 'References.bib' title: Coupled Oscillator Model for Nonlinear Gravitational Perturbations --- Introduction ============ Can spacetimes become turbulent? Direct numerical simulations of large asymptotically anti–de Sitter (AdS) black holes [@Adams:2013vsa] and their holographically dual fluids [@Carrasco:2012nf; @Green:2013zba] have provided convincing evidence that this is the case. This phenomenon, perhaps counterintuitive at first glance,[^1] can be understood through the gravity/fluid correspondence [@Baier:2007ix; @Bhattacharyya:2008jc; @VanRaamsdonk:2008fp]. This correspondence links the behavior of long-wavelength perturbations of black holes in AdS to viscous relativistic hydrodynamics, and its regime of applicability can include cases of high Reynolds number on the fluid side. Spacetime turbulence then follows from turbulence in the dual fluid [@VanRaamsdonk:2008fp; @Carrasco:2012nf]. On the gravity side, a high Reynolds number corresponds to dissipation of gravitational perturbations that is weak when compared with nonlinear interactions. It is therefore not surprising that it arises in the vicinity of asymptotically AdS black holes, which can have relatively long lived quasinormal modes. The observation of gravitational turbulence in AdS motivates a further question: Can one analyze this striking nonlinear behavior directly in general relativity [*without relying on the existence of a holographic dual*]{}? That is, rather than borrowing from the dual hydrodynamic description—and any restricted regime of applicability—can one establish a bona-fide description of turbulence as a perturbative solution of the Einstein equation? Recall that turbulence is a nonlinear phenomenon characterized, in particular, by cascades of energy (and sometimes enstrophy) between wave numbers. It is therefore delicate to fully capture this behavior within ordinary perturbation theory without carrying it out to sufficiently high orders and performing a suitable resummation [@Green:2013zba]. In order to take into account the essential gravitational self-interactions of perturbations that are present in the Einstein equation we will require a more general perturbative framework. 
In this work we introduce a nonlinear coupled-oscillator model to describe the [*interaction*]{} of quasinormal or normal modes of a background spacetime, in particular their mode-mode couplings. This proposal is a natural generalization of our earlier study of nonlinear scalar wave generation around rapidly-spinning asymptotically flat black holes [@Yang:2014tla], where the back-reaction on the driving mode was neglected (we account for it properly in this paper). This previous model illustrated that the onset of turbulence in gravity does not require the spacetime to be asymptotically anti–de Sitter[^2]. In the nonlinear oscillator model presented here, the coupling between modes is accounted for explicitly and in real time as opposed to implicitly through a recursive scheme. Therefore the equations of motion provide solutions that are valid over longer time scales. Within this model, nonlinear gravitational perturbations are described by excitations of modes (quasinormal or normal). For a given background spacetime, the collection of modes is parametrized by a particular set of frequencies, damping rates, and, at the nonlinear level, [*mode coupling coefficients*]{}. Through these parameters, we can quantitatively compare and contrast signatures of nonlinear gravitational perturbations in different backgrounds, in the same way that frequencies and damping rates alone characterize linear perturbations. In this way we can gain a better understanding of nonlinear interactions and associated phenomena (such as turbulence) in general relativity. The route taken when constructing this formalism essentially offers a new perspective on how to deal with nonlinear metric perturbations that is conducive to intuition building. This compares favorably with more traditional methods, where one has to contend with difficult technical details that often mask the underlying physics. To provide a concrete example, we will apply our methods to study nonlinear perturbations of an asymptotically AdS black brane. The gravity/fluid correspondence applies in this case and the resulting coupled-oscillator system may be compared against the dual fluid. We find that our equations are consistent with the relativistic hydrodynamic equations provided by the duality. Although the agreement is expected, our derivation provides an explicit demonstration and a natural physical interpretation of the observed phenomena in terms of quasinormal modes. We emphasize that the derivations in the gravity and fluid sides are independent of each other, and so the treatment for gravitational perturbations does not depend on the existence of a dual fluid and can be applied to more general spacetimes. In the interest of caution, we recall that quasinormal modes do not form a basis for generic metric perturbations (see [@Warnick:2013hba] for a recent discussion). For instance, consider linear perturbations of the (asymptotically flat) Kerr spacetime as an example (see also discussions in Sec. \[sec2\]). The signal sourced by some matter distribution comprises quasinormal modes, the late-time “tail” term, as well as a prompt piece that travels along the light cone. In this sense, our formalism is approximate as we consider only the quasinormal mode contributions. 
However, in many cases, such as the ringdown stage of binary black hole mergers or when considering long wavelength perturbations of an asymptotically AdS black brane, it is sufficient to track only the quasinormal modes, as they are the dominant part of the signal (see, e.g., [@Barranco:2013rua], for a related discussion). In more general scenarios, we can always check the validity of our approximation by estimating the magnitudes of the other contributions. This paper is organized as follows. In Sec. \[sec2\], we introduce the general formalism of the nonlinear coupled-oscillator model, and compare it with traditional methods for handling nonlinear gravitational perturbations. In Sec. \[sec3\], we briefly review the asymptotically AdS black brane spacetime and the gravity/fluid correspondence, and we analyze the boundary fluid in the mode-expansion picture. In Sec. \[sec4\], we apply the general formalism to the specific case of the asymptotically AdS black brane. We conclude in Sec. \[conclusion\]. The gravitational constant $G$ and the speed of light $c$ are both set to one, unless otherwise specified. Appendices are provided to elaborate on certain details. General Formalism {#sec2} ================= In this section, we begin by reviewing the traditional approach to solving the Einstein equation using ordinary perturbation theory and assuming a series expansion in the perturbation amplitude. This method might not lend itself to easily capturing relevant phenomena like turbulence. In the case where the linearized dynamics take the form of independently evolving normal or quasinormal modes (in the absence or presence of dissipation, respectively), we then show how the nonlinear Einstein equation can be represented as a set of coupled oscillator equations, which is analogous to treatments of the Navier-Stokes equation in fluid dynamics, and [*is*]{} indeed capable of cleanly capturing turbulence. For simplicity, we restrict our discussion to vacuum spacetimes, but it is straightforward to generalize the analysis to spacetimes with a cosmological constant. Ordinary perturbation theory ---------------------------- Given any metric $g_{\mu\nu}$, one can split it into the sum of a “background” metric and a “perturbation”, $$\label{eqdefh} g_{\mu\nu} = g^{\rm B}_{\mu\nu}+h_{\mu\nu}\,.$$ Without invoking any approximation, the vacuum Einstein equation may then be written as $$\label{eqpertein} R_{\mu\nu}(g^{\rm B}) + R^{(1)}_{\mu\nu}(g^{\rm B},h) + R^{(2)}_{\mu\nu}(g^{\rm B},h) +\sum_{n=3}^\infty R^{(n)}_{\mu\nu}(g^{\rm B},h)=0\,,$$ where $R^{(n)}_{\mu\nu}(g^{\rm B},h)$ denotes the $n$th order Ricci tensor expanded about $g^{\rm B}_{\mu\nu}$. Explicitly, the linearized and second order terms are $$R^{(1)}_{\mu\nu} \equiv \frac{1}{2} (-h_{|\mu\nu} -{h_{\mu\nu|\alpha}}^\alpha+{h_{\alpha \mu|\nu}}^\alpha + {h_{\alpha \nu | \mu}}^\alpha)\,,$$ and $$\begin{aligned} \label{eqr2} R^{(2)}_{\mu\nu} \equiv & \frac{1}{4} \left[ h^{\alpha\beta}{}_{|\nu} h_{\alpha\beta|\mu} +2\left( h_{\nu\alpha|\beta} - h_{\nu \beta|\alpha} \right)h_{\mu}{}^{\alpha|\beta} \right . \nonumber \\ &\left . + \left( h_{\alpha \mu |\nu} + h_{\alpha \nu|\mu} - h_{\mu \nu |\alpha}\right)\left(h_{\beta}{}^{\beta|\alpha} - 2 h^{\alpha \beta}{}_{|\beta} \right) \notag \right. \\ & \left. 
+2h^{\alpha\beta}\left( h_{\alpha \beta|\mu|\nu} + h_{\mu \nu|\alpha |\beta} - h_{\alpha\mu | \nu|\beta} - h_{\alpha\nu | \mu|\beta} \right) \right]\,.\end{aligned}$$ In these expressions, covariant derivatives associated to the background metric $g^{\rm B}_{\mu\nu}$ are denoted by vertical lines. The background metric is also used to raise and lower indices. As described in [@Wald], ordinary perturbation theory assumes the existence of a one-parameter family of solutions $g_{\mu\nu}(\epsilon)$, where $g_{\mu\nu}(0)=g_{\mu\nu}^{\text B}$, and $h_{\mu\nu}$ depends differentiably on $\epsilon$. One can then Taylor expand the perturbation, $$\label{eq:hexpansion} h_{\mu\nu} = \epsilon h^{(1)}_{\mu\nu}+\epsilon^2 h^{(2)}_{\mu\nu} + \cdots.$$ Perturbative equations of motion of order $n$ follow by differentiating the Einstein equation  $n$ times with respect to $\epsilon$, and then setting $\epsilon=0$. At zeroth order we have simply $$R_{\mu\nu}(g^{\text B}) = 0\,,$$ so that $g^{\text B}_{\mu\nu}$ is a vacuum solution itself. At first order in $\epsilon$ we have the linearized Einstein equation, $$R^{(1)}_{\mu\nu}(g^{\text B},h^{(1)}) =0\,.$$ It is generally much easier to solve this equation (after making appropriate gauge choices and imposing boundary and initial conditions) than it is to solve the full Einstein equation. Then for sufficiently small $\epsilon$, $g_{\mu\nu}^{\text B} + \epsilon h_{\mu\nu}^{(1)}$ should be a good approximation to $g_{\mu\nu}(\epsilon)$. This procedure may be continued to higher orders. For instance, at second order, we obtain $$R^{(1)}_{\mu\nu} (g^{\text B},h^{(2)}) =- R^{(2)}_{\mu\nu}(g^{\text B},h^{(1)})\,.$$ The second order perturbation is seen to evolve in the background spacetime $g^{\text{B}}_{\mu\nu}$, and it is sourced by the first order solution $h^{(1)}_{\mu\nu}$. Generically, this approach reduces the nonlinear problem to a series of linear inhomogeneous problems of the form $$\label{eq:PTschem} R^{(1)}_{\mu\nu}(g^{\text B},h^{(n)}) = S_{\mu\nu}^{(n)}(g^{\text B};h^{(1)},\ldots,h^{(n-1)})\,.$$ Thus, at each order, one solves a linear partial differential equation with a source, subject to appropriate boundary conditions and gauge choices. The left hand side of the equation at order $n$ consists always of the $n$th order perturbation $h^{(n)}_{\mu\nu}$ evolving linearly in the background spacetime $g^{\text{B}}_{\mu\nu}$. The source term $S_{\mu\nu}^{(n)}$ involves only already-solved lower order pieces $h_{\mu\nu}^{(m)}$ for $m<n$, so a higher order perturbation does not backreact on one of lower order. Moreover, since the $n$th order perturbation evolves in the zeroth order background metric—not the $(n-1)$th order metric—the efficient capture of parametric resonance type effects is precluded [@Green:2013zba; @Yang:2014tla]. (Of course, with enough intuition, it may be possible to identify this behavior through a suitable resummation of perturbations of sufficiently high order.) In following this program, the calculations are quite involved and the gauge choices at different orders are often subtle (see e.g., [@Bruni:1996im; @Gleiser:1998rw; @Ioka:2007ak; @Brizuela:2009qd]). In the specific context of extreme mass ratio binaries, recent examples of this program are given in [@Pound:2012nt; @Gralla:2012db]. Larger perturbations {#sec:larger} -------------------- After iterating the above procedure to any given order, the resulting perturbative metric should be a good approximation to $g_{ab}(\epsilon)$ [*for sufficiently small*]{} $\epsilon$. 
However, in certain situations one may be interested in studying systems with [*larger*]{} (but still small) values of $\epsilon$, where the Taylor expansion  either fails to converge or would require a large number of terms to obtain a good solution. Typically the perturbative solution would be valid for a short time, but for long times secular terms might dominate. Therefore, a more suitable scheme would be required. In, for example, the context of the Navier-Stokes equation, ordinary perturbation theory might be capable of capturing the initial onset of turbulence, but it would be ineffective in capturing fully developed turbulence (and likewise for gravitational turbulence [@Green:2013zba; @Yang:2014tla]). In order to characterize the nonlinear dynamics in general relativity in a more efficient and transparent manner, we present here an alternative way of obtaining approximate solutions that is better suited for exploring certain nonlinear phenomena such as wave interactions and turbulence. We assume as before that $g_{\mu\nu}^{\text B}$ satisfies the vacuum Einstein equation. But then, rather than Taylor expanding $h_{\mu\nu}$ as in , we consider the full metric perturbation $h_{\mu\nu}$, and we attempt to solve directly a truncated version of . In fact truncation at second order, $$\label{eq:htruncated} R^{(1)}_{\mu\nu}(g^{\text B},h) + R^{(2)}_{\mu\nu}(g^{\text B},h) = 0\,,$$ captures the essential nonlinearities of interest to us here. We note that our formalism could straightforwardly be extended to higher orders, but for simplicity we restrict to second order nonlinearities here. To summarize, instead of solving a tower of inhomogeneous linear equations we solve a [*nonlinear equation*]{}, but we neglect the higher order nonlinearities. Instead of dealing with gauge issues at each order, we have only to impose the gauge condition once on $h_{\mu\nu}$. Of course, the truncation of the Ricci tensor is not a tensor itself so the equation (\[eq:htruncated\]) is not gauge invariant. But it should be sufficient to the order we are working ($O(h^2)$). As we shall see, this approach readily captures the nonlinear mode coupling effects of interest to us. In general it will be very difficult to solve , even neglecting the higher order nonlinearities as we have done. However, as we describe in the following subsection, in cases where the linear dynamics is dominated by the evolution of normal or quasinormal modes,  reduces to a system of nonlinearly coupled (and possibly damped) oscillators. Expansion into modes {#sec:ModeExp} -------------------- We now restrict consideration to background spacetimes whose [*linear*]{} perturbations are characterized (for some region of spacetime) by a set of modes (normal or quasinormal). In this case the first order metric perturbation may be written $$\label{eqhdec-lin} h_{\mu\nu}^{(1)} (t, {\bf x}) \sim \sum_j [ q^-_j(t) \mathcal{Z}^{(j-)}_{\mu\nu}({\bf x}) +q^+_j(t) \mathcal{Z}^{(j+)}_{\mu\nu}({\bf x})]\,,$$ with $$\label{eqqab-const} q^-_j(t) = A_j e^{-i\omega_j t},\quad q^+_j(t)= B_j e^{i \omega^*_j t}\,.$$ The background spacetime is assumed to be stationary and the $t$ coordinate is the associated Killing parameter. Modes always occur in pairs with frequencies $\omega_j$ and $-\omega_j^\ast$, so we have organized the summation above along these lines, labeling each pair with a multi-index $j$ (denoting both the transverse harmonic and radial overtone). The associated spatial wave functions are denoted $\mathcal{Z}^{(j\pm)}_{\mu\nu}({\bf x})$. 
Finally, $q^\pm_j$ and $\{A_j,B_j\}$ are the displacements and the amplitudes for modes $j\pm$, respectively. As $h_{\mu\nu}$ must be real at all time, we expect that $\{A_j,B_j\}$ (as well as $ \{\mathcal{Z}^{(j-)}, \mathcal{Z}^{(j+)}\}$) are conjugate to each other. The reason we organize our modes into pairs in  is to emphasize that [*all*]{} modes must be included in the nonlinear analysis; many linear analyses use symmetry arguments to only treat modes with $\Re(\omega)>0$ [@Berti2009]. In the case of [*normal*]{} modes, the mode functions $\mathcal{Z}^{(j\pm)}_{\mu\nu}$ are degenerate and $\omega_j\in\mathbb{R}$, so we take $q_j = q^-_j+q^+_j = A_je^{-i\omega_jt}+B_je^{i\omega_jt}$. For [ *quasinormal*]{} modes, the radial dependence of $\mathcal{Z}^{(j\pm)}_{\mu\nu}$, along with the dissipative boundary conditions at the horizon and/or infinity, fixes the time dependence of the mode uniquely. Any “degenerate” mode in this case must therefore have $\omega_j=-\omega_j^\ast$, so the frequency is purely imaginary, and the multi-index $j$ describes just a single mode. We analyze these cases separately from the non-degenerate case in the following sections. Frequencies of quasinormal modes have nonzero positive imaginary part, which implies an exponential time decay as a result of energy dissipation. In addition, this complex frequency means that the mode functions generally blow up at spatial infinity and the horizon bifurcation surface. However, as physical observers effectively lie near null infinity, the quasinormal-mode signals they observe are finite and the modes are indeed physical perturbations of the spacetime. For such observers, the sum in  can become a good approximation over finite time intervals, although we remind the reader that quasinormal modes do not form a complete basis for generic metric perturbations[^3]. Additional contributions to the metric can arise at late times from waves being scattered by the background potential at large distances (the “tail” term), or at early times from a prompt signal (on the light cone) from the source (see, e.g., [@Leaver1986; @Kokkotas1999; @Berti2009; @Casals:2013mpa]); we collect these into the “residual part”. In this paper our focus is on mode-mode interactions and the associated coupling coefficients. We will therefore not consider the nonlinear interactions between the modes and the tail and prompt components of the metric perturbation. We caution, however, that such couplings need not always be small. While they are small for perturbations of AdS black branes in the hydrodynamic limit (which we analyze below), readers should keep in mind that they will lead to additional contributions to, e.g., Eq.  below. Furthermore, questions as to how quasinormal modes are excited by moving matter, or how to compute the excitation factors for these modes based on some arbitrary initial data are also beyond the scope of this work (see [@Leaver1986; @Hadar09; @ZhangZhongYang2013] and Appendix \[sec:schwarz\]). With these observations in mind, following the discussion in Sec. 
\[sec:larger\] we write the [*full*]{} metric perturbation as $$\begin{aligned} \label{eqhdec} h_{\mu\nu} (t, {\bf x}) &=& \sum_j [ q^-_j(t) \mathcal{Z}^{(j-)}_{\mu\nu}({\bf x}) + q^+_j(t) \mathcal{Z}^{(j+)}_{\mu\nu}({\bf x}) ]\nonumber \\ &&\quad+\text{ residual part,}\end{aligned}$$ but now generalizing the coefficients $A_j$ and $B_j$ to be functions of time, $$\label{eqqab} q^-_j(t) = A_j(t) e^{-i\omega_j t},\quad q^+_j(t)= B_j(t) e^{i \omega^*_j t}\,.$$ Our task is to determine the nonlinear evolution of quasinormal modes; in other words, to evaluate the time dependence of $q^{\pm}_j$. Addressing this task is generally nontrivial as it requires the proper separation of the quasinormal modes from the residual part of the full metric perturbation. For Schwarzschild and Kerr spacetimes this is achievable by invoking the Green’s function technique (Appendix \[sec:schwarz\]), whereas the generalization of this approach to generic spacetimes remains an open problem. To present the coupled-oscillator model, we apply an alternative strategy of plugging  into the truncated Einstein equation  and projecting out the spatial dependencies, thereby obtaining mode evolution equations. This method is most accurate for dealing with normal-mode evolutions and cases where the residual parts are negligible (for example, see Sec. \[sec4\]). In more general scenarios, we shall make several additional approximations (such as neglecting certain time derivatives, neglecting the residual part) to single out the ordinary differential equations for $q^{\pm}_j$. We also caution that since the set of modes generally does not form a complete basis, the resulting $h_{\mu\nu}$ is still only an approximate solution to the truncated Einstein equation. For simplicity, hereafter we shall not explicitly write down the residual part in the equations. Upon substitution, the truncated Einstein equation  takes the form $$\begin{aligned} \label{eq:pluggedin} &&\sum_j\sum_{s=\pm}\left[ \rho_j^s({\bf x}) \ddot{q}_j^s+\tau_j^s({\bf x}) \dot{q}_j^s + \sigma_j^s({\bf x}) q_j^s\right] \nonumber\\ &=&O\left(q_k^{s'}q_l^{s''},q_k^{s'}\dot{q}_l^{s''},\dot{q}_k^{s'}\dot{q}_l^{s''},q_k^{s'}\ddot{q}_l^{s''}\right)\,.\end{aligned}$$ Here $\rho_j^s$, $\tau_j^s$, and $\sigma_j^s$ are tensor functions of the spatial coordinates, and they depend on the background metric as well as the corresponding wave function of the quasinormal mode. The right hand side of the equation has a complicated $\bf x$-dependence that we have suppressed. We would now like to project Eq.  onto individual modes to obtain equations for a set of nonlinearly coupled oscillators in the form of $$\begin{aligned} \label{eq:projectedEinstein} &&a_j^s \ddot{q}_j^s+b_j^s \dot{q}_j^s + c_j^s q_j^s \nonumber\\ &=&\hat{S}_j^s\left(q_k^{s'}q_l^{s^{\prime\prime}},q_k^{s'}\dot{q}_l^{s^{\prime\prime}},\dot{q}_k^{s'}\dot{q}_l^{s^{\prime\prime}},q_k^{s'}\ddot{q}_l^{s^{\prime\prime}}\right)\,,\end{aligned}$$ for each $j$ and $s$. In order to do so we require a suitable set of projectors. If, along any of the dimensions transverse to the radial direction, the background metric possesses a suitable isometry group so that this part of the wave function is described by tensor harmonics (Fourier modes, tensor spherical harmonics, etc.) then it is easy to project out this part by using an inner product. The remaining part (generally including the radial direction) is however more problematic.
It is often the case that the equations can be written in the form of a standard eigenvalue problem, $\ddot\Psi=-A\Psi$. For normal modes, one can define an inner product $\langle\chi|\eta\rangle$ with respect to which $A$ is self-adjoint, and the modes are orthogonal. One can then use this inner product to define the projector. For dissipative systems with quasinormal modes, the eigenvalues are complex and $A$ cannot be self-adjoint. Another problem is that often $|\mathcal{Z}_{\mu\nu}^{(j\pm)}|\to\infty$ at the dissipative boundaries of the system. Nevertheless, it is still possible to define a suitable bilinear form, with respect to which $A$ is symmetric [@Leung:1994; @Leung:1998; @Leung:1999rh; @Yang:2014tla; @Zimmerman2014pr; @Yang:2014zva; @Mark2014]. This bilinear form involves an integral of $\chi\eta$ without any complex conjugation so symmetry of $A$ does not imply that the eigenvalues are real. Furthermore it is still necessary to appropriately regulate the integration to eliminate divergences. The bilinear form may be regarded as a “generalized” inner product, and be used as such. In particular, it may then be shown that $\langle\mathcal{Z}^{j\pm}|\mathcal{Z}^{k\pm}\rangle = 0$ for $\omega_j\ne\omega_k$, and this orthogonality leads to a suitable projector. In the general case (such as the coordinate system we use in Sec. \[sec4\]) it is not necessarily possible to re-write the equation as a standard eigenvalue problem. Nevertheless, we can still define a generalized inner product and use it to project the equation onto modes. It may be that the modes are not orthogonal with respect to this inner product, in which case the projection of the left hand side of contains contributions from additional modes beyond the desired projection mode. After performing projections onto all modes, it would then be necessary to diagonalize the system to obtain a set of equations of the form . This is possible by applying procedures described in Sec. \[sec:nondeg\] to remove “unphysical modes" and reduce the order of the differential equations. At this point, it is worth noting that in principle any inner product which leaves this set of equations non-degenerate fits our purpose. However, in order to minimize the error from neglecting the residual part, it is good practice to adopt an inner-product suitable for eigenvalue perturbation analysis (see Sec. \[sec:innerprod\] for a concrete example of such an inner-product). With the equations decoupled as in  with a suitable generalized inner product, we can now substitute in Eq.  for $q_j^\pm$. We obtain, $$\begin{aligned} \label{eq:Addot}a_j^-\ddot{A}_j+\tilde{b}_j^-\dot{A}_j &=& S_j^-\left(A_k,B_l\right),\\ \label{eq:Bddot}a_j^+\ddot{B}_j+\tilde{b}_j^+\dot{B}_j &=& S_j^+\left(A_k,B_l\right),\end{aligned}$$ where $\tilde{b}_j^- \equiv b_j^--2i\omega_ja_j^-$ and $\tilde{b}_j^+ \equiv b_j^++2i\omega_j^\ast a_j^+$. We have used the fact that $e^{-i\omega_j t}$ and $e^{i\omega_j^\ast t}$ are homogeneous solutions to simplify the left hand sides. The “source” terms on the right hand sides are quadratic in $A_k$ and $B_k$. We have dropped quadratic terms involving derivatives of $A_k$ and $B_k$ in $S_j^s$ as we expect them to be smaller than quadratic terms not involving derivatives. Indeed Eqs. – already indicate that time derivatives of the coefficients are of quadratic order in the perturbation amplitudes, so that, e.g., terms on the right hand side of the form $A_k\dot{A}_l$ would be of cubic order. 
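For the normal-mode case just described (a standard eigenvalue problem $\ddot\Psi=-A\Psi$ with a self-adjoint $A$), the projection step is elementary. The toy sketch below (not tied to any specific spacetime; the grid size and initial data are arbitrary) builds a discretized wave operator, computes its normal modes, and uses the inner product under which $A$ is self-adjoint to project arbitrary initial data onto individual modes.

```python
# Toy normal-mode projection (illustrative only): Psi_tt = -A Psi, with A the self-adjoint
# second-difference operator on N interior points of [0, 1] with fixed ends.
import numpy as np

N = 200
dx = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / dx**2

freqs_sq, modes = np.linalg.eigh(A)            # columns of `modes`: orthonormal eigenvectors
x = np.linspace(dx, 1.0 - dx, N)
psi0 = np.exp(-0.5 * ((x - 0.3) / 0.05)**2)    # arbitrary initial data

q = modes.T @ psi0                             # projection <Z_j | psi0> onto each normal mode
recon_err = np.linalg.norm(modes @ q - psi0)
print("lowest three frequencies:", np.sqrt(freqs_sq[:3]))     # ~ pi, 2 pi, 3 pi
print("reconstruction error from the mode sum:", recon_err)   # ~ machine precision
```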
In general, the nonlinear terms will then be of the form $$\begin{aligned} S_j^-&=&\sum_{lk} \left[\kappa^{-(1)}_{jkl} A_k A_l e^{-i (\omega_k+\omega_l-\omega_j) t}+ {\kappa}^{-(2)}_{jkl} A_k B_l e^{-i (\omega_k-\omega^*_l-\omega_j) t}\right. \nonumber \\ &&\quad \left.+ {\kappa}^{-(3)}_{jkl} B_k B_l e^{i (\omega^*_k+\omega^*_l+\omega_j) t}\right],\end{aligned}$$ where the coefficients $\kappa_{jkl}^{-(n)}$ are constants (and similarly for $S_j^+$). We now proceed to separately analyze non-degenerate and degenerate modes. ### Non-degenerate modes {#sec:nondeg} The non-degenerate case applies to quasinormal modes only. We immediately see from examining – that with $S_j^s=0$, $\{A_j,B_j\}=\text{ constants}$ are solutions. This is by design as (\[eqqab-const\]) are solutions to the linearized equations. However, if $a_j^s\ne0$ the left hand sides of – are second order in time, so that there are additional homogeneous solutions, $$A_j \propto e^{-\tilde{b}_j^-t/a_j^-},\quad B_j \propto e^{-\tilde{b}_j^+t/a_j^+},$$ which give rise to $$q_j^-\propto e^{(i\omega_j-b_j^-/a_j^-)t},\quad q_j^+ \propto e^{(-i\omega_j^\ast-b_j^+/a_j^+)t}.$$ These solutions are clearly not quasinormal modes since when combined with the spatial wavefunctions, they do not satisfy the appropriate dissipative boundary conditions. In addition, if we multiply them with the wave function $\mathcal{Z}^{\pm}_j$, the original linearized Einstein equation is not necessarily satisfied (if $a^s_j \neq 0$ and $b^s_j \neq 0$). At the linear level, one can require $A_j, B_j$ to be constants to remove these spurious modes. At the nonlinear level, we need a systematic strategy to eliminate this extra unphysical degree of freedom. Let us first assume that $a_j^s\ne0$. For clarity we only consider the $s=+$ modes, but the analysis carries over directly to $s=-$. We will argue that the second time derivative terms in equations  and should be dropped. To arrive at an intuition for this, first note that we are considering the problem of mode excitation in the presence of sources. In equations  and , the source terms come from nonlinear couplings, but it is more instructive to move beyond this particular specialization and consider generic sources. If a delta-function source $S=\delta^{(4)}(x^\mu-x^\mu_0)$ is introduced to the spacetime, it gives rise to a finite-value discontinuity of the quasinormal mode amplitude at $t=t_0$, after which quasinormal modes evolve freely and $A_j$ remains constant (see the example in Appendix \[sec:schwarz\]). In other words, only $A_j$ jumps at the delta source while $\dot{A}_j$ is unaffected (otherwise it would not remain constant in the ensuing free evolution), so that only $\dot{A}_j$ is needed in a sourced mode evolution equation to account for the influence of that source, while $\ddot{A}_j$ does not in fact contribute to the evolution of the physical modes. Furthermore, dropping $\ddot{A}_j$ also frees us of the unphysical spurious modes, as the evolution equation is now first order in time. We have subsequently $$\label{eq:ndg} \dot{A}_j=\frac{S_j^-}{\tilde{b}_j^-},\qquad\dot{B}_j=\frac{S_j^+}{\tilde{b}_j^+}.$$ Mathematically, this physical intuition is reflected in the fact that when we integrate from $t_{0-}$ to $t_{0+}$ with a delta-function source at $t=t_0$, we realize that the integration of the $\ddot{A}$ term in fact vanishes because $\dot{A}^s_j(t_{0-})$ and $\dot{A}^s_j(t_{0+})$ must both be zero in order to satisfy the free evolution condition when the source vanishes.
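As a minimal illustration of the structure of Eq. (\[eq:ndg\]), the sketch below evolves a toy "daughter" amplitude driven quadratically by a freely decaying "parent" quasinormal mode. All numerical values (frequencies, coupling, and the effective coefficient $\tilde b$) are invented for illustration; only the first-order form $\dot A_j = S_j/\tilde b_j$, with its explicit phase mismatch, is meant to be faithful.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (made-up) parameters: QNM convention omega = omega_R - i/tau, so
# a negative imaginary part means decay.
omega1 = 1.0 - 0.05j      # parent quasinormal frequency
omega2 = 1.7 - 0.08j      # daughter quasinormal frequency
kappa  = 0.3              # mode-mode coupling constant
btil2  = 2.0 - 0.4j       # effective first-order coefficient  \tilde b_2
A1     = 0.1              # constant parent amplitude (free linear evolution)

def rhs(t, y):
    # S_2 = kappa * (A_1 e^{-i omega_1 t})^2 * e^{+i omega_2 t}; same phase structure
    # as the kappa^{(1)} term of the quadratic source above.
    S2 = kappa * (A1 * np.exp(-1j * omega1 * t))**2 * np.exp(1j * omega2 * t)
    dA2 = S2 / btil2
    return [dA2.real, dA2.imag]

sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0], max_step=0.05)
A2 = sol.y[0] + 1j * sol.y[1]
q2 = A2 * np.exp(-1j * omega2 * sol.t)   # reconstructed daughter mode, as in the mode ansatz
print(abs(A2[-1]), abs(q2[-1]))          # slowly varying amplitude vs. decaying mode
```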
We note of course that the solutions of equation  no longer strictly satisfy the original equations or . However, since both sets of equations should be satisfied on physical grounds, $\ddot{A}$ and $\ddot{B}$ terms should be balanced by the residual part of the metric perturbations, which is implicit in the left hand sides of and . The situation with $a_j^s=0$ does not present any of the above difficulties as the oscillator equation (\[eq:Addot\]) or (\[eq:Bddot\]) is already first order in time, so that $$\label{eq:nondegena0} \dot{A}_j=\frac{S_j^-}{b_j^-},\quad\text{or}\quad\dot{B}_j=\frac{S_j^+}{b_j^+}.$$ In fact, this is the case we shall encounter in Sec. \[sec4\] when we perturb about the anti–de Sitter black brane background in ingoing Eddington-Finkelstein coordinates. In that case perturbations are described by a first order in time and second order in space partial differential equation. ### Degenerate modes For a degenerate mode, the two equations in (\[eq:projectedEinstein\]) for $s=\pm$ degenerate to a single equation for $q_j=q_j^-+q_j^+$. Thus the 4 degrees of freedom present for a given $j$ that we saw in the non-degenerate case reduce to 2 degrees of freedom (or 1 if $a_j=0$). In other words, we do not have any unphysical spurious solutions in the degenerate case, but instead two sets of physical solutions with the same spatial wavefunction, which should both be kept. The consequence of this observation is that in the end, the evolution equation for each mode is of first order, and we need not apply the treatment for the $\ddot{A}$ term employed in the non-degenerate case. Consider first the case where $a_j\ne 0$. As noted earlier, this corresponds to a non-dissipative (i.e. normal) mode. An example where this occurs is in perturbations about pure anti–de Sitter spacetime (without any black hole). (The case of coupled scalar field-general relativity perturbations about AdS was analyzed as coupled oscillators within the context of a two timescale expansion in [@Balasubramanian:2014cja].) As discussed before, even for this $a_j\ne 0$ case, the – should reduce to first order, and we show below how this is to be achieved. First note that we have $$\label{eq:qvsAB2} q_j = A_j(t) e^{-i\omega_j t} + B_j(t) e^{i\omega^*_j t}\,,$$ and when we introduced time dependence into $A_j$ and $B_j$, these parameters can in themselves contain $e^{-i\omega_j t}$ and $e^{i\omega^*_j t}$ factors, so their choices in equation are not unique, and we have in effect a freedom that we have to fix. The most obvious optimal choice is to enforce $$\dot{q}_j=-i\omega_jA_je^{-i\omega_jt}+i\omega_j^\ast B_j e^{i\omega_j^\ast t}$$ as a gauge fixing, or equivalently $$\label{eqqdab} \dot{A}_je^{-i\omega_jt}+\dot{B}_je^{i\omega_j^\ast t}=0,$$ which incidentally looks as if we were solving an inhomogeneous equation through a variation of parameters method. The physical intuition behind this constraint is that $A_j$ and $B_j$ change only slowly with time so it is appropriate to regard them as “instantaneous” amplitudes. (However, this does not constitute a restriction on the solution.)
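The gauge choice above is the classical variation-of-parameters construction, and for a normal mode it introduces no approximation. As a sanity check (a sketch using an arbitrary toy nonlinearity $\epsilon q^2$ rather than an actual gravitational source), evolving the single first-order amplitude equation $\dot A = i S e^{i\omega t}/(2\omega)$ reproduces the direct integration of the second-order oscillator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy undamped oscillator  q'' + omega^2 q = eps*q^2  (real omega: "normal mode" case).
omega, eps = 1.0, 0.05
q0, qdot0 = 1.0, 0.0
t_span = (0.0, 60.0)
t_eval = np.linspace(*t_span, 2000)

# (1) Direct integration of the second-order equation.
def rhs_full(t, y):
    q, p = y
    return [p, -omega**2 * q + eps * q**2]
full = solve_ivp(rhs_full, t_span, [q0, qdot0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

# (2) First-order amplitude equation from variation of parameters:
#     q = A e^{-i omega t} + c.c.,   dA/dt = i S e^{i omega t} / (2 omega),   S = eps*q^2.
def rhs_amp(t, y):
    A = y[0] + 1j * y[1]
    q = 2 * np.real(A * np.exp(-1j * omega * t))
    dA = 1j * eps * q**2 * np.exp(1j * omega * t) / (2 * omega)
    return [dA.real, dA.imag]

A0 = (omega * q0 + 1j * qdot0) / (2 * omega)      # amplitude fixed by q(0), qdot(0)
amp = solve_ivp(rhs_amp, t_span, [A0.real, A0.imag], t_eval=t_eval, rtol=1e-10, atol=1e-12)
q_amp = 2 * np.real((amp.y[0] + 1j * amp.y[1]) * np.exp(-1j * omega * amp.t))

print(np.max(np.abs(full.y[0] - q_amp)))          # small: set by the integrator tolerances
```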
We then have $$\begin{aligned} \label{eqab} & A_j =e^{i \omega_j t} \frac{\omega^* q_j+i \dot{q}_j}{\omega+\omega^*}\,,\nonumber \\ & B_j =e^{-i \omega^*_j t} \frac{\omega^* q_j-i \dot{q}_j}{\omega+\omega^*}\,.\end{aligned}$$ So far we have not imposed any equation of motion, and after substituting in equation  and walking through the same procedure as that presented in Appendix \[sec:Oscillators\], we obtain $$\label{eqwant} \dot A_j = \frac{i e^{i \omega_j t}}{ a_j (\omega_j^*+\omega_j)} \hat{S}_j\,, \quad \dot B_j = -\frac{i e^{-i \omega^*_j t}}{ a_j (\omega_j+\omega^*_j)} \hat{S}_j\,.$$ We have thus re-expressed the second order equation (\[eq:projectedEinstein\]) for $q_j$ in terms of first order equations for the amplitudes $A_j$ and $B_j$ . In the case where $a_j=0$, we have $\omega_j=-\omega_j^\ast=- ic_j/b_j$, so $\omega_j$ is purely imaginary and there is a single degree of freedom. There is then no need to distinguish $A_j$ and $B_j$, so we can set $B_j=0$. Equation (\[eq:Addot\]) easily reduces to $$\label{eqwant2} \dot A_j = \frac{S_j}{b_j} =\frac{\hat{S}_j e^{i \omega_j t}}{b_j}\,.$$ Equations , , and are our desired first order equations of motion. They describe a collection of nonlinearly coupled harmonic oscillators. For any suitable background spacetime, perturbations are characterized by the mode spectrum, the mode-mode coupling coefficients and the mode excitation factors. Despite being a simplified model in the small amplitude limit, the formalism we introduced in this section effectively serves as a general platform to quantitatively compare and study the nature of nonlinear gravitational phenomena in different spacetimes. A most attractive feature is that the vast literature on nonlinear coupled oscillators that has been developed in other branches of physics can now be applied directly to the study of gravitational interactions. For example, a precursor to the present procedure led to the discovery of the parametric instability in the wave generation process in near-extremal Kerr spacetimes in Ref. [@Yang:2014tla], which exhibited similar properties to the parametric instability in nonlinear driven oscillators. In general relativity, another example is furnished by the study of perturbed anti–de Sitter spacetimes through a two timescale analysis [@Balasubramanian:2014cja] and its connection to the Fermi-Pasta-Ulam problem [@fpubook; @2005Chaos..15a5104B]. In Sec. \[sec4\] below (with some details relegated to Appendix \[sec:Fluid\]), we provide a concrete example on how to implement the abstract procedure laid out in this section, using the asymptotically AdS spacetime containing a black brane as the background. The study of this particular case also results in a number of interesting physical observations, and so has its own intrinsic value. For example, we shall see that relativistic hydrodynamics admits a similar description to the gravitational equations of motion, thus expanding the gravity/fluid correspondence. Additionally, by connecting it to the fluid side one concludes that the symmetry of $\kappa$ is closely connected to the cascading/inverse-cascading behavior in the turbulent regime. Hence, this duality mapping provides further evidence and insights for the behavior of turbulence in gravity. AdS black brane spacetimes and the gravity/fluid correspondence {#sec3} =============================================================== In advance of our analysis of coupled AdS black brane quasinormal modes in Sec. 
\[sec4\], here we review the gravity/fluid correspondence and study the black brane perturbations from the fluid side. We first present the background uniform AdS black brane solution. We then review the derivative expansion method that leads to boundary fluid equations that describe long wavelength perturbations. Finally, by Fourier transforming the boundary coordinates we re-write the system as a set of coupled oscillators to facilitate comparison with our later gravitational analysis. For a more complete introduction to the gravity/fluid correspondence, interested readers should consult the original references [@Baier:2007ix; @Hubeny:2011hd; @Bhattacharyya:2008jc; @VanRaamsdonk:2008fp]. Background metric ----------------- The metric for the $d+1$ dimensional uniformly boosted AdS black brane is given in ingoing Eddington-Finkelstein coordinates by $$\label{eq:metric0} ds^2_{[0]} = -2 u_\mu dx^\mu dr + r^2 \left( \eta_{\mu\nu} + \frac{1}{(b r)^d} u_\mu u_\nu \right) dx^\mu dx^\nu\,,$$ where $u^\mu$ (with $u^\mu u_\mu = -1$) is some arbitrary constant four velocity, $r$ is the radial coordinate and $x^\mu$ are the boundary coordinates. The Hawking temperature of the black brane is the constant $T =d/(4\pi b)$. This metric satisfies the Einstein equation $$\label{eq:einsteineqns} G_{\mu\nu}+\Lambda g_{\mu\nu}=0\,,$$ with cosmological constant $\Lambda = -d(d-1)/2$. Different choices of $u^\mu$ correspond simply to different Lorentz-boosted boundary frames. In particular, in the case where the spatial velocity vanishes, the above metric simplifies to $$\label{eqingo} ds^2_{[0]} = 2 dv dr -r^2 f(r) dv^2+ r^2\sum^{d-1}_{i=1} (dx_i)^2\,,$$ where $f(r) \equiv 1-1/(b r)^d$ and $v=x^0$ is the ingoing Eddington-Finkelstein coordinate. The horizon is then located at $r=1/b$. If we define the tortoise coordinate $r_*$ as $d r_* = dr\, / (r^2 f(r))$ and $dv =dt +dr_*$, then the metric can be re-written $$d s^2_{[0]} = r^2 \left [-f(r) dt^2+ \sum^{d-1}_{i=1} (dx_i)^2 \right ]+\frac{dr^2}{r^2 f(r)}\,,$$ which is in the same form as Eq. (4.1) of Ref. [@Kovtun:2005ev]. Sometimes it is more convenient to work with a compactified radial coordinate, and normalize the boundary coordinates by the scale of the black brane horizon. With $u \equiv 1/(b r)^2$, $\tilde{t} \equiv t\, 8\pi T/d $, $\tilde{x}^i \equiv x^i\,8 \pi T/d $ and $f(u) = 1-u^{d/2}$, the metric becomes $$\begin{aligned} \label{eqkov} d s^2_{[0]} = &\frac{( 4\pi T /d)^2}{ u} \left [ -f(u) dt^2+ \sum^{d-1}_{i=1} (dx^i)^2\right ]+\frac{1}{4 u^2 f(u)} du^2 \nonumber \\ =& \frac{1}{4 u} \left [ -f(u) d \tilde{t}^2+ \sum^{d-1}_{i=1} (d \tilde{x}^i)^2\right ]+\frac{1}{4 u^2 f(u)} du^2\,.\end{aligned}$$ To derive the gravity/fluid correspondence, we take as our starting point the uniformly boosted black brane . Gravity/fluid correspondence ---------------------------- To each asymptotically AdS bulk solution there is an associated metric and conserved stress-energy tensor on the timelike boundary of the spacetime at $r\to\infty$ (see, e.g., Ref. [@Balasubramanian:1999re]).
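As a consistency check on the background of the previous subsection, one can verify symbolically that the static metric of Eq. (\[eqingo\]) solves the Einstein equation with $\Lambda=-d(d-1)/2$, which for this vacuum-plus-$\Lambda$ case is equivalent to $R_{\mu\nu}=-d\,g_{\mu\nu}$. The sketch below is a brute-force computer-algebra check for $d=4$, $b=1$ (our own verification script, not part of the derivation):

```python
import sympy as sp

d, b = 4, 1                                  # boundary dimension; horizon scale set to one
v, r, x1, x2, x3 = sp.symbols('v r x1 x2 x3', real=True)
coords = [v, r, x1, x2, x3]
f = 1 - 1/(b*r)**d

# Static AdS black brane in ingoing Eddington-Finkelstein coordinates:
# ds^2 = -r^2 f dv^2 + 2 dv dr + r^2 (dx1^2 + dx2^2 + dx3^2)
g = sp.zeros(5)
g[0, 0] = -r**2 * f
g[0, 1] = g[1, 0] = 1
for i in (2, 3, 4):
    g[i, i] = r**2
ginv = g.inv()

def christoffel(g, ginv, coords):
    n = len(coords)
    Gamma = [[[0]*n for _ in range(n)] for _ in range(n)]
    for a in range(n):
        for mu in range(n):
            for nu in range(n):
                expr = 0
                for s in range(n):
                    expr += ginv[a, s]*(sp.diff(g[s, mu], coords[nu])
                                        + sp.diff(g[s, nu], coords[mu])
                                        - sp.diff(g[mu, nu], coords[s]))
                Gamma[a][mu][nu] = sp.simplify(expr/2)
    return Gamma

def ricci(Gamma, coords):
    n = len(coords)
    Ric = sp.zeros(n)
    for mu in range(n):
        for nu in range(n):
            expr = 0
            for a in range(n):
                expr += sp.diff(Gamma[a][mu][nu], coords[a]) - sp.diff(Gamma[a][mu][a], coords[nu])
                for s in range(n):
                    expr += Gamma[a][a][s]*Gamma[s][mu][nu] - Gamma[a][nu][s]*Gamma[s][mu][a]
            Ric[mu, nu] = sp.simplify(expr)
    return Ric

Gamma = christoffel(g, ginv, coords)
Ric = ricci(Gamma, coords)
# Einstein equation with Lambda = -d(d-1)/2 is equivalent to R_{mu nu} + d g_{mu nu} = 0:
print((Ric + d*g).applyfunc(sp.simplify))    # expect the zero matrix
```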
The boundary metric, in the case of is $\eta_{\mu\nu}$, while the boundary stress-energy tensor is $$T_{\mu\nu}^{[0]} = \frac{1}{16\pi G_{d+1} b^d}(du_\mu u_\nu + \eta_{\mu\nu}).$$ This describes a perfect fluid with energy density $\rho$ and pressure $p$ given by $$\begin{aligned} \rho &= \frac{d-1}{16\pi G_{d+1} b^d},\\ p &= \frac{1}{16\pi G_{d+1} b^d}.\end{aligned}$$ The stress-energy tensor is traceless, with equation of state $$p=\frac{\rho}{d-1},$$ as required by conformal invariance. Imposing the first law of thermodynamics, $\mathrm{d}\rho = T \mathrm{d}s$, as well as the relation $\rho+p = sT$, gives the entropy density $s$ and fluid temperature $T$, $$\begin{aligned} s&=AT^{d-1},\\ \rho&=\frac{d-1}{d}AT^d.\end{aligned}$$ Here, $A$ is a constant of integration. This is fixed to $A \equiv (4\pi)^d/(16\pi G_{d+1} d^{d-1})$ by equating $T$ with the Hawking temperature. At this point, the fluid we have described is of constant density, pressure and velocity. To go beyond the uniform fluid, $b$ and $u^\mu$ are promoted to functions of the boundary coordinates $x^\mu$. Importantly, these will be assumed to vary slowly; that is, if $L$ is the typical length scale of variation of these fields, then $L\gg b$. With non-constant boundary fields, the metric no longer describes a solution to the Einstein equation. However, a solution can be obtained by systematically correcting the metric order by order through a [*derivative expansion*]{}, so that the Einstein equation is solved to any desired order in derivatives. One can then compute the boundary stress-energy tensor corresponding to the metric at each order, and take this as defining the boundary fluid. After a rather long, but direct, calculation, the resulting boundary stress-energy tensor (to second order in derivatives) is $$\label{eq:Tmunu2} T_{\mu\nu}^{[0+1+2]} = \frac{\rho}{d-1}\left(d u_\mu u_\nu + \eta_{\mu\nu}\right) + \Pi_{\mu\nu},$$ where the viscous part $\Pi_{\mu\nu}$ is (see, e.g., Eq. (3.11) of Ref. [@Baier:2007ix]) $$\begin{aligned} \label{eq:Pi1} \Pi_{\mu\nu}={}&- 2\eta \sigma_{\mu\nu} \nonumber\\ & + 2\eta\tau_\Pi\left(\langle u^\alpha\partial_\alpha \sigma_{\mu\nu}\rangle + \frac{1}{d-1}\sigma_{\mu\nu}\partial_\alpha u^\alpha\right) \nonumber\\ &+ \langle \lambda_1 \sigma_{\mu\alpha}\sigma_{\nu}^{\phantom{\nu}\alpha} + \lambda_2 \sigma_{\mu\alpha}\omega_{\nu}^{\phantom{\nu}\alpha} + \lambda_3 \omega_{\mu\alpha}\omega_{\nu}^{\phantom{\nu}\alpha}\rangle.\end{aligned}$$ The shear and vorticity tensors are defined as, $$\begin{aligned} \sigma_{\mu\nu} &\equiv \langle\partial_\mu u_\nu\rangle,\\ \omega_{\mu\nu} &\equiv P_\mu^{\phantom{\mu}\alpha}P_\nu^{\phantom{\nu}\beta}\partial_{[\alpha}u_{\beta]}.\end{aligned}$$ We have employed angled brackets to denote the symmetric traceless part of the projection orthogonal to $u^\mu$, $$\langle A_{\mu\nu}\rangle \equiv \left(P_{(\mu}^{\phantom{(\mu}\alpha}P_{\nu)}^{\phantom{\nu)}\beta} - \frac{1}{d-1}P_{\mu\nu}P^{\alpha\beta}\right)A_{\alpha\beta},$$ and defined $P_{\mu\nu}$ to be the spatial projector orthogonal to $u^\mu$, $$P_{\mu\nu} \equiv \eta_{\mu\nu} + u_\mu u_\nu.$$ Notice that $\Pi_{\mu\nu}$ is symmetric and satisfies $$\begin{aligned} \Pi^\mu_{\phantom{\mu}\mu}&=0,\\ u^\nu\Pi_{\mu\nu}&=0.\end{aligned}$$ The transport coefficients $\{\eta,\,\tau_\Pi,\,\lambda_i\}$ for various dimensions can be found in, e.g., [@VanRaamsdonk:2008fp; @Haack:2008cp; @Bhattacharyya:2008mz]. In particular, $\eta=s/(4\pi)$.
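The thermodynamic relations above can be confirmed in a few symbolic lines (a sketch; here $G$ stands for $G_{d+1}$ and $A$ for the integration constant fixed above):

```python
import sympy as sp

d, T, G = sp.symbols('d T G', positive=True)       # boundary dimension, temperature, G_{d+1}
A = (4*sp.pi)**d / (16*sp.pi*G*d**(d - 1))          # integration constant

s   = A*T**(d - 1)                                  # entropy density
rho = (d - 1)/d * A*T**d                            # energy density
p   = rho/(d - 1)                                   # conformal equation of state

print(sp.simplify(sp.diff(rho, T) - T*sp.diff(s, T)))   # first law  d rho = T ds       -> 0
print(sp.simplify(rho + p - s*T))                        # Euler relation rho + p = s T  -> 0
```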
Projection of the Einstein equation along the boundary directions shows that the boundary stress-energy tensor is conserved, giving rise to the fluid equations of motion, $$\begin{aligned} 0&=& u^\nu \partial_\nu \rho+\frac{d}{d-1} \rho \partial_{\nu} u^\nu - u^\mu \partial^\nu \Pi_{\mu\nu}\,, \\ 0&=& \frac{d}{d-1} \rho u^\mu \partial_\mu u^\alpha +\frac{\partial^\alpha \rho}{d-1} -\frac{d}{(d-1)^2} u^\alpha \rho \partial_\mu u^\mu \nonumber \\ &&+\frac{1}{d-1} u^\alpha u^\mu\partial^\nu \Pi_{\mu\nu} +P^{\alpha\mu} \partial^\nu \Pi_{\mu\nu}\,.\end{aligned}$$ The gravity/fluid correspondence thus provides an explicit link between black hole perturbations in the sufficiently [*long wavelength regime*]{}—described by small wave numbers—and relativistic hydrodynamics. Ordinary perturbation theory, by contrast, provides a solution that is valid for sufficiently [*small amplitudes*]{}, but cannot easily capture the transfer of energy between modes. Our coupled-oscillator approach in contrast [*does*]{} capture the leading mode-mode couplings that are manifest in the fluid picture, and it is in that sense valid for larger amplitudes (see Sec. \[sec:larger\]). As illustrated in Fig. \[fig:comparison\], there is an overlapping regime where the predictions of both approaches can be compared. ![An illustration of the hydrodynamical expansion (small wave number) and black-hole perturbation (small amplitude). They both admit effective coupled oscillator descriptions. In AdS black-brane spacetime we compare the results from both sides of the duality, in the shaded region of the plot. For small perturbation amplitude, this comparison has been done in the linearized perturbation theory (as depicted by region “A" and see for example [@Kovtun:2005ev]). For larger perturbation amplitude (region “B"), we are able to expand the comparison to equations of motion *with nonlinear couplings* using the coupled-oscillator model. []{data-label="fig:comparison"}](comparison.png){width="0.90\columnwidth"} Mode expansion of the boundary fluid ------------------------------------ We now proceed to re-write the fluid equations as a set of coupled oscillator equations so that they can be compared with the equations we will derive on the gravity side. We denote the four velocity $u^\mu=(\gamma, {\bf u})$, where $\gamma^2=1+{\bf u} \cdot {\bf u}$, and the density $\rho =\rho_0 e^\xi$. Keeping viscous terms to linear order in $\bf u$ and $\xi$, and inviscid terms to quadratic order (as needed for the comparison), the energy conservation and Euler equations reduce to $$\begin{aligned} \label{eqmexpl} 0&=&\partial_t \xi +{\bf u} \cdot \nabla \xi+\frac{d}{d-1}(\partial_t \gamma+\nabla \cdot {\bf u})\,, \\ \label{eq:eulerapprox}0&=&\partial_t {\bf u} +{\bf u} \cdot \nabla {\bf u}+\frac{1}{d}\nabla \xi -\frac{1}{d-1} (\partial_t \gamma+\nabla \cdot {\bf u}) {\bf u}\nonumber\\ &&-\frac{\eta}{\rho_0}\left(\frac{d-1}{d}\nabla^2{\bf u}+\frac{d-3}{d}\nabla(\nabla\cdot{\bf u})\right)\,.\end{aligned}$$ Furthermore, dropping nonlinear terms, $$\begin{aligned} 0&=& \partial_t \xi^{(1)} +\frac{d}{d-1}\nabla \cdot {\bf u}^{(1)}\,, \label{eq:LinFluidEq1} \\ 0&=& \partial_t {\bf u}^{(1)} +\frac{1}{d}\nabla \xi^{(1)} \nonumber\\ &&-\frac{\eta}{\rho_0}\left(\frac{d-1}{d}\nabla^2{\bf u}^{(1)}+\frac{d-3}{d}\nabla(\nabla\cdot{\bf u}^{(1)})\right) \label{eq:LinFluidEq2} \,.\end{aligned}$$ Linearized solutions are decomposed into two families of modes: sound and shear. 
A sound wave of momentum $\bf k$ takes the form $${\bf u}^{(1)}_b \sim A_b({\bf k}) e^{-i \omega_b t} e^{i {\bf k} \cdot {\bf x}} \hat {\bf k}\,, \quad \xi^{(1)} \sim B_b({\bf k}) e^{-i\omega_b t} e^{i {\bf k} \cdot {\bf x}}\,.$$ By solving the linearized equations and , the dispersion relation is found to be $$\omega_b = \pm \frac{k}{\sqrt{d-1}} - i\frac{d-2}{d}\frac{\eta}{\rho_0}k^2+O(k^3)\,,$$ and $$B_b({\bf k})=\frac{d}{d-1}\frac{k}{\omega_b}A_b({\bf k})\,.$$ For the shear modes, $\xi^{(1)}=0$ and $${\bf u}^{(1)}_s \sim A_s({\bf k}) e^{-i \omega_s t} e^{i {\bf k} \cdot {\bf x}} \hat {\bf u}_s\,,$$ with $\hat {\bf u}_s \cdot {\bf k}=0$. The resulting dispersion relation is $$\label{eq:shearfreqfluid} \omega_s=-i\frac{d-1}{d}\frac{\eta}{\rho_0}k^2+O(k^3)\,,$$ so shear modes are purely decaying. The general solution to the linearized fluid equations is simply a sum over sound and shear modes of different $\bf k$ and shear polarizations $s$. We are now in a position to include the effects of nonlinear coupling terms. To do so, we express $\xi$ and $\bf u$ as sums over linear modes, but we allow for the coefficients $A$ and $B$ to be functions of time. The velocity ansatz then takes the form $$\label{eqmexpan} {\bf u}({\bf x}, t) = \sum_{\bf k} \left[q_b({\bf k}, t) \hat {\bf k}+\sum_s q_s({\bf k}, t) \hat {\bf u}_s\right] e^{i {\bf k} \cdot {\bf x}}\,,$$ where $q_s({\bf k},t) = A_s({\bf k},t) e^{-i \omega_s t}$ and $q_b({\bf k},t) = A_b({\bf k},t) e^{-i \omega_b t}$. The coefficients are of course subject to a reality condition. Inserting this expansion into Eq. , and projecting it onto a particular shear mode, we obtain $$\begin{aligned} \label{eqshear} &&\partial_t A_s({\bf k}, t) \\ &=&i \sum_{{\bf p}+{\bf q}={\bf k},\, s',\,{s''}} [\hat {\bf u}_{s'}({\bf p},t) \cdot {\bf q}][\hat {\bf u}_s ({\bf k},t)\cdot \hat {\bf u}_{s''}({\bf q},t) ] A_{s'}({\bf p},t) A_{s''}({\bf q},t) \nonumber \\ &&+\sum_{{\bf p}+{\bf q}={\bf k},\, s'} (\cdots) A_{s'}({\bf p},t) q_{b}({\bf q},t) +\sum_{{\bf p}+{\bf q}={\bf k}} (\cdots) q_{b}({\bf p},t) q_{b}({\bf q},t) \,,\nonumber\end{aligned}$$ Notice that the left hand side has been reduced to simply the time derivative of $A_s$ because the mode function satisfies the linearized equation of motion. The right hand side describes the nonlinear coupling between modes. The second and third terms (coupling coefficients unspecified) in Eq.  describe the mixing between the sound modes and the shear modes, as well as between two sound modes. The coefficients to these terms contain fast \[$\exp(i\omega t)$ type\] oscillatory time-dependent factors, so their effects tend to average to zero during the longer time scales in which we examine the growth and decay of modes. On the other hand, the first term describes the mixing between two shear modes, and it trivially satisfies the “resonant condition” in the time-domain since $\Re(\omega_s)=0$. This results in significant energy transfer between shear modes (and had we been performing an ordinary perturbative expansion would have resulted in secular growth). It is then natural to expect that the effect of sound modes is sub-dominant in the turbulent process of conformal fluids, where the viscous damping is less important. In fact, if we ignore all the sound modes in the relativistic hydro equation, the resulting Eq. (\[eqshear\]) is the same as the one for incompressible fluid (Appendix \[sec:Fluid\]), and they share the same conservation laws in the Fourier domain. 
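Returning briefly to the linearized modes, the dispersion relations quoted above follow from inserting the plane-wave ansatz into Eqs. (\[eq:LinFluidEq1\])–(\[eq:LinFluidEq2\]); the short symbolic computation below reproduces them (a sketch; $\nu$ is shorthand for $\eta/\rho_0$):

```python
import sympy as sp

d, k, nu = sp.symbols('d k nu', positive=True)      # nu stands for eta/rho_0
omega, A, B = sp.symbols('omega A B')

# Shear channel: transverse u, xi = 0; only the Laplacian piece of the viscous term survives.
shear = -sp.I*omega + nu*(d - 1)/d*k**2
print(sp.solve(shear, omega))        # [-I*(d-1)*nu*k**2/d], i.e. the shear dispersion relation

# Sound channel: u = A k_hat, xi = B, both proportional to exp(-i omega t + i k.x).
cont  = -sp.I*omega*B + sp.I*k*d/(d - 1)*A                    # linearized continuity equation
euler = -sp.I*omega*A + sp.I*k/d*B + 2*nu*(d - 2)/d*k**2*A    # linearized longitudinal Euler eq.
M = sp.Matrix([[cont.coeff(A),  cont.coeff(B)],
               [euler.coeff(A), euler.coeff(B)]])
for sol in sp.solve(M.det(), omega):
    # each root expands as +-k/sqrt(d-1) - I*(d-2)*nu*k**2/d + O(nu**2)
    print(sp.series(sol, nu, 0, 2))
```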
Equation  expresses the fluid as a collection of coupled oscillators, to be compared with  on the gravity side. In the next section we shall apply the general formalism of Sec. \[sec2\] to the AdS black brane spacetime and directly match its mode coupling coefficients (for the fundamental hydro shear quasinormal modes) to the shear-shear mode coupling coefficients in Eq. . One can apply the same procedure to verify the correspondence in the sound channel (which we have not written down). We will only address the shear modes, as the main purpose of this work is to formulate the coupled oscillator model and to illustrate its technical details, rather than to provide a full verification of the gravity/fluid correspondence. We envisage that this framework shall prove its unique value when studying gravitational interactions in spacetime without a clear gravity/fluid correspondence, or in cases where the hydrodynamical (long-wavelength) approximation becomes too restrictive. Linear and nonlinear gravitational perturbations of the AdS$_5$ black-brane {#sec4} =========================================================================== In this section we study gravitational perturbations about an asymptotically AdS black brane within the context of the coupled oscillator model. We adopt this particular example for two reasons: On the one hand, the boundary metric of the background spacetime is flat, which simplifies calculations when performing wave function projections. On the other hand, the gravity/fluid correspondence is well established in this spacetime, and this allows us to compare results obtained in the gravity and dual fluid pictures, as depicted in Fig. \[fig:comparison\]. In particular, we shall focus on the analysis of shear modes at both linear and nonlinear levels. We also fix the spacetime dimension to $d+1=5$, although it is straightforward to generalize the analysis below to other dimensions. For calculations within this section, we make further simplifications by scaling the coordinates such that $b=1$, so the horizon is located at $r=1$. This means that we effectively choose $T=1/\pi$ so \[see above Eq. \] $$\begin{aligned} \label{eq:CoordChoice} x^i = \frac{1}{2} \tilde{x}^i\,, \quad k_i = 2 \tilde{k}_i\,.\end{aligned}$$ Linear perturbation {#sec:gravitylinear} ------------------- Linear quasinormal mode perturbations of AdS black branes have been thoroughly analyzed in [@Kovtun:2005ev]. There, the fundamental (slowly decaying) quasinormal modes of the spacetime were shown to be the same as the hydrodynamical modes of the boundary fluid. The analysis was performed using the coordinate system of Eq. , whereas for our purposes it is more convenient to use the ingoing coordinates of Eq. . As discussed in Appendix \[sec:TwoBasis\], choosing different coordinates leads to different definitions for the modes. At the linear level there exists a clean one-to-one mapping of modes in different bases as each quasinormal mode is a solution to the linear Einstein equation. However, when studying nonlinear perturbations, their projection with respect to a mode-basis associated to a different coordinate system leads to an expansion with a less direct identification. In Appendix \[sec:TwoBasis\] we illustrate this point with a simple example describing a scalar field propagating on Minkowski spacetime. As demonstrated in [@Kovtun:2005ev], linear perturbations of the AdS black brane can be classified into shear, sound and scalar sectors. 
In addition, as the boundary metric is flat, it is straightforward to Fourier transform the metric components along the boundary coordinates. The same logic applies when we adopt ingoing coordinates. Without loss of generality, we consider a mode whose boundary-coordinate dependence is $e^{i k z}$. For shear perturbations, the relevant metric components are then $h_{r\alpha}, h_{v\alpha}, h_{z\alpha}$, where $\alpha=x,y$. Without loss of generality, we choose the polarization $\alpha =x$, and impose the radial gauge condition $h_{rM} =0$, with $M = \{r,v,z,x,y\}$. Defining the auxiliary variables $$\begin{aligned} H_{zx} \equiv h_{zx} \frac{e^{- i k z}}{r^2}, \quad H_{vx} \equiv h_{vx} \frac{e^{-i k z}}{r^2} \,,\end{aligned}$$ the independent components of the linearized Einstein equation take the form $$\begin{aligned} \label{eqshearcompo} 0&=&5 r \frac{\partial H_{vx}}{\partial r}+i k \frac{\partial H_{zx}}{\partial r}+r^2\frac{\partial^2 H_{vx}}{\partial r^2}\,,\\ 0&=&k^2 H_{vx}-5 r^3 f \frac{\partial H_{vx}}{\partial r}-r^4 f \frac{\partial^2 H_{vx}}{\partial r^2}+i k\frac{\partial H_{zx}}{\partial v}-r^2\frac{\partial^2 H_{vx}}{\partial v\partial r}\,.\nonumber\end{aligned}$$ We can further simplify this system by defining the master variable, $\Psi \equiv \partial_r H_{vx}$. This satisfies the master equation, $$\label{eq:Psieq} -k^2 \Psi+(5 r^3 f \Psi)'+(r^4 f \Psi')'+7r \dot \Psi+2r^2 \dot \Psi'=0\,,$$ where in this section we will often denote partial derivatives as $(\cdot)' \equiv \partial_r$ and $\dot{(\cdot)} \equiv \partial_v$. To look for quasinormal modes, we first take advantage of the time translation symmetry of the equation to impose a $e^{-i\omega v}$ time dependence (so $\dot\Psi \to -i\omega \Psi$). Solving the remaining spatial equation with appropriate boundary conditions at the horizon and spatial infinity gives rise to a set of quasinormal modes in the ingoing coordinates, and the frequency spectrum $\omega(k)$. To analyze the horizon boundary, we multiply Eq.  by $f$ and take the horizon limit $r \rightarrow1$. The wave equation becomes $$(\partial^2_{r_*}+2 \partial_v \partial_{r_*})\Psi=0\,,$$ with two independent solutions, $$\partial_{r_*} \Psi=0,\quad {\rm and} \quad ( \partial_{r_*} +2 \partial_v )\Psi=0\,.$$ The ingoing boundary condition for the quasinormal modes selects $$\frac{\partial \Psi}{\partial r_*} \to 0,\quad r \rightarrow 1\,.$$ As $r\to\infty$ we impose a reflecting boundary condition (since the spacetime is asymptotically AdS), so the metric perturbation is required to vanish. This means that we should at least expect $h = O(1/r)$ and $\Psi = O(1/r^4)$. The above discussion applies to all quasinormal modes of our system. However, the dual fluid captures only the longest lived shear and sound modes, which have $\omega\to0$ as $k\to0$ (known as the “hydro” modes). In order to compare our results with the fluid we therefore restrict to $\tilde k\ll1$. We can then construct the eigenfunctions perturbatively in $k$ (and $\omega$). In this expansion, the leading order part of equation is $$(5 r^3 f)' \Psi+5 f r^3 \Psi'+r^4 f \Psi''+(r^4 f)'\Psi'=0\,.$$ After imposing the horizon boundary condition, the solution is $$\Psi_0 = \frac{C(v)}{r^5}\,.$$ where the subscript $0$ indicates that this solves the leading order equation. (Notice that this solution also falls off sufficiently rapidly at spatial infinity.) To look for quasinormal mode solutions we take $C(v)=e^{-i\omega v}$. 
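That $\Psi_0=C(v)/r^5$ indeed annihilates the leading-order radial operator is quickly confirmed symbolically (a minimal check for $d=4$, $b=1$):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = 1 - 1/r**4                               # f(r) for d = 4, b = 1
Psi0 = 1/r**5                                # leading-order radial profile (times C(v))

lead = (sp.diff(5*r**3*f, r)*Psi0 + 5*f*r**3*sp.diff(Psi0, r)
        + r**4*f*sp.diff(Psi0, r, 2) + sp.diff(r**4*f, r)*sp.diff(Psi0, r))
print(sp.simplify(lead))                     # 0: Psi_0 = C(v)/r^5 solves the leading-order equation
```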
The leading order solution $\Psi_0$ then sources the first order correction $\Psi_1$ through $$\begin{aligned} &(5 r^3 f)' \Psi_1+5 f r^3 \Psi_1'+r^4 f \Psi_1''+(r^4 f)'\Psi_1' \nonumber \\ &= -(-k^2 \Psi_0+7r \dot \Psi_0+2r^2 \dot \Psi'_0)\,.\end{aligned}$$ The combined solution $\Psi=\Psi_0+\Psi_1$ is then $$\begin{aligned} \Psi = &\left[\frac{1}{r^5}+\frac{ -(k^2-4 i \omega ) \log (1-r) -(k^2+4 i \omega) \log (1+r) }{16 r^5}\right. \nonumber \\ & \left.+\frac{8 i \omega \arctan r +k^2\log (1+r^2)}{16 r^5}\right]e^{-i\omega v}\,. \end{aligned}$$ In order to satisfy the horizon boundary condition we must impose $k^2 = 4 i \omega$, resulting in $$\Psi = \left[\frac{1}{r^5}+\frac{2 k^2 \arctan r -2k^2\log (1+r) +k^2\log (1+r^2)}{16 r^5}\right]e^{-i\omega v}\,.$$ Using Eq. , we verify that $k^2 = 4 i \omega$ is equivalent to $\tilde \omega = -i \tilde k^2 /2$, which is exactly the dispersion relation of shear hydro quasinormal modes derived in [@Kovtun:2005ev] using a different coordinate system. In addition, it is easy to check that the dispersion relation matches , derived on the fluid side. Knowing $\Psi$, it is straightforward to use Eq.  to reconstruct the metric perturbations. For the shear modes considered here, the metric perturbation is $$\begin{aligned} h_{vx}&=& -\frac{A}{4r^2} e^{-i \omega v+i k z}\left[1+\frac{k^2r^2}{16}(\pi r^2 - 4r +2)\right.\nonumber\\ &&\quad\left.-\frac{k^2}{16}(r^4-1)\left(2\arctan r +\log\frac{1+r^2}{(1+r)^2}\right)\right]\,,\nonumber \\ \\ h_{zx} &=& i \frac{A}{4} k r^2 e^{-i \omega v+i k z}\left[\frac{\pi}{4}-\frac{1}{r}-\frac{\arctan r}{2}\right. \nonumber\\ &&\quad\left. +\frac{1}{4}\log \frac{(1+r^2)(1+r)^2}{r^4}\right]\,.\end{aligned}$$ Mode projection\[sec:innerprod\] -------------------------------- Having carried out the linear analysis, we are almost ready to calculate the shear-shear mode coupling coefficient. There is one more problem to tackle however, which is to project the Einstein equation onto an individual mode to see how a source term affects its evolution. As described in Sec. \[sec:ModeExp\], we adopt a technique that has been proven very powerful in solving similar problems [@Leung:1999rh; @Leung:1999iq; @Yang:2014tla; @Mark2014; @Yang:2014zva; @Zimmerman2014pr]. Namely, we enlist a suitable bilinear form to project the equation onto individual modes. For later convenience, we define $\phi = r^5 \Psi$, so that Eq. (\[eq:Psieq\]) takes the form $$\label{eq:phiEqlinear} \left( \frac{f}{r} \phi'\right)'-k^2 \frac{\phi}{r^5}+\frac{7}{r^4} \dot \phi+2 r^2 \left(\frac{\dot \phi}{r^5}\right)'=0\,.$$ Fourier transforming the wave operator in $v$, we define $$\label{eqphih} H_\omega \phi \equiv \left( \frac{f}{r} \phi'\right)'-k^2 \frac{\phi}{r^5}-\frac{ 7i \omega }{r^4} \phi-2 i \omega r^2 \left( \frac{\phi}{r^5}\right)'\,.$$ We also define a generalized inner product, $$\label{eq:InnerProd} \langle \chi | \eta \rangle = \int^\infty_1 dr \chi \, \eta\,.$$ The operator $H_\omega$ is not symmetric under this bilinear form, i.e., $\langle \chi | H_{\omega} \eta \rangle \neq \langle H_{\omega} \chi | \eta \rangle $, because of the fourth term in $H_{\omega}$. However, in the hydrodynamic limit ($\tilde{k} \ll 1$) this term is neglected, so is suitable for our purpose of comparing to the dual fluid. 
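As an aside, the statement in the previous subsection that $k^2=4i\omega$ reproduces both the Kovtun-Starinets result and the fluid shear dispersion relation can be checked with a few lines of arithmetic. The sketch below uses an illustrative wave number only; the value $\eta/\rho_0=1/3$ follows from $d=4$ and $T=1/\pi$:

```python
import numpy as np

# Gravity side (Sec. 4 scaling b = 1, so T = 1/pi, d = 4):  k^2 = 4 i omega.
d, T = 4, 1/np.pi
k = 0.05                                   # representative small wave number, k_tilde << 1
omega = -1j*k**2/4

# Coordinate conversion: x = x_tilde/2 and t = t_tilde/2, hence k = 2 k_tilde, omega = 2 omega_tilde.
k_t, omega_t = k/2, omega/2
print(omega_t, -1j*k_t**2/2)               # omega_tilde = -i k_tilde^2 / 2

# Fluid side: omega_s = -i (d-1)/d (eta/rho_0) k^2, with eta = s/(4 pi), s = A T^{d-1},
# rho_0 = (d-1) A T^d / d, so eta/rho_0 = d / (4 pi (d-1) T).
eta_over_rho = d/(4*np.pi*(d - 1)*T)       # = 1/3 for d = 4, T = 1/pi
omega_fluid = -1j*(d - 1)/d*eta_over_rho*k**2
print(omega, omega_fluid)                  # both equal -i k^2/4
```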
For completeness, we note that should the need arise for the study of perturbations of higher overtones away from the hydro limit, we may use an alternative bilinear form (dependent on $\omega$) with respect to which $H_{\omega}$ is symmetric so that $\langle \chi | H_{\omega} \eta \rangle_{\omega} = \langle H_{\omega} \chi | \eta \rangle_{\omega}$. In this case, $$\begin{aligned} \label{eq:InnerProd2} \langle \chi | \eta \rangle_{\omega} &=& \int^\infty_1 dr g_\omega(r ) \chi \, \eta\,,\quad\text{with} \\ \log g_\omega(r )&=&-i \omega \left ( \arctan r+\frac{1}{2}\log \frac{1-r}{1+r}\right )+{\rm const}\,,\nonumber\end{aligned}$$ is the unique option. There is one $g_{\omega}$ for each $\omega$, so we have a family of such generalized inner products. Using $g_{\omega}$ to project onto the mode with frequency $\omega$ \[followed by a diagonalization procedure as per the discussion above Eq. \] is a natural choice, and indeed leads to agreement with the Green’s function method for projecting modes (see Appendix \[sec:schwarz\]). In any case, to $O(k^2)$, these generalized inner products reduce to Eq. (\[eq:InnerProd\]). For the purpose of the time-domain analysis in the next section, we expect the effect of non-hydrodynamical modes \[see Eq.  below\] and the excitation of residual parts[^4] to be at least $O(k)$. Therefore only the hydrodynamical modes are important to leading order and we shall adopt the generalized inner product for calculations, as it is easier to implement in the time-domain analysis. As an example, we show below that this inner product generates the correct leading order (in $k$) frequency in the eigenvalue analysis. Let us now consider a simple example that demonstrates the essence of how to utilize this inner product to carry out perturbation studies. Suppose we perturb $k$ to $k+\epsilon \delta k$ ($\epsilon \ll 1$) and ask for the change of $\omega$. On the one hand, based on the dispersion relation $\omega= -i k^2/4$, we immediately know that $\delta \omega = -i k \delta k/2$. On the other hand, we can arrive at the same conclusion through a perturbation analysis of the eigenvalue problem defined by Eq. . The change $k \rightarrow k+\epsilon \delta k$ causes $H $ to pick up an extra term, $-2 \epsilon k \delta k/r^5$. We expect both the eigenfrequency and the eigenfunction to also change to order $\epsilon$, $$\begin{aligned} & \phi \rightarrow \phi+\epsilon \phi^{(1)}+ O(\epsilon^2) \,, \nonumber \\ & \omega \rightarrow \omega + \epsilon \delta \omega +O(\epsilon^2)\,.\end{aligned}$$ Plugging into the wave equation Eq. , and projecting both sides onto $\phi$ while keeping only the $O(\epsilon)$ terms, we can eliminate the unknown function $\phi^{(1)}$ to obtain $$\begin{aligned} - i \delta \omega = & 2 k \delta k \frac{\langle \phi | 1/r^5 \phi \rangle}{ \langle \phi | 7\phi/r^4 +2r^2(\phi/r^5)' \rangle}+O(k^2) \nonumber \\ \approx &- \frac{k}{2} \delta k\,,\end{aligned}$$ which is consistent with our expectation. We note that it was necessary in this analysis to use the symmetry property of $H_\omega$ to eliminate terms involving $\phi^{(1)}$. Although somewhat excessive for this simple problem, we see that with the help of our generalized inner product, it is now possible to carry out a perturbation analysis in a manner analogous to the application of perturbation theory in quantum mechanics [@Shankar] (for a direct mapping of a wave equation with outgoing boundary condition into a Schrödinger equation with non-Hermitian Hamiltonian, see [@Leung:1998]).
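The two radial integrals entering this $O(k)$ example can also be evaluated numerically as a quick sanity check (a sketch; $\chi_0$ is taken constant, as appropriate for the hydro mode at leading order):

```python
import numpy as np
from scipy.integrate import quad

# Hydro-limit wave function: phi = r^5 Psi_0 = const; take chi0(r) = 1 on r in [1, infinity).
chi0  = lambda r: 1.0
dchi0 = lambda r: 0.0

# Generalized inner product: <chi|eta> = int_1^inf dr chi*eta, with no complex conjugation.
num, _ = quad(lambda r: chi0(r)*chi0(r)/r**5, 1, np.inf)                # <chi0 | chi0/r^5>
den, _ = quad(lambda r: chi0(r)*(7*chi0(r)/r**4
                                 + 2*r**2*(dchi0(r)/r**5 - 5*chi0(r)/r**6)), 1, np.inf)

k, dk = 0.05, 0.001
domega = 1j*2*k*dk*num/den        # first-order frequency shift from the projected perturbation
print(domega, -1j*k*dk/2)         # both ~ -i k dk / 2, consistent with omega = -i k^2/4
```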
Nonlinear analysis {#sec43} ------------------ We are now in a position to move beyond the linear level and study the second order (nonlinear) Einstein equation (\[eq:htruncated\]). We begin by considering its projection onto the shear sector with spatial dependence $e^{ikx}$ and spatial polarization $\alpha=z$ (see Sec. \[sec:gravitylinear\]). (It is straightforward to perform this projection onto a Fourier basis element with an ordinary inner product. The nontrivial aspect is the subsequent projection onto the hydro mode.) The non-vanishing $vz$ and $rz$ components of the Einstein equation take the form $$\begin{aligned} \label{eqrs} & 5 r \frac{\partial H_{vz}}{\partial r}+i k \frac{\partial H_{zx}}{\partial r}+r^2\frac{\partial^2 H_{vz}}{\partial r^2}=\tau_{rz} \,,\nonumber \\ &k^2 H_{vz}-5 r^3 f \frac{\partial H_{vz}}{\partial r}-r^4 f \frac{\partial^2 H_{vz}}{\partial r^2}+i k\frac{\partial H_{zx}}{\partial v}-r^2\frac{\partial^2 H_{vz}}{\partial v\partial r} \nonumber \\ &=\tau_{vz}\,.\end{aligned}$$ We have formally written the nonlinear terms as “sources” on the right hand side of the equation. At quadratic order the nonlinear terms are $$\label{eq:fourierprojection} \tau_{rz} \equiv -\langle e^{- i k x}, 2 R^{(2)}_{rz} \rangle, \quad \tau_{vz} \equiv -\langle e^{- i k x}, 2 R^{(2)}_{vz} \rangle\,.$$ The inner product $\langle\cdot,\cdot\rangle$ is the ordinary inner product over the boundary spatial coordinates. Equation  is simply  with nonlinear terms included, and a simple switch of coordinates $x \leftrightarrow z$. Since the second order Ricci tensor is a quadratic function of the metric perturbation, which can be expanded over Fourier modes (and scalar, sound, shear sectors), the projection  enforces a wave number matching condition on the terms that can contribute to the right hand side of . Namely, modes with wave numbers ${\bf p}$ and ${\bf q}$ can only act as a source for mode ${\bf k}$ if ${\bf p}+{\bf q}={\bf k}$ (see Fig. \[fig:trig\]). \[This of course also holds for the fluid analysis in .\] We define the angles $\theta_1 \equiv \arccos(\hat{q} \cdot \hat{k})$ and $\theta_2 \equiv \arccos(\hat{p} \cdot \hat{k})$. ![An illustration of three wave numbers satisfying the “momentum matching” condition.[]{data-label="fig:trig"}](trig.png){width="0.66\columnwidth"} Following the same procedure as in the linear analysis, we re-write Eq.  in the form of a sourced version of Eq. , $$\begin{aligned} \label{eqweqs} &&\left(\frac{f}{r} \phi'\right)'-k^2 \frac{\phi}{r^5}+\frac{7}{r^4} \dot \phi+2 r^2 \left( \frac{\dot \phi}{r^5}\right)' \nonumber \\ &=& - \dot \tau_{rz} - \tau_{vz}' \equiv S_{\rm in}\,.\end{aligned}$$ Since only first order time derivatives appear in this wave equation and we know from the previous subsection that the quasinormal frequency is purely imaginary, this shear hydrodynamic mode belongs to the class described by Eq. . We now proceed to compute the nonlinear source $S_j$ \[see \] using the generalized inner product of Sec. \[sec:innerprod\]. As in Sec. \[sec:ModeExp\], we first express the field $\phi$ as a sum over radial modes $$\label{eq:phiansatz} \phi \to A_0(t)e^{-i\omega_0 v} \chi_0(r )+\sum_{j>0} A_j(t) e^{-i \omega_j v} \chi_j(r ) +O(k)\,,$$ but we allow for the modes to have additional time dependence through the mode amplitudes $A_j$. Here the spatial wavefunctions are denoted $\chi_j$, with $j=0$ corresponding to the hydro mode. The non-hydro modes all have frequencies $\omega_j=O(1)$, while $\omega_0=-ik^2/4$.
While $\phi$ is to be matched to modes of the $4$-velocity $u^\mu$ on the fluid side, we normalize the wave function $$\chi_0=4$$ accordingly [^5]. The $O(k)$ appearing in the expression for $\phi$ includes the residual contribution under the hydrodynamical approximation (see Footnote \[fn5\]). We can now plug into the wave equation , and then take the generalized inner product of both sides with $\chi_0$ using . Within this computation, the effect of the non-hydrodynamical terms is at least $O(k)$ \[in fact $O(k^2)$\] as we claimed in Sec. \[sec:innerprod\]. This is because $e^{-i\omega_j v}\chi_j$ solves the linear equation , so for $j>0$ $$\begin{aligned} \label{eq:NonHy} && \omega_j \left\langle \chi_0 \left| \frac{7}{r^4}\chi_j +2r^2\left(\frac{\chi_j}{r^5}\right.\right)' \right\rangle \nonumber \\ &=& - \left\langle \chi_0 \left| \left( \frac{f}{r} \chi_j'\right )'-k^2 \frac{\chi_j}{r^5} \right.\right\rangle \nonumber \\ &=& -\left\langle \left( \left. \frac{f}{r} \chi_0' \right)' - k^2 \frac{\chi_0}{r^5} \right| \chi_j \right\rangle \nonumber \\ &=& \omega_0 \left\langle \left. \frac{7}{r^4}\chi_0 +2r^2\left(\frac{\chi_0}{r^5}\right)' \right| \chi_j \right\rangle \nonumber \\ &=& O(k^2)\,.\end{aligned}$$ Given this observation, it is now simple to show that $$\begin{aligned} \label{eqdrive} \dot A &\approx& \frac{1}{4} \frac{\langle \chi_0 | S_{\rm in} \rangle e^{i\omega_0v}}{ \langle \chi_0 | 7\chi_0/r^4 +2r^2(\chi_0/r^5)' \rangle} \nonumber \\ &\approx& -\frac{1}{4} \tau_{vz} |_{r=1} \,,\end{aligned}$$ where we dropped high order \[$O(k^2)$ and higher\] terms in $k$, including nonlinear terms containing time derivatives (as discussed in Sec. \[sec:ModeExp\]). Using Eq. , the mode expansion of $h_{\mu\nu}$, and after some lengthy but nevertheless straightforward calculations, one can show that the shear-shear mode coupling coefficient arising from  is $$\begin{aligned} \kappa_{kpq} &=& i k \sin (\theta_2-\theta_1) \,,\end{aligned}$$ which agrees with its fluid counterpart obtained from Eq.  $$\label{eqkpq} \kappa_{kpq} = i [\hat {\bf u}_{s'}({\bf p},t) \cdot {\bf q}][\hat {\bf u}_s ({\bf k},t)\cdot \hat {\bf u}_{s''}({\bf q},t) ] +({\bf p} \leftrightarrow {\bf q})\,.$$ We end this section by noting that the agreement between the mode coupling coefficients inferred from the fluid equations and the AdS black brane perturbation theory relies on the fact that they are computed using the same mode basis, and that the comparison is made in the regime where $\tilde{k} \ll 1$ and $|h| \ll 1$ (cf. Fig. \[fig:comparison\]). However, the coupled oscillator model is applicable more broadly. Conclusions {#conclusion} =========== The study of nonlinear wave phenomena is undoubtedly a fascinating subject. Gaining understanding in the particular case of general relativity poses unique challenges even given the fixed speed of propagation of physical perturbations. These challenges are rooted in the covariant nature of the theory and physical degrees of freedom often hidden within a larger set of (metric) variables. These issues have hampered understanding of gravitational perturbations beyond linear order except in a few specialized regimes [@Gleiser:1998rw; @Ioka:2007ak; @Brizuela:2009qd; @Pound:2012nt; @Gralla:2012db], seemingly leaving full numerical simulations as the main tool for further progress (for a recent overview of these efforts, see [@chotuiklehnerpretorius] and references cited therein).
In the current work, we have presented a model to capture the nonlinear behavior of gravitational perturbations[^6]. This model regards the system as composed of a collection of nonlinearly coupled (damped) harmonic oscillators with characteristic (isolated) frequencies given by quasinormal modes. By construction this model reproduces standard results obtained at the linearized level. At the nonlinear level, it describes mode-mode couplings and their effect on frequency and amplitude shifts. As an illustration, we have shown how our model reproduces recent results captured through the gravity/fluid correspondence via a purely gravitational calculation. Importantly, the applicability of our formalism is not restricted to long-wavelength perturbations—as in the case of the gravity/fluid correspondence—so the coupled oscillator model can also treat so-called “fast (non-hydrodynamical) modes” of perturbed black holes [@Friess:2006kw]. As a consequence it can be employed to study a broader phenomenology than that reachable via the correspondence[^7]. We stress that our formalism is also applicable beyond asymptotically AdS spacetimes. Thus it can also help shed light on nonlinear mode generation in perturbations of asymptotically flat black hole spacetimes [@Papadopoulos:2001zf; @Zlochower:2003yh]. We thank David Radice for stimulating discussions about turbulent fluids, Vitor Cardoso for further insights into perturbations of AdS spacetimes as well as Michal Heller and Olivier Sarbach for general discussions. This work was supported in part by NSERC through a Discovery Grant and CIFAR (to LL). FZ would like to thank the Perimeter Institute for hospitality during the closing stages of this work. Research at Perimeter Institute is supported through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. Brief overview of coupled oscillator systems {#sec:Oscillators} ============================================ Consider a family of nonlinearly coupled harmonic oscillators governed by, $$\begin{aligned} \label{eqeom} &\ddot q_j +\gamma_j \dot q_j+\tilde \omega^2_j q_j \nonumber \\ &=\sum_{kl} ( \tilde{\lambda}^{(1)}_{jkl} q_k q_l + \tilde{\lambda}^{(2)}_{jkl} \dot q_k q_l+ \tilde{\lambda}^{(3)}_{jkl} \dot q_k \dot q_l) \equiv S_j\,, \end{aligned}$$ where $\tilde \omega^2_j q_j$ is the restoring force and $\gamma_j$ is the damping coefficient. Each oscillator’s displacement can be decomposed in the same way as Eq. , with $\omega_j$ satisfying $$-\omega^2_j - i \gamma_j \omega_j+\tilde \omega^2_j=0\,.$$ In the presence of nonlinear mode-mode coupling ($\tilde{\lambda}^{(n)}_{jkl}\ne0$), $A_j$ and $B_j$ are both time-dependent. In fact, we can take one more time derivative of the first equation in Eq. (\[eqab\]), and obtain $$\begin{aligned} &(\dot A_j - i \omega _j A_j ) e^{- i \omega_j t} = \frac{1}{\omega_j+\omega^*_j} (\omega^*_j \dot q_j+i \ddot q_j) \nonumber \\ & =\frac{i S_j}{\omega_j+ \omega^*_j} +\left (\frac{\dot q_j \omega_j^*}{\omega_j+\omega^*_j} +\frac{\gamma_j \dot q_j+\tilde \omega^2_j q_j }{i (\omega^*_j+\omega_j)} \right )\,, \nonumber \\ & = \frac{i S_j}{ \omega_j+ \omega^*_j} - \frac{i \omega_j}{\omega_j+\omega^*_j} (\omega^*_j q_j+i \dot q_j)\,,\end{aligned}$$ such that $$\dot A_j = \frac{i S_j}{ \omega_j+ \omega^*_j} e^{i \omega_j t}\,,$$ and similarly $$\dot B_j =- \frac{i S_j}{ \omega_j+ \omega^*_j} e^{- i \omega^*_j t}\,.$$ These effective equations of motion have the same kind of first-order form as Eq.  and Eq.
, which means that one can utilize results from previous studies on nonlinear coupled oscillators to analyze nonlinear gravitational interactions. Two-dimensional incompressible fluid in the inertial regime \[sec:Fluid\] ========================================================================= Here we review the Navier-Stokes equation for a two-dimensional incompressible fluid. This discussion highlights how a new symmetry for the mode-mode coupling coefficient arises in the mode-expansion picture. Such symmetry is critical for the double-cascading (inverse energy and direct enstrophy cascades) behavior in two-dimensional fluids. A more detailed discussion can be found in Ref. [@Kraichnan1967]. The Navier-Stokes equation for an incompressible fluid in the spatial-frequency domain reads $$\begin{aligned} &\left (\frac{\partial}{\partial t}+\nu k^2 \right ) u_j({\bf k}, t) \nonumber \\ &= i k_l P_{jn}({\bf k})\sum_{{\bf p}+{\bf q}={\bf k}} u_n({\bf p}, t) u_l({\bf q}, t)\, \nonumber \\ &=\frac{i k_l P_{jn}({\bf k})}{2}\sum_{{\bf p}+{\bf q}={\bf k}} [u_n({\bf p}, t) u_l({\bf q}, t) + u_n({\bf q}, t) u_l({\bf p}, t)]\, \end{aligned}$$ where ${\bf u}({\bf x}, t) = \sum_{{\bf k}} e^{i {\bf k} \cdot {\bf x}} {\bf u}({\bf k},t)$ and $P_{jn}({\bf k}) \equiv \delta_{jn} -k_j k_n/k^2$. In incompressible fluids, the condition $\nabla \cdot {\bf u}=0$ translates to ${\bf k} \cdot {\bf u}({\bf k}, t)=0$ in the Fourier domain. We can write ${\bf u}({\bf k}, t)$ as $${\bf u}({\bf k}, t) =A({\bf k},t) {\hat u}({\bf k}, t) \,,$$ where $\hat u({\bf k},t )$ satisfies $\hat u\cdot {\bf k} =0$ and $\hat u\cdot \hat u=1$. In $2+1$ fluids, $\hat u$ is unique for any ${\bf k}$. Using the new variables, the Navier-Stokes equation can be rewritten as $$\begin{aligned} \left (\frac{\partial}{\partial t}+\nu k^2 \right ) A({\bf k}, t) &= i \sum_{{\bf p}+{\bf q}={\bf k}} \kappa({\bf k},{\bf p},{\bf q})A({\bf p}, t) A({\bf q}, t)\, \nonumber \\ &=i \sum_{{\bf p}+{\bf q}={\bf k}}\{[\hat u({\bf k}, t) \cdot \hat u({\bf p},t) ] [{\bf k} \cdot \hat u({\bf q},t)]+[\hat u({\bf k},t) \cdot \hat u({\bf q},t)] [{\bf k} \cdot \hat u({\bf p},t)]\}A({\bf p}, t) A({\bf q}, t) \,. \end{aligned}$$ This is the same as the shear-shear coupling term in Eq. , which is already written in a form consistent with the coupled oscillator model. In the inertial regime we shall set the viscosity coefficient $\nu$ to zero (as this coefficient only governs the extent of the regime but not the behavior within it) and recall that ${\bf u}({\bf x}, t)$ must be real. One can then show that $$\begin{aligned} -\frac{\partial [u_j({\bf k}, t) u^*_j({\bf k}, t)]}{\partial t}= \sum_{{\bf p}+{\bf q}+{\bf k}=0}{\rm Im}\{[{\bf u}({\bf k},t) \cdot {\bf u}({\bf p},t) ] [{\bf k} \cdot {\bf u}({\bf q},t)]+[{\bf u}({\bf k},t) \cdot {\bf u}({\bf q},t) ] [{\bf k} \cdot {\bf u}({\bf p},t)]\} \nonumber \\ \equiv \sum_{{\bf p}+{\bf q}+{\bf k}=0} {\rm Im}[\kappa({\bf k},{\bf p},{\bf q}) A({\bf p},t) A({\bf q},t ) A({\bf k},t)]\,.\end{aligned}$$ Energy conservation requires that $$\frac{\partial [u_j({\bf k}, t) u^*_j({\bf k}, t)]}{\partial t} +\frac{\partial [u_j({\bf p}, t) u^*_j({\bf p}, t)]}{\partial t} +\frac{\partial [u_j({\bf q}, t) u^*_j({\bf q}, t)]}{\partial t} =0\,,$$ which is equivalent to demanding $$\kappa({\bf k},{\bf p},{\bf q})+\kappa({\bf q},{\bf k},{\bf p})+ \kappa({\bf p},{\bf q},{\bf k})=0$$ for any vectors ${\bf k}$, ${\bf p}$, and ${\bf q}$ satisfying ${\bf p}+{\bf q}+{\bf k}=0$.
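This constraint, together with the additional enstrophy-weighted identity derived in the following paragraph, is easy to confirm numerically from the explicit expression for $\kappa$. A minimal sketch with randomly drawn triads (the sign convention chosen for $\hat u$ drops out of both sums):

```python
import numpy as np

rng = np.random.default_rng(1)

def uhat(k):
    # unit vector orthogonal to k in 2D (rotation by 90 degrees); the overall sign
    # convention cancels out of the symmetry sums checked below
    return np.array([-k[1], k[0]]) / np.linalg.norm(k)

def kappa(k, p, q):
    # mode-mode coupling of the 2D incompressible Navier-Stokes equation in Fourier space
    return (uhat(k) @ uhat(p)) * (k @ uhat(q)) + (uhat(k) @ uhat(q)) * (k @ uhat(p))

for _ in range(5):
    k = rng.standard_normal(2)
    p = rng.standard_normal(2)
    q = -(k + p)                                        # triad condition  k + p + q = 0
    c1, c2, c3 = kappa(k, p, q), kappa(q, k, p), kappa(p, q, k)
    energy    = c1 + c2 + c3                            # energy-conservation symmetry
    enstrophy = (k @ k)*c1 + (q @ q)*c2 + (p @ p)*c3    # extra 2D (enstrophy) symmetry
    print(f"{energy:+.2e}  {enstrophy:+.2e}")           # both vanish to machine precision
```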
It is straightforward to check that the above relation is automatically satisfied given the expression for $\kappa({\bf k},{\bf p},{\bf q})$. Moreover, for $2+1$ fluids, by using the fact that ${\bf k} \cdot {\bf u}({\bf k}, t)={\bf p} \cdot {\bf u}({\bf p}, t)={\bf q} \cdot {\bf u}({\bf q}, t)=0$ and the identity $$\sin^3\theta_1\cos(\theta_2-\theta_3)+\sin^3\theta_2\cos(\theta_3-\theta_1)+\sin^3\theta_3\cos(\theta_1-\theta_2)=0\,,$$ valid for all $\theta_1+\theta_2+\theta_3=\pi$, we can show that an additional symmetry for the mode-mode coupling exists, which is $$k^2 \kappa({\bf k},{\bf p},{\bf q})+q^2\kappa({\bf q},{\bf k},{\bf p})+p^2 \kappa({\bf p},{\bf q},{\bf k})=0\,.$$ This additional symmetry is directly connected with the additional conserved quantity in $2+1$ fluids: enstrophy. With two conserved quantities in the inertial regime, Kraichnan [@Kraichnan1967] explained that a dual-cascading behavior should be expected in the turbulent regime. This example strongly suggests that the symmetry of the mode-mode coupling coefficients in our coupled oscillator model could be crucial for classifying the nonlinear behavior of gravitational evolutions. Expansion in two different bases {#sec:TwoBasis} ================================ Let us imagine a simple example of a scalar field whose perturbations propagate on a 2-dimensional flat spacetime with time-like boundaries at $x=0$ and $x=1$. For comparison purposes, we have assigned two coordinate systems in this spacetime: standard Cartesian coordinates $(t, x)$ and “null” $(v, x)$ coordinates, with $v \equiv t+x$. For simplicity, we impose Dirichlet boundary conditions $\Phi |_{x=0}=\Phi |_{x=1} =0$ for the wave. At linear order, the scalar wave satisfies the following wave equation $$(-\partial^2_t+\partial^2_x)\Phi=0\,,$$ in the $(t, x)$ coordinate system or $$(\partial^2_x+2\partial_v\partial_x)\Phi=0\,,$$ in the $(v, x)$ coordinate system. Based on the wave equation and the boundary conditions, we can see that this is a standard Sturm-Liouville problem, where it is straightforward to write down the solutions of the wave equation in a mode expansion $$\Phi(t,x) = \sum_j \left(A_j e^{- i \omega_j t} +B_j e^{i \omega_j t}\right) \sin (j\pi x)\,,$$ and $$\Phi(v,x) = \sum_j \left(\tilde A_j e^{- i \omega_j v} e^{i \omega_j x} +\tilde B_j e^{i \omega_j v} e^{- i \omega_j x}\right) \sin (j\pi x) \,,$$ with $\omega_j =j \pi$. It is obvious that we can match up the linear modes from the two different expansions above, and in fact we can make the identifications $$\label{eqma} A_j = \tilde A_j,\quad B_j =\tilde B_j\,.$$ ![An illustration for mode decompositions of a scalar field in a flat spacetime. At each point (such as the star in the diagram), we show two possible mode bases with respect to which to decompose the scalar wave.[]{data-label="fig:spacetime"}](spacetime.png){width="0.75\columnwidth"} Now suppose nonlinear terms ($\Phi^2$, $\Phi^3$ or even higher order) are present in the wave equations, resulting in a new solution \[$\Phi(t,x)$ or $\Phi(v,x)$\]. For such a wave, we can still choose constant-$t$ or constant-$v$ slices, and use the above spatial mode basis to perform a decomposition $$\Phi(t,x) = \sum_j \left[A_j(t) e^{- i \omega_j t} +B_j(t) e^{i \omega_j t}\right] \sin (j\pi x)\,,$$ in the $(t, x)$ coordinates, and $$\Phi(v,x) = \sum_j \left[\tilde A_j(v) e^{- i \omega_j v} e^{i \omega_j x} +\tilde B_j(v) e^{i \omega_j v} e^{- i \omega_j x}\right] \sin (j\pi x)$$ in the $(v, x)$ coordinates.
We note that the mode amplitudes are generically time-dependent now. Pick an arbitrary point in the spacetime (for example, the one labeled with a “star” in Fig. \[fig:spacetime\]). There we can ask whether the matching described in Eq.  still holds for the two different mode expansions at that point. As we can see from Fig. \[fig:spacetime\], these two mode expansions sample two different slices of the spacetime: one at constant $t$ and the other at constant $v$. Unlike the linear case, the scalar wave distributions on these two slices can be made quite “independent” of each other by freely detuning the nonlinear terms in the wave equations. In the end, the largely independent data on these two slices imply that simple mappings such as Eq.  no longer exist for mode expansions under different bases in the general nonlinear scenario. However, we emphasize that despite the lack of a simple mapping between them, both mode expansions are equally valid in describing the wave evolution. Although our present analysis is performed using this simple example where the mode expansion is complete, we see no reason why a similar conclusion would not hold for quasinormal mode expansions of generic spacetimes. Coupled oscillator model in Schwarzschild spacetime {#sec:schwarz} =================================================== As discussed in Sec. \[sec2\], generic linear metric perturbations can be decomposed into quasinormal modes plus a residual part. Unless we are dealing with normal modes which form a complete basis, or under certain physical conditions in which quasinormal modes dominate (e.g., AdS perturbations in the hydrodynamical limit), ignoring the contribution from the residual part should always require justification. Here we offer an alternative way of arriving at the coupled oscillator model, using the Green’s function approach (see also [@Barranco:2013rua]). Using this method, the quasinormal mode excitations can be unambiguously determined given a driving source term. So far this approach can only be demonstrated for perturbations with separable wave equations, such as Schwarzschild and Kerr perturbations, and we shall leave extensions to more general spacetimes to future studies. To simplify the problem, we assume that the angular dependence has been factored out, and we focus on the nonlinear evolution of modes with spherical harmonic indices $(l, m)$, which satisfy the Regge-Wheeler (odd parity) and Zerilli-Moncrief (even parity) wave equations $$\left [ -\frac{\partial^2}{ \partial t^2}+\frac{\partial^2}{\partial r^2_*}-V_{\rm e/o}(r )\right ] \Psi_{\rm e/o}=S_{\rm e/o}(r,t)\,.$$ Here $r_* \equiv r+ 2M \log[r/(2M)-1]$ and $\Psi_{\rm e}, \Psi_{\rm o}$ are the Zerilli-Moncrief and Regge-Wheeler gauge invariant quantities, respectively. The expressions for the potential $V_{\rm e/o}$ and angular-projected source $S_{\rm e/o}$ can be found in [@Martel2005; @Yang:2014ae]. In our present study, $S_{\rm e/o}$ is defined by the second order Ricci tensor, which is bilinear in the metric perturbations. Without the source term, for fixed time dependence $e^{-i \omega t}$ there are two independent solutions to each wave equation. One solution asymptotes to $$u_{\rm in} \rightarrow e^{-i \omega (t+r_*)},\quad r_* \rightarrow -\infty$$ near the event horizon, and $$u_{\rm in} \rightarrow C_{\rm in}(\omega) e^{-i \omega (t+r_*)}+C_{\rm out}(\omega)e^{-i \omega(t-r_*)},\quad r_* \rightarrow \infty$$ at spatial infinity.
The other solution satisfies $$u_{\rm out} \rightarrow e^{-i \omega (t-r_*)},\quad r_* \rightarrow \infty$$ at spatial infinity, and $$u_{\rm out} \rightarrow \tilde{C}_{\rm in}(\omega) e^{-i \omega (t+r_*)}+\tilde{C}_{\rm out}(\omega)e^{-i \omega(t-r_*)},\quad r_* \rightarrow -\infty$$ near the horizon. At the quasinormal mode frequencies $\omega_n$, these two solutions become degenerate, and $C_{\rm in}(\omega_n)=\tilde{C}_{\rm out}(\omega_n)=0$. Using the Green’s function technique, Leaver [@Leaver1986] showed that $\Psi$ can be decomposed as $$\Psi = \Psi_{\rm QNM} + \Psi_{\rm F}+\Psi_{\rm BC}\,,$$ where $\Psi_{\rm F}$ is the contribution from the high-frequency propagator, $\Psi_{\rm BC}$ is the branch-cut contribution in the Green’s function calculation, and $\Psi_{\rm QNM}$ is the quasinormal mode contribution that we seek. In addition, he showed that $$\begin{aligned} \label{eqgreen} \Psi_{\rm QNM}(r,t) &= &2{\rm Re}\left [ \sum_n \frac{u_{\rm in}(r )e^{-i \omega_n t}}{D_n} \int^t_{-\infty}d t' \int^\infty_{-\infty}d r_*' \right . \nonumber \\ &&\quad\left . e^{i \omega_n t'} u_{\rm in}(r' ) S(r', t')\right ]\,,\end{aligned}$$ with $$D_n \equiv 2 \omega_n \left .\frac{d C_{\rm in}}{d \omega} \right |_{\omega_n}C^{-1}_{\rm out} (\omega_n)\,.$$ Notice that we are taking the real part because this QNM contribution is supposed to sum over both positive and negative frequencies. Also note that in order to maintain causality, we have introduced an upper bound $t$ into the time integral of Eq. \[eqgreen\], while in the original paper [@Leaver1986] this bound was set to $\infty$ (see also [@Andersson:1996cm]). From Eq. \[eqgreen\], it is then straightforward to derive the equations of motion for the amplitude of mode $n$ $$\dot A_n(r, t) = \frac{e^{i \omega_n t}}{D_n} \int dr'_* u_{\rm in}(r') S(r', t) \equiv \frac{e^{i \omega_n t}}{D_n} \langle u_{\rm in} | S \rangle_{\text{CI}}\,,$$ where the integration should be performed as a contour integral in the complex $r'$ plane to ensure convergence [@Yang:2014zva]. Interestingly, when we apply this Green’s function technique to analyze generation of the shear quasinormal modes in Sec. \[sec4\] (as the wave equation is separable), we find that the generalized inner product $\langle\cdot |\cdot \rangle_{\rm CI}$ coincides with $\langle\cdot |\cdot \rangle_\omega$ defined in Eq. . [^1]: Due to a crucial difference: the Einstein equation is linearly degenerate as opposed to truly nonlinear as is the case of e.g., the Navier-Stokes equations. [^2]: In analogy to hydrodynamics, it is of course necessary to be in the regime of high [*gravitational Reynolds number*]{}. [^3]: This qualification is represented by the use of the “$\sim$” notation in  (see, e.g., [@Kokkotas1999]). [^4]: \[fn5\]The prompt piece of the residual can be intuitively understood as the source terms propagating on the light-cone. Also notice that the source terms, as represented by Eq.  or Eq. , are linear in the hydrodynamical momentum, so overall the source terms are of $O(k)$, as is the excitation amount of the prompt residual. [^5]: This is of course just an inconsequential overall constant rescaling of $A$, the more important goal is to match the angular dependence of the coupling constants. [^6]: In this work we have included up to three-mode interactions, but the formalism can be extended to include higher order interactions. [^7]: Recently, resummation techniques have been proposed to take some of these higher modes into account within an extended hydrodynamical description [@Heller:2013fn].
This requires knowledge of the hydrodynamical expansion to very large orders.
--- abstract: 'Wide-field images obtained with the 3.6 meter Canada-France-Hawaii Telescope are used to investigate the spatial distribution and photometric properties of the brightest stars in the disk of M81 (NGC 3031). With the exception of the central $\sim 2$ kpc of the galaxy and gaps between CCDs, the survey is spatially complete for stars with $i' < 24$ and major axis distances out to 18 kpc. A more modest near-infrared survey detects stars with $K < 20$ over roughly one third of the disk. Bright main sequence (MS) stars and red supergiants (RSGs) are traced out to galactocentric distances of at least 18 kpc. The color of the RSG locus suggests that Z = 0.008 when R$_{GC} > 6$ kpc, and such a radially uniform RSG metallicity is consistent with \[O/H\] measurements from HII regions. The density of bright MS stars and RSGs drops when R$_{GC} < 4$ kpc, suggesting that star formation in the inner disk was curtailed within the past $\sim 100$ Myr. The spatial distribution of bright MS stars tracks emission at far-ultraviolet, mid- and far-infrared wavelengths, although tidal features contain bright MS stars but have little or no infrared flux. The specific frequency of bright MS stars and RSGs, normalized to $K-$band integrated brightness, increases with radius, indicating that during the past $\sim 30$ Myr the specific star formation rate (SSFR) has increased with increasing radius. Still, the SSFR of the M81 disk at intermediate radii is consistent with that expected for an isolated galaxy as massive as M81, indicating that the star formation rate in the disk of M81 has not been markedly elevated during the past few tens of millions of years. The stellar content of the M81 disk undergoes a distinct change near R$_{GC} \sim 14$ kpc; the $K-$band light profile, which is dominated by old and intermediate age stars, breaks downward at this radius, whereas the density profile of young stars flattens, but does not break downwards. Thus, the luminosity-weighted mean age decreases with increasing radius in the outer regions of the M81 disk.' author: - 'T. J. Davidge' title: The Stellar Disk of M81 --- INTRODUCTION ============ The formation of spiral galaxy disks is likely a long-term process that continues to the present day. Bekki & Chiba (2001) model the formation of a Galaxy-like system, and find that while a central spheroid forms from the merger of dominant clumps $\sim 8$ Gyr in the past, a stable gas disk only forms $\sim 2$ Gyr later. Disks that formed at moderate to high redshift may continue to grow to the present day, as fresh material is accreted (e.g. Trujillo & Pohlen 2005; Barden et al. 2005). Naab & Ostriker (2006) model the infall of material onto the Galactic disk, and find that the disk grows outward with time, with a rate of growth at the present day of $\sim 1$ kpc Gyr$^{-1}$. That disk size increases with time is consistent with observations of galaxies spanning a range of redshifts (Trujillo & Aguerri 2004; Trujillo & Pohlen 2005). Large-scale radial trends are likely imprinted in disks during their assembly. While secular processes will blur these trends on timescales of many crossing times (e.g. Sellwood & Binney 2002; Roskar et al. 2008), mergers and galaxy-galaxy interactions may have an even greater and more rapid impact on the distribution of stars and gas. There is a high merger rate among spiral galaxies at intermediate redshifts (Hammer et al.
2005) and, depending on the orbital geometry, the accretion of a minor satellite may have a profound impact on the structural properties of disks, causing rings, warps, and a vertical morphology that can be loosely interpreted in the context of thin and thick disk components (e.g. Read et al. 2008; Kazantzidis et al. 2008). Galaxy-galaxy interactions may drive gas into the central regions of galaxies (e.g. Mihos & Hernquist 1994), resulting in centrally-concentrated star-forming activity, and the choking of star formation at large radii, as may have happened in M82 (Davidge 2008a). While major mergers will have a large impact on the properties of the host systems, these may not permanently obliterate disks if the merging systems are gas-dominated, as gas disks can re-form after a merger (e.g. Robertson et al. 2006; Governato et al. 2007). Based largely on the angular momentum of disks and the chemical composition of stars at large radii, Hammer et al. (2007) argue that the majority of nearby spirals experienced major mergers within the past $\sim 6$ Gyr, and that the Galaxy has apparently escaped such activity. While the main period of mass assembly in the Galaxy may then have terminated some 8 - 10 Gyr in the past (e.g. Zentner & Bullock 2003; Bullock & Johnston 2005), the star-forming history of the solar neighborhood shows evidence of a possible accretion event during intermediate epochs (Cignoni et al. 2006), suggesting that the Galactic disk may not have completely escaped interactions with companions. Located at a distance of only 3.9 Mpc, the SA(s)ab galaxy M81 is an important laboratory for probing the impact of interactions on the disk of a large spiral galaxy. Dynamical arguments (Yun, Ho, & Lo 1994) and the stellar content of M82 (e.g. de Grijs, O’Connell, & Gallagher 2001; Mayya et al. 2006) indicate that M81 and M82 interacted a few hundred Myr ago, when M82 appears to have passed through the disk of M81 (Yun et al. 1994). Streams of HI link M81, M82, and NGC 3077 (e.g. Brouillet et al. 1991; Yun et al. 1994; Boyce et al. 2001). The debris field between these galaxies is an area of recent star formation (e.g. de Mello et al. 2008), and diffuse ensembles of young stars are present (Durrell et al. 2004; Davidge 2008b). Previous studies of the stellar content of M81 have explored only a small part of the disk. Hughes et al. (1994) discuss two WFPC2 fields that sample the disk of M81 at intermediate radii. Their color-magnitude diagram (CMD) has a broad blue plume that contains a mix of main sequence (MS) stars and blue supergiants (BSGs) and a red supergiant (RSG) sequence; the blue and red plumes both peak near M$_V \sim -7.5$. Tikhonov, Galazutdinova, & Drozdovsky (2005) probe the stellar content of M81 in 6 WFPC2 fields, and resolve red giant branch (RGB) stars in some of these. Using the RGB-tip brightness, they estimate the distance of M81 to be 3.85 Mpc, and this distance is adopted for M81 throughout this study. Tikhonov et al. (2005) also detect radial gradients in the properties of the RGB population that they attribute to a metallicity gradient. Based on four fields that sample the outer disk, they find that RGB stars have a mean metallicity \[Fe/H\] $= -0.65 \pm 0.04$. There is evidence that the interaction with M82 had an impact on the recent star-forming history of M81. Chandar, Tsvetanov, & Ford (2001) find a population of compact blue star clusters in M81, the formation of which appeared to have commenced $\sim 600$ Myr in the past.
Clusters of this nature are typically associated with elevated levels of star-forming activity, and the youngest of these have ages $\sim 6$ Myr. Swartz et al. (2003) find a population of X-ray sources in the central 2 kpc of M81 that probably belongs to the bulge. With an estimated age $\sim 400$ Myr, these sources appear to be the remnant of an elevated episode of centrally-concentrated star formation. Davidge (2006a) examines the distribution of RSGs in four disk fields at intermediate radii, and finds evidence for spatial variations in the star-forming history during the past $\sim 25$ Myr. These data suggest that star formation was distributed over a larger fraction of the disk 25 Myr in the past than at the present day, and that the northern portion of the disk has been a site of long-term star-forming activity. Most recently, Williams et al. (2008) discuss deep ACS images of a field in the outer disk of M81. When averaged over Gyr time scales, the star formation rate (SFR) in this field stayed roughly constant for a large fraction of the age of the Universe. However, a few hundred Myr ago the SFR declined, in marked contrast with the inner regions of the galaxy. A spiral arm passes through the Williams et al. (2008) field, and the SFR in the arm increased during the past few tens of Myr. Williams et al. (2008) conclude that the stars in the outer disk of M81 experienced rapid early enrichment, and that the stars that formed in the past 0.1 Gyr typically have \[M/H\] $\sim -0.4$. The studies discussed above indicate that there have been spatial and temporal variations in the star-forming history of M81 during the past few hundred Myr. Arguably of greatest interest in the context of galaxy-galaxy interactions is the increase in the central SFR during this time, and the corresponding drop in the SFR of the outer disk, likely due to the movement of gas in the galaxy. Measuring the ages and metallicities of stars across the disk of M81 will help establish the chronology of these events, and the timescales over which gas has moved since the interaction. A census of the brightest blue and red stars in M81 is a modest but logical starting point to investigate spatial variations in the star-forming history during the past $\sim 100$ Myr. Stellar structure models also suggest that the color of the RSG sequence is sensitive to metallicity, and so it can be used to gauge the metallicities of the youngest stars, and investigate possible radial trends. Efforts to study the stellar content of the outer disk of M81 are complicated by contamination from tidal debris. Some tidal companions (e.g. Makarova et al. 2002; Sabbi et al. 2008) are relatively dense stellar aggregates, and stars from these objects can be culled from the data with relative ease. However, diffuse collections of young stars that are probably in the process of tidal disruption have also been identified (e.g. Durrell et al. 2004; Davidge 2008b). While the objects identified by Durrell et al. (2004) and Davidge (2008b) are located outside of the area considered in this paper, the identification and removal of stars that might belong to low density structures of this type that lie closer to M81 is problematic. Still, a survey that covers the entire disk of M81 can be used to suppress the impact of such structures by azimuthally averaging stellar properties over the entire disk.
In the present study, the wide-field MegaCam and WIRCam imagers on the 3.6 meter Canada-France-Hawaii Telescope (CFHT) are used to investigate the photometric properties and spatial distributions of the brightest stars in the M81 disk. With the exception of the crowded central regions of the galaxy and gaps between detector elements, the MegaCam data cover the entire disk of the galaxy and the debris field between M81 and M82, sampling the brightest MS stars, BSGs and RSGs. The WIRCam data covers much of the south west quadrant of the galaxy, detecting the brightest RSGs and asymptotic giant branch (AGB) stars in this part of the galaxy. The paper is structured as follows. The observations, along with the procedures used to reduce the raw data and make the photometric measurements, are described in §2. The photometric properties of the brightest blue and red stars are discussed in §3, while the spatial distribution of the brightest stars is examined in §4. A summary and discussion of the results follows in §5. In Appendix A the old age estimate for M81 that is determined from integrated light studies is reconciled with the evidence for an elevated level of star formation in the central regions of the galaxy, while in Appendix B near-infrared measurements of spectroscopically confirmed globular clusters are presented. OBSERVATIONS & REDUCTIONS ========================= MegaCam ------- Images of a one degree$^2$ field that is centered midway between M81 and M82 were recorded with MegaCam (Boulade et al. 2003) through $r', i'$ and $z'$ filters on the night of October 23 UT 2006. The detector in Megacam is a mosaic of thirty six $2048 \times 4612$ pixel$^2$ CCDs, with 0.185 arcsec pixel$^{-1}$ sampling. A four-point square dither pattern was used to assist with the identification of bad pixels and cosmic rays. Four 300 sec exposures were recorded in $r'$ and $i'$, while four 500 sec exposures were recorded in $z'$. Stars have 0.7 – 0.8 arcsec FWHM in the final processed images, depending on the filter. Instrumental and environmental signatures were removed from the MegaCam data with the CFHT ELIXIR pipeline, which performs bias subtraction, flat-fielding, and the subtraction of a fringe frame. The ELIXIR-processed images were aligned, stacked, and then trimmed to the area that is common to all exposures. These data were used previously to examine the stellar content in and around M82 (Davidge 2008a,c), and search for diffuse stellar groupings in the debris field between M81 and M82 (Davidge 2008b). WIRCam ------ The western disk of M81 was imaged through $J, H$ and $Ks$ filters with WIRCam (Puget et al. 2004) on the night of February 4 UT 2007. The field is centered on Ho IX. The detector in WIRCam is a mosaic of four $2048 \times 2048$ HgCdTe arrays, that together image a $21 \times 21$ arcmin$^2$ field with 0.3 arcsec pixel$^{-1}$ sampling. A square-shaped dither pattern was used during data acquisition. A set of $20 \times 45$ sec exposures was recorded in $J$, while $120 \times 15$ sec exposures were recorded in $H$, and $160 \times 15$ sec exposures were recorded in $Ks$. The $J$ images were obtained over one dither cycle (i.e. five 45 sec exposures were recorded per dither position), while the $H$ and $Ks$ images were obtained over two complete dither cycles. Stars in the final images have 0.8 arcsec FWHM. The initial processing of these data was done with the CFHT I‘IWI pipeline, and this consisted of dark subtraction and flat-fielding. 
A calibration frame to remove interference fringes and thermal emission artifacts was constructed by median-combining all I’IWI-processed images obtained for this program, including those of M82 (Davidge 2008a). A low-pass clipping algorithm was applied to suppress stars and galaxies in the combined images, and the resulting calibration frames were subtracted from the flat-fielded data. The de-fringed images were aligned, stacked, and then trimmed to the area common to all exposures. Photometric Measurements ------------------------ The photometric measurements were made with the point spread function (PSF) fitting routine in ALLSTAR (Stetson & Harris 1988). The PSFs were constructed iteratively using the DAOPHOT (Stetson 1987) PSF task, with contaminating objects close to PSF stars being subtracted using progressively improved PSFs. Each PSF was typically constructed from 80 – 100 stars. Image quality varies by small amounts across the $1 \times 1$ degree MegaCam field. To mitigate the impact of these variations, the MegaCam images were divided into six 30 arcmin $\times$ 20 arcmin panels, and separate PSFs were generated for each panel. An inspection of star-subtracted images showed that the PSFs provided acceptable fits over each panel. The image quality is more stable across the smaller WIRCam field, and a single PSF was constructed in each filter using stars from all four detector elements. The photometric catalogues were culled to reject objects with large uncertainties in their measurements. Objects for which the fit error computed by ALLSTAR, $\epsilon$, exceeded $\pm 0.3$ magnitudes were rejected to remove sources near the faint limit of the data, where photometry is problematic. In addition, $\epsilon$ tends to rise monotonically towards fainter magnitudes for the majority of objects, and there are some objects for which $\epsilon$ is markedly higher than the norm at a given brightness. Such objects were also removed; they tend to be (1) galaxies, (2) multi-pixel cosmetic defects, or (3) objects in the crowded central regions of M81. Photometric zeropoints measured from standard stars observed as part of each MegaCam run are placed in the MegaCam data headers during ELIXIR processing. The photometric calibration of the M81 Megacam data used the zeropoints measured in October 2006. As for WIRCam, the calibration used zeropoints that are posted on the CFHT website, which were computed from standard star observations made in February 2007. The photometric calibration was checked using published photometric catalogues. The MegaCam photometric measurements were compared with entries in the Sloan Digital Sky Survey (SDSS) Data Release 7 (Adelman-McCarthy et al. 2009, in preparation). The differences between the MegaCam and SDSS measurements for sources with $i'$ between 17 and 19 are $\Delta i' = 0.025 \pm 0.012$, $\Delta (r'-i') = -0.001 \pm 0.015$, and $\Delta (i'-z') = 0.013 \pm 0.020$, where the uncertainties are the standard errors of the mean. The WIRCam calibration was checked using entries with $K < 14.5$ in the 2MASS Point Source Catalogue (Cutri et al. 2003). The average differences between measurements in the WIRCam and 2MASS systems are $\Delta K = -0.03 \pm 0.05$, $\Delta (J-K) = 0.01 \pm 0.06$, and $\Delta (H-K) = -0.01 \pm 0.06$. Sample completeness and the photometric scatter due to photon noise and crowding were assessed from artificial star experiments.
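For reference, the 50% completeness limits quoted below can be read off from such artificial-star experiments as the magnitude at which the recovered fraction drops below one half. The following is only a schematic sketch of that bookkeeping (assuming numpy; the injected catalog and recovery flags are placeholder toy data, not the output of the actual MegaCam/WIRCam pipeline).

```python
# Schematic estimate of a 50% completeness magnitude from an artificial-star test
# (assumes numpy; the toy inputs below stand in for the real pipeline output).
import numpy as np

def completeness_limit(m_in, recovered, edges):
    """m_in: injected magnitudes; recovered: boolean recovery flags;
    edges: magnitude bin edges.  Returns bin centers, the recovered
    fraction per bin, and the center of the first bin below 50%."""
    idx = np.digitize(m_in, edges) - 1
    nbin = len(edges) - 1
    frac = np.array([recovered[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(nbin)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    below = np.where(frac < 0.5)[0]
    m50 = centers[below[0]] if below.size else np.nan
    return centers, frac, m50

# Toy example with a completeness roll-over near i' ~ 24.5, similar to the MegaCam data.
rng = np.random.default_rng(1)
m_in = rng.uniform(20.0, 26.0, 20000)
p_recover = 1.0 / (1.0 + np.exp((m_in - 24.5) / 0.3))   # illustrative roll-off only
recovered = rng.random(m_in.size) < p_recover
centers, frac, m50 = completeness_limit(m_in, recovered, np.arange(20.0, 26.5, 0.5))
print(f"50% completeness reached near i' = {m50:.1f}")
```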
Artificial stars were assigned colors that follow the main locus of stars in the M81 CMDs, and their brightnesses were measured using the same procedures that were applied to the actual data. Artificial stars were considered to be recovered only if they were detected in at least two bandpasses, after applying the $\epsilon$-based rejection criteria described earlier. The artificial star experiments indicate that 50% completeness is encountered near $i' = 24.5$ and $K = 20$ throughout most of the M81 disk. Stellar density increases with decreasing distance from the center of M81, and so incompleteness sets in at brighter magnitudes in the central 4 kpc of M81. The WIRCam data is less susceptible to changes in stellar density because there is greater contrast in the infrared between the brightest red stars and the bluer unresolved stellar body than at visible wavelengths. The artificial star experiments further indicate that blends may account for a significant fraction of detected objects in the MegaCam and WIRCam data at magnitudes where the completeness fraction $< 50\%$. RESULTS: THE CMDs ================= The Morphology of the CMDs -------------------------- ### The MegaCam Data The $(i', r'-i')$ and $(z', i'-z')$ CMDs of sources in various annuli are shown in Figures 1 and 2. The distance interval specified in each panel is in the M81 disk plane and assumes that the disk is inclined at 59.3 degrees, based on the ellipticity of M81 in 2MASS images (Jarrett et al. 2003). There are stellar ensembles in the peripheral regions of M81 that are likely tidal in origin (e.g. Sun et al. 2005, Durrell et al. 2004, Davidge 2008b). The most obvious tidal contamination comes from Ho IX, and sources that belong to Ho IX have been culled from the data plotted in Figures 1 and 2. The CMDs of sources with R$_{GC}$ between 6 and 14 kpc have similar morphologies. Given that stellar density changes with R$_{GC}$, then this radial stability of the CMDs suggests that crowding does not affect the photometry of the brightest stars in this radial interval. The dominant feature in the $(i', r'-i')$ CMDs of sources with R$_{GC} < 14$ kpc is a concentration of objects that have $i' > 22.5$ and $r'-i'$ between 0 and 1.5. These objects are a combination of RSGs with ages in excess of $\sim 50 - 100$ Myr and the brightest AGB stars. The mixing of stars with such different evolutionary states and progenitor properties produces an amorphous cloud-like structure in the lower portions of the CMDs. Bright RSGs form a sequence that rises out of the concentration of AGB stars and fainter RSGs in the CMDs of objects with R$_{GC}$ between 4 and 12 kpc. The RSG sequence contains stars with $r'-i'$ between 0.5 and 1.0, and $i'-z'$ between 0 and 0.2. The diminished impact of line blanketing on the photometric properties of RSGs in the $(z', i'-z')$ CMDs is clearly evident when compared with the $(i', r'-i')$ CMDs, as RSGs form a near-vertical sequence in the $(z', i'-z')$ CMDs. This tighter morphology aids efforts to trace RSGs out to large radii, and a RSG sequence is seen in the 12 – 14 kpc $(z', i'-z')$ CMD, but not in the $(i', r'-i')$ CMD of the same radial interval. A prominent population of objects with $r'-i' < 0$ is seen in the $(i', r'-i')$ CMDs of most radial intervals. Although less pronounced because of the redder wavelength coverage, a corresponding sequence of blue objects is also present in the $(z', i'-z')$ CMDs.
Given that the number density of background galaxies and foreground stars with $r'-i' < 0$ and $i'$ between 20 and 24 is modest (e.g. Davidge 2008b), then the blue objects in the CMDs are almost certainly MS stars, BSGs, and unresolved compact young star clusters that belong to M81 or the surrounding tidal debris field. The number of bright MS stars is comparatively modest in the 2 – 4 kpc interval, suggesting that the SFR in this part of M81 during recent epochs was lower than at larger radii. Gordon et al. (2004) use far-infrared flux measurements obtained from [*Spitzer*]{} MIPS data to investigate the SFR throughout the disk of M81, and the 2 – 4 kpc interval overlaps with the region that they refer to as the ‘inner ring’. Gordon et al. (2004) find that the specific SFR (SSFR) in the inner ring is near the lower limit of what is seen throughout the entire disk, in qualitative agreement with the modest number of bright MS stars in the CMDs of this part of M81. Gordon et al. (2004) found that the SFRs estimated for M81 from UV and H$\alpha$ emission are systematically lower than those from $24\mu$m emission due, at least in part, to star-forming areas that are obscured by dust at visible wavelengths. Much of the $24\mu$m emission in M81 is concentrated in the 4 – 6 kpc annulus, and the MS and RSG plumes in the CMDs of this annulus are broader than at larger radii. This suggests that there may be a larger amount of dust mixed with young stars and along the line of sight in the 4 – 6 kpc annulus than at larger radii. The majority of objects in the 16 – 18 kpc CMDs are foreground stars and background galaxies. Foreground Galactic stars form a diffuse population of objects with $r'-i' > 0$ in the $(i', r'-i')$ CMDs, while background galaxies have relatively red colors and dominate over foreground stars when $i' > 22$. Foreground stars and background galaxies make progressively larger contributions to the CMDs as R$_{GC}$ increases because as one moves to larger R$_{GC}$ then (1) the density of sources that belong to M81 diminishes, and (2) larger areas on the sky are sampled in each annulus. ### The WIRCam Data The $(K, J-K)$ and $(K, H-K)$ CMDs constructed from the WIRCam observations, which sample roughly one third of the M81 disk, are shown in Figures 3 and 4. As in the preceding section, stars that belong to Ho IX have been excised from the CMDs at large R$_{GC}$. The most prominent feature in the near-infrared CMDs is the cloud of AGB stars and older RSGs that have $K > 19$, or M$_K > -9$. This feature can be traced out to R$_{GC} \sim 16$ kpc in the near-infrared CMDs. The most luminous RSGs and AGB stars form a vertical sequence centered near $H-K \sim 0.2$ and $J-K \sim 1$, that has $K > 17$, or M$_K > -11$. The bright RSG/AGB sequence is seen out to R$_{GC} = 16$ kpc. Only a modestly populated bright RSG sequence is seen in the $2 - 4$ kpc CMDs, in agreement with the low number of RSGs in the MegaCam CMDs of this radial interval. The majority of objects in the $16 - 18$ kpc WIRCam CMDs are foreground stars and background galaxies. The foreground stars occupy the sequence with $J-K \sim 0.7$ and $H-K \sim 0.2$ that has $K < 18$. Background galaxies populate the diffuse cloud of objects with $K > 17$ that is centered near $H-K \sim 0.8$ and $J-K \sim 1.6$, which are the approximate colors of normal galaxies at intermediate redshifts (e.g. Davidge 2007).
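The foreground/background separation described above amounts to simple color-magnitude cuts. The sketch below (assuming numpy; the catalog arrays and the exact cut boundaries are illustrative placeholders rather than the selection actually applied in this paper) flags likely foreground stars and background galaxies using the approximate near-infrared loci quoted in the text.

```python
# Schematic near-infrared color-magnitude flags based on the loci quoted above
# (assumes numpy; K, J, H are placeholder catalog arrays in magnitudes, and the
#  cut boundaries are illustrative, not the selection actually used in the paper).
import numpy as np

def flag_nir_sources(K, J, H):
    JK, HK = J - K, H - K
    # Likely foreground Galactic stars: bright in K with J-K ~ 0.7 and H-K ~ 0.2.
    foreground = (K < 18.0) & (np.abs(JK - 0.7) < 0.3) & (np.abs(HK - 0.2) < 0.15)
    # Likely background galaxies: faint and red, near the colors of intermediate-z galaxies.
    background = (K > 17.0) & (JK > 1.3) & (HK > 0.5)
    # Everything else is retained as a candidate M81 member.
    member = ~(foreground | background)
    return foreground, background, member
```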
Comparisons with Isochrones --------------------------- ### The M81 Disk at Intermediate and Large Radii The CMDs of stars with R$_{GC}$ between 8 and 10 kpc are representative of the disk of M81 at intermediate radii, and in Figure 5 the $(M_{i'}, r'-i')$ and $(M_{z'}, i'-z')$ CMDs of this interval are compared with isochrones from Girardi et al. (2004). Isochrones with log(t$_{yr}) = 7.0$, 7.5, 8.0, and 8.5 are shown for metallicities Z = 0.008 and Z = 0.019. The isochrones were constructed from a compilation of stellar evolution models generated by the Padova group, and those shown in Figure 5 rely on the evolutionary sequences discussed by Salasnich et al. (2000), with a thermally-pulsing AGB component as described by Marigo & Girardi (2001). The models include convective overshooting and metallicity-dependent mass loss in the input physics. The blue envelope of points on the $(i', r'-i')$ and $(z', i'-z')$ CMDs is well matched by the log(t$_{yr}$) = 7.0 isochrones, indicating that there has been very recent star formation in the 8 – 10 kpc interval. A modest number of stars fall above the log(t$_{yr}$) = 7.0 MS turn-off on both CMDs. While the majority of these are probably foreground stars, the possibility that some of these are either stars younger than log(t$_{yr}$) = 7 or blends cannot be discounted with these data. The brightest RSGs define a distinct sequence in visible and near-infrared wavelength CMDs, and the isochrones predict that the color and shape of the RSG sequence is sensitive to metallicity. The Z = 0.008 models match the $r'-i'$ and $i'-z'$ colors of the RSG sequence in the 8 – 10 kpc interval, whereas the Z = 0.019 models predict $r'-i'$ and $i'-z'$ colors that are a few tenths of a mag redder than observed. In addition, the Z = 0.019 RSG sequences are more curved at the upper end than the Z = 0.008 sequences, and in this regard the Z = 0.008 isochrones again better match the observations than the Z = 0.019 isochrones. Thus, both the color and shape of the RSG sequences in the $(i', r'-i')$ and $(z', i'-z')$ CMDs are consistent with Z = 0.008. This is consistent with the metallicity calculated by Williams et al. (2008) for young stars in the outer disk of M81. RSGs in the disk of M81 and the disks of NGC 247 and NGC 2403 have similar metallicities (Davidge 2006b; 2007). The reader is cautioned that the calibration and metallicity-sensitivity of the portions of isochrones that track the post-MS stages of massive star evolution are uncertain, and the impact of mass loss, convective overshooting, and rotation on models of the RSG phase of evolution can be substantial. It is encouraging that the isochrones used here have been compared with the CMDs of fiducial star clusters with good results (e.g. Girardi et al. 2004; Bonatto, Bica, & Girardi 2004). However, while well-studied, the fiducial clusters are either old (e.g. Girardi et al. 2004), or have only modest numbers of highly evolved stars (Bonatto et al. 2004), so that the comparisons effectively probe only stars that have experienced modest amounts of evolution. The ratio of blue to red supergiants is a fundamental diagnostic of massive star models, and the inability of early models to reproduce this ratio (Langer & Maeder 1995) was attributed to not including rotation in the input physics (e.g. Maeder & Meynet 2001; Dohm-Palmer & Skillman 2002). This is a shortcoming of the isochrones used here. Moreover, it is well-established that the mean metallicity of galaxies varies with system mass.
It is thus worth noting that the calibration in Figure 5b of Asari et al. (2007) indicates that the metallicity of stars in M81 (M$_K \sim -24$) should be $\sim 0.2$ dex higher than that of those in NGC 2403 (M$_K \sim -22$), whereas the mean colors of the RSGs in these galaxies are similar. For comparison, the \[O/H\] values of M81 and NGC 2403 are offset by a few tenths of a dex (Figure 8 of Zaritsky et al. 1994). The WIRCam observations of the 8 – 10 kpc interval are compared with Z = 0.008 and Z = 0.019 isochrones from Girardi et al. (2002) in Figure 6. The isochrones have ages log(t$_{yr}$) = 7.5, 8.0, 8.5, and 9.0, and are based on the Salasnich et al. (2000) evolutionary sequences. Because of the diminished impact of line blanketing on photometric properties in the near-infrared, coupled with the reduced sensitivity to changes in effective temperature for all but the coolest stars, the AGB sequences of the oldest isochrones in the near-infrared are closer to vertical than at visible wavelengths. The isochrones suggest that stars evolving near the AGB-tip that are older than log(t$_{yr}$) = 9.0 are resolved in the WIRCam images. For comparison, assuming Z = 0.008, then the bulk of stars in the $(i', r'-i')$ CMDs have ages log(t$_{yr}) < 8.5$, while stars that are older than log(t$_{yr}$) = 8.5 are resolved in the $(z', i'-z')$ CMDs. ### Comparing the disks of M81 and M82 The stars in the disks of M81 and M82 form a fossil record that can be mined to investigate the impact of the interaction between these galaxies, and a comparison of their CMDs can be used to identify differences in their star-forming histories in a purely empirical manner. In Figure 7 the 8 – 10 kpc CMDs of M81 and the CMDs of the 4 – 6 kpc interval in M82 from Davidge (2008a) are compared with isochrones from Girardi et al. (2002; 2004). The 4 – 6 kpc M82 observations cover 10 arcmin$^2$, whereas the 8 – 10 kpc M81 observations cover 25 arcmin$^2$. A distance modulus of 27.95 has been adopted for M82 (Sakai & Madore 1999), as has a line of sight extinction A$_B = 0.12$ (Burstein & Heiles 1982). The $(i', r'-i')$ and $(z', i'-z')$ CMDs of these galaxies come from the same MegaCam image, and so these data have the same image quality. Incompleteness sets in at roughly the same magnitude in the MegaCam CMDs of both galaxies, because the stellar density in each field is relatively low, and completeness is defined by photon statistics, rather than crowding. The near-infrared CMDs are constructed from data with the same instrument recorded during the same run. The CMDs in Figure 7 indicate that the disks of M81 and M82 have experienced very different star-forming histories during the past $\sim 1$ Gyr. The upper envelope of points in the $(i', r'-i')$ and $(z', i'-z')$ CMDs of M81 indicates that a large population of stars with ages log(t$_{yr}$) $< 8.0$ is present. However, a corresponding population is clearly absent in the M82 CMDs. Differences are also apparent when the AGB contents of the galaxies are compared, and this is evident from the $(M_K, J-K)$ CMDs of M81 and M82, which are compared in the bottom panel of Figure 7. While the majority of AGB stars in the disk of M81 have an age log(t$_{yr}) \geq$ 8.5, there is a clear concentration of objects in the M82 CMD that have log(t$_{yr})$ between 8.0 and 8.5. There is a modest spray of objects that extends to the right of the isochrones in the infrared CMDs, and these red objects hint at further differences between the stellar contents of the two disks.
The majority of stars with $J-K > 1.4 - 1.5$ in nearby galaxies tend to be C stars (e.g. Hughes & Wood 1990; Demers, Dallaire, & Battinelli 2002; Davidge 2003; Battinelli, Demers, & Mannucci 2007). The isochrones do not sample this region of the $(K, J-K)$ CMD, as they are based on models that assume O-rich atmospheres, and so do not track the photometric properties of stars with C-rich atmospheres. The region of the near-infrared CMD of the LMC discussed by Nikolaev & Weinberg (2000) that contains a distinct C star sequence is indicated in the $(M_K, J-K)$ CMDs in Figure 7. The upper envelope of sources with $J-K > 1.5$ in the M82 CMDs tracks reasonably well the bright limit of the LMC C star box, and this supports the notion that the red stars in M82 are C stars. While incompleteness prevents an exploration of the full population of C stars in these galaxies, it is clear that only a modest number of objects fall within the C star box in the M81 CMD, while the corresponding portion of the M82 CMD is richly populated. A visual comparison of the $(M_K, J-K)$ CMDs in Figure 7 further indicates that the faint limit of the M81 data may extend to larger magnitudes than that of M82, and so the relative number of C stars in M82 with respect to those in M81 may even be greater than seen in Figure 7. The differences between the C star contents of the disks of M81 and M82 are even more extreme than might be inferred from a simple visual comparison of their near-infrared CMDs, as a large fraction of the sources in the bright portion of the C star box in the M81 8 – 10 kpc $(M_K, J-K)$ CMD are foreground stars or background galaxies. To demonstrate this point the number of objects with M$_K$ between –8.5 and –9.0 and $J-K$ between 1.4 and 2.0 in the near-infrared CMDs of M81 and M82 were counted. These particular magnitude and color limits were adopted to sample an area of the CMD that contains C stars, while also avoiding regions where incompleteness occurs. Object counts were also made in a control area that is well removed from both galaxies to assess contamination from background galaxies and foreground stars. After subtracting the number counts from the control field, there remain only $8 \pm 8$ candidate bright C stars in the 8 – 10 kpc region of M81, confirming that most – if not all – of the sources in the brightest portion of the C star box in the lower left hand corner of Figure 7 are either foreground stars or background galaxies. There is thus no statistically compelling evidence of a large C star population in the 8 – 10 kpc interval of M81. In stark contrast, there are $221 \pm 16$ candidate bright C stars in the 4 – 6 kpc region of M82. The C star contents of galaxies are shaped by metallicity and star-forming history. If metallicity is too high then C is not dredged up during AGB evolution, and a C star is not formed. This metallicity sensitivity has been proposed to explain the tendency for lower C star fractions to occur in galaxies of progressively earlier morphological types (e.g. Battinelli & Demers 2005). However, the RGB sequences in the outer regions of M82 (Sakai & Madore 1999) and M81 (Williams et al. 2008) indicate that old and intermediate-age stars in these galaxies have comparable metallicities. Therefore, the difference in bright C star numbers is suggestive of differences in the star formation history of the M81 and M82 disks during intermediate epochs, in the sense that the SFR in M82 was higher than that in M81.
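The control-field correction behind these C star counts is a straightforward box count with an area-scaled background subtraction and Poisson error propagation; a minimal sketch follows (assuming numpy; the example numbers are made up for illustration, not the measured M81 or M82 counts).

```python
# Background-corrected counts in a CMD selection box, with Poisson error propagation
# (assumes numpy; all inputs are placeholders, and the example numbers are made up).
import numpy as np

def box_count(MK, JK, box=(-9.0, -8.5, 1.4, 2.0)):
    """Count sources inside the (M_K, J-K) box given as (MKmin, MKmax, JKmin, JKmax)."""
    MKmin, MKmax, JKmin, JKmax = box
    return int(np.sum((MK > MKmin) & (MK < MKmax) & (JK > JKmin) & (JK < JKmax)))

def corrected_count(n_target, n_control, area_target, area_control):
    """Subtract the area-scaled control count; Poisson errors add in quadrature."""
    scale = area_target / area_control
    n = n_target - scale * n_control
    sigma = np.sqrt(n_target + scale**2 * n_control)
    return n, sigma

# Made-up example: 30 sources in the target box and 22 in an equal-area control field
# leave 8 +/- 7 candidates, i.e. no statistically significant excess.
print(corrected_count(30, 22, 1.0, 1.0))
```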
In summary, the comparisons in Figure 7 indicate that while the disk of M81 has had a higher SFR during the past $\sim 100$ Myr than the disk of M82, the situation was likely very different prior to this. The richer population of the brightest AGB stars, including C stars, in M82 may be due to an elevated episode of star formation immediately following the interaction with M81, before the starburst activity settled into the central regions of the galaxy, where it is seen today. This being said, C star production occurs over an extended range of ages in simple stellar populations (e.g. Maraston 1998), and so some of the C star content in M82 may predate the interaction with M81. While the two fields considered here have different stellar densities and subtend different areas on the sky, these factors are not able to explain the difference in stellar contents seen here. THE SPATIAL DISTRIBUTION OF DISK STARS ====================================== Early studies of the spatial distribution of stellar content often relied on integrated light measurements, and this remains the only option available for galaxies that are too distant to be resolved into stars. However, comprehensive surveys of the brightest stars have become practical for nearby galaxies with the recent deployment of large format imagers on telescopes that regularly deliver good image quality. The ability to assign ages using the standard clocks that can only be identified from CMDs, coupled with the potential to investigate the distribution of stars with a common evolutionary pedigree (e.g. RSGs), makes the use of resolved stars a more powerful means of probing the spatial distribution of stellar content than integrated light. The spatial distributions of the brightest MS stars and RSGs in M81 with respect to the unresolved light, which is dominated by older stars, can be used to search for areas that experienced systematically elevated or depressed episodes of star formation during the past few tens of Myr. A caveat is that tidal forces may have distorted the structure of M81 at large radii. Depending on the geometry of the interaction, material may also have been pulled out of the disk plane, rendering as problematic efforts to investigate the de-projected distribution of stars. These effects can be suppressed by investigating trends from azimuthally-averaged data. Mapping the Brightest Blue Stars and RSGs in the Young Disk of M81 ------------------------------------------------------------------ ### MS Stars and BSGs Young MS stars and BSGs can be identified from their locations on CMDs. Following the criteria defined by Davidge (2008b) to investigate bright blue stars in the M81–M82 debris field, objects that have $r'-i'$ between –0.5 and 0 and $i'$ between 22 and 24 are identified as massive MS stars and BSGs. Davidge (2008b) demonstrated that contamination from background galaxies in this part of the CMD is modest, making blue objects a powerful probe of the low-density outer regions of disks, where the contrast between objects that belong to a galaxy and those that do not is of prime importance. The distribution of bright blue stars in M81 is shown in Figures 8 (observed distribution) and 9 (de-projected distribution). The absence of bright MS stars at very small radii in Figures 8 and 9 is a consequence of the difficulties resolving even the brightest stars within the central $\sim 2$ kpc of M81 during natural seeing conditions.
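For completeness, the de-projection used to construct maps such as Figure 9 can be written as a standard thin-disk transformation: rotate the sky offsets into the major/minor axis frame and stretch the apparent minor-axis offset by $1/\cos i$, with $i = 59.3$ degrees as adopted here. The sketch below assumes numpy; the position angle shown and the axis conventions are illustrative placeholders rather than the values and conventions actually used in this study.

```python
# Thin-disk de-projection of sky offsets into the galaxy plane (assumes numpy).
# dx, dy are angular offsets from the galaxy center (e.g. toward East and North);
# the position angle below is an illustrative placeholder, not the adopted value.
import numpy as np

def deproject(dx, dy, incl_deg=59.3, pa_deg=157.0):
    inc, pa = np.radians(incl_deg), np.radians(pa_deg)
    # Rotate so that y_maj runs along the apparent major axis, x_min along the minor axis.
    x_min = dx * np.cos(pa) - dy * np.sin(pa)
    y_maj = dx * np.sin(pa) + dy * np.cos(pa)
    # Correct the apparent minor-axis offset for inclination.
    x_disk = x_min / np.cos(inc)
    r_disk = np.hypot(x_disk, y_maj)      # in-plane galactocentric distance
    return x_disk, y_maj, r_disk

# At the adopted distance of 3.85 Mpc, 1 arcmin corresponds to ~1.12 kpc in projection:
kpc_per_arcmin = 3.85e3 * np.radians(1.0 / 60.0)
```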
The algorithm used to construct the de-projected stellar distribution assumes that the disk is a plane with negligible thickness. Objects that are in parts of the disk that are warped out of the plane or that are in the tidal debris field, and hence may not be coplanar with the M81 disk, will not be correctly located in the de-projected image. Bright blue stars tend to fall along a ring in the inner regions of M81, and Davidge (2006a) identified the northern end of this feature as an area of long-term star-forming activity. The blue objects outside of this ring are concentrated along the spiral arms, with the highest density of bright blue stars in the north east spiral arm. The de-projected distribution of blue stars is consistent with morphological type Sbc (S. van den Bergh 2008, private communication), which is considerably later than the conventional classification of M81. Of course, there is a bias to a later morphological type if only the bluest, youngest objects are considered, and features such as the bulge and inter-disk light are neglected. Structures in the tidal debris field, including Ho IX, BK3N, the Arp Loop, M81 West, and the Southern Tidal Arm, are prominent features in the bright blue star maps, as might be expected given that they are sites of recent star formation (Davidge 2008b). The diffuse stellar groupings identified by Davidge (2008b) and Durrell et al. (2004) fall outside of the area shown in Figure 8. The modest density of foreground stars and background galaxies with blue colors is demonstrated by the small number of sources in the blue star maps outside of the main body of M81 and away from known tidal structures (Davidge 2008b). Massive MS stars produce much of the UV emission in galaxy disks, and so it is not surprising that there is good agreement between the distribution of resolved blue stars and the GALEX 1516 Å image of M81 from Gil de Paz et al. (2007), which is also shown in Figure 8. Not only does the GALEX image show a ring of UV emission around the nucleus, but the relative strengths of the spiral arms in the UV are consistent with the observed density of MS stars. More specifically, the north east spiral arm is a source of stronger UV emission than the south west arm. Areas of recent star formation are sources of photons that heat dust to temperatures log(T$_{eff}) \sim 2 - 3$, and so a correlation between the distribution of MS stars and infrared emission might also be expected. The Spitzer 24$\mu$m and 160$\mu$m images of M81 from the SINGS Fifth Data Release are shown in Figure 8, and the correlation between the overall distributions of infrared light and bright blue stars is not as tight as that between UV emission and bright blue stars. The looser spatial correlation between blue stars and infrared emission is because sources other than young stars (e.g. AGB stars) can also heat dust to these temperatures. The northern end of the eastern spiral arm, which contains the densest concentration of MS stars in M81, is only moderately bright at $24\mu$m, while the strongest $24\mu$m emission tends to occur in both spiral arms along the major axis of the galaxy. Gordon et al. (2004) argue that the areas of most intense $24\mu$m emission may be regions of obscured star formation. Tidal structures such as the Arp Loop, Ho IX, M81 West, BK 3N, and the Southern Tidal Arm, which are clearly visible in the blue star map, are either weakly defined or not visible in the $24\mu$m image.
However, the 160$\mu$m image shows that these are either sites of, or are close to areas of, emission from cool dust. The cool dust temperatures may indicate that dust has been displaced from the areas of active star formation. Simulations predict that tidal dwarfs likely do not have substantial dark matter halos (e.g. Barnes & Hernquist 1992; Wetzstein, Naab, & Burkert 2007), and so their gravitational potential wells should be shallower than those of galaxies that formed within dark matter halos. At a given SSFR, tidal dwarfs should be less able to retain interstellar material than dark matter-dominated systems. In addition, systems with shallow mass profiles may also only convert a minimal amount of gas into stars before stability of the interstellar component against star formation is restored (e.g. Kaufmann, Wheeler, & Bullock 2007; Taylor & Webster 2005), and so tidal dwarfs might also contain inherently large reservoirs of cool gas and dust until they are disrupted. ### RSGs RSGs form a prominent sequence in the CMDs of systems with ages $\geq 10$ Myr, and the bright portion of the RSG sequence forms a distinct finger that rises out of the complex of stars that dominates the faint end of the $(i', r'-i')$ and $(z', i'-z')$ CMDs (§3). Comparisons with isochrones in Figure 5 suggest that this RSG finger is populated by stars with ages $\leq 100$ Myr. Fainter, and older, RSGs are mixed with the brightest AGB stars in the complex of objects near the faint limit of the CMDs. Given the broad range of ages covered by RSGs in the MegaCam data, it was decided to investigate the spatial distribution of RSGs in three separate magnitude and color intervals, with the goal of probing how the distribution of RSGs changes with age. The bright half of the RSG finger, which contains stars with approximate ages between 10 Myr and 30 Myr, is sampled by stars with $i'$ between 20.0 and 21.5, and $r'-i'$ between 0.6 and 1.0. The lower portion of the RSG finger, which contains stars with approximate ages between 30 and 100 Myr, is sampled with stars having $i'$ between 21.5 and 23.0, and $r'-i'$ between 0.3 and 0.8. Finally, the spatial distribution of the oldest RSGs and the brightest AGB stars is sampled with objects having $i'$ between 23 and 24, and $r'-i'$ between 0 and 1.0. These objects have ages that exceed $\sim 100$ Myr. The observed and de-projected distributions of RSGs in M81 are shown in Figures 8 and 9. For clarity, only RSGs with $i' < 23$ (i.e. those that are on the RSG finger above the mix of fainter RSGs and AGB stars) are shown in Figure 8. Contamination from background galaxies, which tend to have red colors in the magnitude range of interest, can be significant in the RSG samples, especially amongst the faintest RSGs. Indeed, a large number of objects, many of which are probably unresolved background galaxies, are seen outside of the M81 disk in the RSG maps in the lower row of Figure 9, in direct contrast with the modest number of objects in the outer regions of the blue star map. The distribution of RSGs in the top right hand corner of Figure 9 shows that – like the brightest blue stars – the brightest RSGs in M81 are good tracers of spiral structure. Like the brightest blue stars, a number of the brightest RSGs are also located in the young stellar ring in the inner disk. There is a distinct clumping of AGB stars and the faintest RSGs in the portion of the M81 disk that is closest to Ho IX, indicating that this was an area of active star formation at least 100 Myr in the past.
Similar concentrations are also seen in the distribution of blue stars and brighter RSGs in the spiral arm that is closest to Ho IX. Given that the disk of M81 is rotating, and that the stellar concentration in the M81 disk that is presumably associated with Ho IX contains stars that span a range of ages, then it is likely that Ho IX is co-rotating with the M81 disk, which is consistent with it being of tidal origin. Differences between the spatial distributions of MS stars and RSGs become more apparent as fainter (and hence older) RSGs are considered, in the sense that spiral structure becomes more blurred as progressively older RSGs are mapped, due to the random velocities that are imparted to stars as they interact with gas clouds. It can also be seen in Figure 9 that the ratio of blue stars to fainter RSGs varies across the disk, in the sense that the number of blue stars with respect to RSGs decreases towards smaller radii in the inner regions of M81; this trend is quantified in §4.2. Differences between the distribution of blue stars and RSGs are perhaps most obvious in tidal debris field objects. Whereas M81 West and the Southern Tidal HI arm appear as distinct concentrations of MS stars (§4.1.1), they contain only a modest number of RSGs. This is consistent with the notion that these structures are dominated by very young stars. The Radial Distribution of Blue Stars and RSGs ---------------------------------------------- ### Star counts The radial behaviour of MS stars and RSGs can be investigated in a quantitative manner using star counts that are azimuthally averaged over the entire disk. Such averaging allows global radial trends in stellar content to be examined while suppressing the impact of structures such as spiral arms. The influence of tidal features, which may be restricted to only a portion of the disk, on global radial trends is also reduced. The luminosity functions (LFs) of blue stars and RSGs are shown in Figures 10 and 11; the entries plotted in these figures are the number of stars per 0.5 magnitude interval per arcmin$^2$ in each annulus. A single range of colors, with $r'-i'$ between 0.2 and 1.0, was used to generate the RSG LF. The LFs have been corrected for contamination from foreground stars and background galaxies by subtracting the LFs of objects in the blue star and RSG color intervals in a control field outside of the HI tidal debris field. After correcting for this contamination, statistically significant numbers of blue stars and RSGs are detected out to R$_{GC} \sim 18$ kpc. The young stellar disk of M81 thus extends to $\sim 10$ disk scalelengths, which is comparable to the NGC 300 disk (Bland-Hawthorn et al. 2005). The number densities of blue stars and RSGs do not change substantially throughout much of the central 10 kpc of M81. This is demonstrated in Figure 12, where the radial behaviour of the mean number of sources with $i'$ between 22.25 and 23.75 in Figure 10 and $i'$ between 21.25 and 22.75 in Figure 11 are examined. The differences between the radial distribution of blue stars and RSGs are highlighted in the lower panel of Figure 12, where the ratio of blue to red star counts is shown. This ratio is constant to within $\pm 0.1$ dex between R$_{GC} =$ 4 and 12 kpc. However, the ratio of blue stars to RSGs drops by 0.5 dex when R$_{GC} < 4$ kpc, and there is also a steady decline in this ratio with increasing radius when R$_{GC} > 12$ kpc.
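The azimuthally averaged luminosity functions used above reduce to histogram counts normalized by annulus area, with an area-scaled control-field histogram subtracted; the blue-to-red ratio in Figure 12 is then the ratio of two such background-corrected counts summed over the quoted magnitude windows. A minimal sketch (assuming numpy; the magnitude arrays, areas, and bin edges are placeholders) is given below.

```python
# Background-corrected luminosity function in an annulus: stars per 0.5 mag per arcmin^2
# (assumes numpy; the magnitude arrays, areas, and bin edges are placeholders).
import numpy as np

def annulus_lf(mag, area_annulus, mag_ctrl, area_ctrl, edges):
    n_ann, _ = np.histogram(mag, bins=edges)
    n_ctl, _ = np.histogram(mag_ctrl, bins=edges)
    lf = n_ann / area_annulus - n_ctl / area_ctrl
    err = np.sqrt(n_ann / area_annulus**2 + n_ctl / area_ctrl**2)
    return 0.5 * (edges[:-1] + edges[1:]), lf, err

# Example bins: 0.5 mag wide between i' = 20 and 24.
edges = np.arange(20.0, 24.01, 0.5)
```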
M81 is not the only galaxy to show such trends, as previous studies of the brightest resolved stars in galaxies have found that stars in galaxy disks with different ages may have different radial distributions (e.g. Davidge 2006b; de Jong et al. 2007). The differences in the number densities of MS stars and RSGs in Figure 12 suggest that the SSFR across the disk of M81 was not uniform during the past few tens of Myr, but varied with radius such that the brightest blue stars are deficient with respect to RSGs at both small and large radii. ### Specific frequency measurements and evidence for an age gradient The mix of young and old stars throughout the M81 disk can be investigated by computing the number of blue stars and RSGs per unit integrated total brightness in each annulus, which is a measure of their specific frequency (SF). For this study, the SF measurements use the $K-$band surface brightness profile of M81 from Jarrett et al. (2003), and are normalized to M$_K = -16$. Light in the $K-$band is dominated by stars that formed during intermediate and early epochs (e.g. Maraston 1998). Therefore, systematic radial variations in the SFs of blue stars and RSGs measured with respect to $K-$band light are a signature of population gradients due to departures of the recent SFR from the mean SFR measured over Gyr or longer timescales. The Jarrett et al. (2003) data trace the $K-$band light of M81 out to 750 arcsec (i.e. R$_{GC} = 13$ kpc). $K-$band surface brightnesses in the radial interval 13 – 15 kpc were computed from the $H-$band profile by adopting the $H-K$ color of M81 at smaller radii. The $H-$band profile was extrapolated to compute surface brightnesses in the 15 – 18 kpc interval. The SF measurements are thus very uncertain when R$_{GC} > 15$ kpc. The SFs of blue stars and RSGs are compared in Figure 13. There is a tendency for the SFs of both stellar types to increase with increasing R$_{GC}$, indicating that the ratio of young to old$+$intermediate-age stars increases with increasing radius. This trend starts at relatively small radii, and so is not a manifestation of the extrapolation of the near-infrared surface brightness profile at large radii. Thus, the SF measurements further confirm that young stars are not uniformly mixed throughout the M81 disk, but tend to appear in larger numbers as radius increases. The SFs of blue stars and RSGs climb dramatically when R$_{GC} > 14$ kpc, which is where the SF measurements are most uncertain. Still, there is evidence from both the star counts and integrated light photometry that the radial stellar distribution in M81 changes near R$_{GC} \sim 13 - 14$ kpc. In particular, the $J$ and $H$ profiles of M81 in the 2MASS Large Galaxy Atlas are similar out to 800 arcsec ($\sim 14$ kpc), with both showing a downward break in surface brightness at 750 arcsec ($\sim 13$ kpc). The relations between the mean densities of MS stars and RSGs with radius in Figure 12 also both show a kink near 750 arcsec. The SF trend at large radius can be checked with deeper photometric measurements. RGB stars are among the brightest members of the populations that dominate the integrated $K-$band light, and are tracers of old/intermediate age stars. The SF curves in Figure 13 predict that the ratio of blue stars and/or RSGs to RGB stars should increase with progressively larger R$_{GC}$ in M81, and that the trend will steepen when R$_{GC} > 14$ kpc.
Deep wide-field images in which RGB stars can be resolved will thus allow the behaviour of the SF curves at large radii in Figure 13 to be checked in an unambiguous manner. ### Differences in the SSFRs of M81 and NGC 2403 SF measurements can be used to compare the SSFRs of galaxies in an empirical manner. This is demonstrated in Figure 14 where the SFs of blue stars and RSGs located between R$_{GC} = 4$ and 10 kpc in M81 are compared with the SFs of similar stars located between R$_{GC} = 6$ and 12 kpc in the Sc galaxy NGC 2403. The NGC 2403 SFs are taken from the MegaCam observations discussed by Davidge (2007), and the magnitudes of stars in NGC 2403 have been shifted to account for the different distance modulus of M81. The SF curves of both galaxies are power laws that have similar exponents. However, the SF measurements of M81 consistently fall $\sim 1$ dex below those of NGC 2403. To the extent that both galaxies have similar underlying stellar contents that contribute the bulk of the $K-$band light, then this offset indicates that the SSFR in M81 during the past $\sim 30$ Myr was a factor of ten lower than in NGC 2403. If this is representative of the mean difference in SSFRs between these galaxies throughout the age of the Universe then M81 is dominated by an older population of stars than NGC 2403. The difference in the SF curves is consistent with the relative $24\mu$m fluxes of these galaxies, which gauge the recent SFR, and their total $K-$band brightnesses, which measure total stellar mass. The $24\mu$m fluxes of M81 and NGC 2403 are comparable (Gil de Paz et al. 2007), indicating that they have comparable [*total*]{} SFRs. However, the integrated $K$ brightnesses of M81 and NGC 2403 differ by 2.7 magnitudes, in the sense that M81 is brighter (Jarrett et al. 2003), indicating that their total stellar masses differ by roughly a factor of ten. Thus, the ratio of $24\mu$m flux per unit mass in NGC 2403 is an order of magnitude higher than in M81, in agreement with the difference in SFs shown in Figure 14. The SFs of RSGs and bright MS stars in M81 and NGC 2403 indicate that, despite the cosmologically-recent interaction with M82, the present-day SFR of M81 is not abnormally high. Brinchmann et al. (2004) and Asari et al. (2007) investigate the relation between SFR and galaxy mass, and their results suggest that the SSFRs of galaxies with masses that differ by 1 dex typically differ by a few tenths to 1 dex, with the large dispersion reflecting the wide range of star-forming histories seen among Sb and later galaxies (e.g. Figure 6 of Kennicutt, Tamblyn, and Congdon 1994). Thus, the SSFR of M81 is comparable to that of a field galaxy. Woods & Geller (2007) investigate the impact of interactions on galaxy pairs, and find that while the SSFRs in galaxies in pairs are higher than those of galaxies in the field, the enhancement in the SSFR is small for galaxies with M$_z \leq -22$. Given that M$_{z'} < -22$ for M81, then a substantial increase in the SSFR of M81 with respect to non-interacting galaxies would not be expected. It has also been $\sim 0.2$ Gyr since M81 interacted with M82, and this is twice the typical timescale for sustained starburst activity (e.g. Marcillac et al. 2006). DISCUSSION & SUMMARY ==================== Data obtained with the CFHT MegaCam and WIRCam imagers have been used to survey the brightest stars in the disk of M81.
With the exception of the central $\sim 2$ kpc of the galaxy, where crowding prevents the detection of individual stars, and the gaps between CCDs, the MegaCam data are spatially complete to a radial distance of 18 kpc; at larger radii the southern portions of M81 are outside of the MegaCam field. The WIRCam images cover roughly one third of the M81 disk out to R$_{GC} \sim 18$ kpc. The key results of this study are discussed and summarized below. The Outer Disk of M81 --------------------- Young stars have been traced out to R$_{GC} \sim 18$ kpc in M81, and the density distribution of these objects does not follow a single exponential profile. This is reminiscent of what is seen in the integrated light profiles of disks, which show a diverse array of behaviours at large radii. In the majority of cases the exponential light profile that is the structural hallmark of disks changes slope at large radii, typically breaking downwards (i.e. the exponential decline steepens), although the light profile breaks upwards (i.e. the exponential decline becomes shallower) in roughly a third of spiral galaxies (e.g. Pohlen & Trujillo 2006). The near-infrared light profiles of M81 constructed from data in the 2MASS Large Galaxy Atlas steepen near 14 kpc, and this break radius is near the high end for galaxies in general (Pohlen & Trujillo 2006). The tendency for the light profile to steepen, rather than flatten, at large radius is not common among Sb galaxies (Pohlen & Trujillo 2006). In contrast, the rate of decline of RSGs and bright MS stars flattens near R$_{GC} \sim 14$ kpc, indicating that the radial distribution of old stars, which dominate the near-infrared light, and young stars in M81 differ, in that the latter have a flatter distribution than the former. Young stars thus account for a progressively larger fraction of the total stellar content in the outer disk of M81 as R$_{GC}$ increases. Kong et al. (2000) and Jiu-Li et al. (2004) investigate the stellar content of M81 using images recorded through narrow-band filters, and also conclude that mean age decreases with increasing R$_{GC}$ in M81. Age gradients, in the sense of younger mean ages towards larger radii, are common in the disks of spiral galaxies (e.g. Bell & de Jong 2000). The SF measurements in §4 suggest that the outermost regions of the M81 disk may contain a particularly rich population of young stars, as the contribution made by young stars increases significantly near 14 kpc. A preponderance of resolved young stars with respect to older populations at large radii is also seen in the disks of NGC 247 (Davidge 2006) and NGC 4244 (de Jong et al. 2007). The disk of NGC 4244 is of particular interest as it is viewed edge-on, so that height above the disk plane provides another dimension for structure studies that can be used to shed light on the nature of disk truncation. de Jong et al. (2007) find that the radius at which star counts in the disk NGC 4244 drop off is not a function of stellar age or height above the disk plane, and argue that disk truncation in this galaxy is probably due to recent or on-going dynamical processes that must affect the spatial distribution of stars in both the thin and thick disks. In the particular case of M81, the spatial distributions of the young and old$+$intermediate age populations indicate that their angular momentum distributions differ, such that the young star distribution is skewed to a higher mean value than that of old$+$intermediate age stars. 
One rationalization of this is that torques exerted by M82 could distort the angular momentum distribution of the interstellar medium of M81, and this legacy would be passed down to the kinematic properties of stars that subsequently form from this material. Younger et al. (2007) discuss the formation of extended exponential disks during minor mergers. Their simulations predict that stars move outward in the disk as gas is funneled inward. In the absence of external gas sources this would result in older mean ages towards larger radii, which is contrary to what is seen in M81. The local SSFR measured from UV images appears to increase with radius in many nearby galaxies (e.g. Munoz-Mateos et al. 2007). Still, the flatter distribution of young stars with respect to the older stellar body found in studies of resolved stars may be in conflict with the results from visible integrated light studies. Bakos, Trujillo, & Pohlen (2008) find that the color profiles of disks usually reverse at the point where the light profile changes, in the sense that $g'-r'$ increases towards larger radii. While it is difficult to compute reliable integrated colors at very large radii in M81, the change in the SF of RSGs and MS stars at large radii in Figure 13 suggests that the integrated $g'-r'$ color of M81 probably becomes progressively bluer (i.e. smaller $g'-r'$) with radius when R$_{GC} > 14$ kpc. The Metallicity of Young Stars in M81 ------------------------------------- Secular processes and galaxy-galaxy interactions re-distribute material in galaxy disks, and thereby alter gradients that may have been imprinted early on. The tidal HI features near M81 are consistent with the large-scale redistribution of gas in this galaxy. If gas in the M81 disk was stirred by tidal interactions then this would flatten any metallicity gradients that were in place prior to this event. In fact, the color of the RSG plume in M81 does not change when R$_{GC} > 4$ kpc, suggesting that the metallicity of the material from which RSGs formed may not vary with radius, remaining near one half solar. The absence of a significant abundance gradient is consistent with previous studies, which find at most only minor abundance gradients when R$_{GC} > 4$ kpc. While various studies have found that \[O/H\] measured in HII regions tends to drop with increasing radius in M81 (e.g. Stauffer & Bothun 1984; Garnett 1986; Zaritsky, Kennicutt, & Huchra 1994), the relation between \[O/H\] and radius does not follow a simple power-law. Rather, while there is a pronounced radial \[O/H\] gradient in the inner disk, Figure 8 of Zaritsky et al. (1994) indicates that \[O/H\] is constant in the disk outside of one half the isophotal radius. Using images recorded through narrow-band filters, one of which is centered on the near-infrared Ca II triplet, Kong et al. (2000) find that the mean stellar metallicity in M81 does not change with radius, remaining fixed near a luminosity-weighted metallicity that is solar or higher. Re-examining the same data, Jiu-Li et al. (2004) find that there may be only a very mild metallicity gradient, in the sense that the mean metallicity drops by only 0.1 dex from the bulge (Z = 0.022) to the outer disk (Z = 0.016). A caveat is that the metallicity measured from such integrated light studies is a luminosity-weighted mean that is dominated by stars that are markedly older than RSGs.
The Local Group galaxy M31 has a morphology and mass that are similar to those of M81, and likely experienced an interaction during intermediate epochs. It is thus also of interest to consider the radial distribution of metals in the disk of that galaxy. The relation between \[O/H\] and radius in the disk of M31 shows substantial scatter, with no obvious trend defined by HII regions at intermediate radii (e.g. Figure 8 of Zaritsky et al. 1994). Abundance measurements from the spectra of F and B supergiants are consistent with no metallicity gradient (Venn et al. 2000; Trundle et al. 2002). Moreover, while not sampling the inner disk of M31, the fields studied by Bellazzini et al. (2003) also suggest that the mean metallicity of RGB stars in the outer disk of that galaxy does not change with radius. These data hint that – as in M81 – interstellar material in M31 may have been tidally stirred. Reconciling a Post-Interaction Starburst in M81 with an Old Integrated Light Age Estimate ========================================================================================= The CMDs of the 2 – 4 kpc interval indicate that the present-day SSFR in this portion of M81 is relatively low when compared with the disk at larger radii. This may seem surprising given that star formation in interacting galaxies tends to be centrally concentrated (e.g. Iono, Yun, & Mihos 2004; Kewley, Geller, & Barton 2006). In fact, evidence that there was a central burst of star formation comes in the form of the population of x-ray binaries discussed by Swartz et al. (2003). Still, the mass of stars that formed during intermediate epochs was not large enough to affect greatly the present-day integrated visible photometric properties of the central regions of M81. Indeed, the ‘old’ age measured by Kong et al. (2000) and Jiu-Li et al. (2004) in the central few kpc of M81 indicates that the stars that formed a few hundred Myr ago cannot account for more than a few percent of the total stellar mass (e.g. Serra & Trager 2007). The evidence of elevated star-forming activity in the inner regions of M81 and the old age deduced from narrow-band photometry are not necessarily inconsistent, as a star-forming episode that involved a significant fraction of the gas in M81 would not contribute significantly to the mass in the central few kpc of the galaxy. The combined mass of HI and H$_2$ in a galaxy with a luminosity comparable to that of M81 is $\sim 10^{10}$ M$_{\odot}$ (Boselli, Lequeux, & Gavazzi 2002). Adopting a star formation efficiency of $\sim 5\%$ (Inoue, Hirashita, & Kamaya 2000), $4 \times 10^8$ M$_{\odot}$ would form from $10^{10}$ M$_{\odot}$ of gas in a single star-forming event. The integrated brightness of M81 in the central 2 kpc is $K = 4.6$, so that M$_K = -23.2$. Assuming $M/L_K = 1$, the total stellar mass in this part of the galaxy is $4 \times 10^{10}$ M$_{\odot}$. The mass of stars that would form if all of the ISM were channeled into the central few kpc of M81 would then only amount to $\sim 1\%$ of the original stellar mass, and so would not have a large impact on the integrated photometric properties a few hundred Myr later. Even if there were two or three distinct episodes of star formation of this nature, the total stellar mass formed would be only a few percent of that initially present.
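The mass budget above can be reproduced with a few lines of arithmetic. The sketch below is not part of the original analysis; it only restates the estimate, assuming a solar absolute K magnitude of M$_{K,\odot} \approx 3.3$ (an assumption, not a number from the paper) and taking the quoted gas mass and efficiency at face value (a 5% efficiency applied to $10^{10}$ M$_{\odot}$ gives $\sim 5 \times 10^8$ M$_{\odot}$, of the same order as the $4 \times 10^8$ M$_{\odot}$ quoted above).

```python
# Hedged sketch of the central-M81 mass budget discussed above.
# Assumption not from the paper: M_K(sun) = 3.3.  All other numbers are the
# ones quoted in the text.
M_K_sun = 3.3                      # assumed solar absolute K magnitude
M_K_center = -23.2                 # central 2 kpc of M81 (K = 4.6)

L_K = 10.0 ** (-0.4 * (M_K_center - M_K_sun))   # K-band luminosity in L_sun
M_star = 1.0 * L_K                 # stellar mass for M/L_K = 1  -> ~4e10 M_sun

M_gas = 1.0e10                     # HI + H2 mass for an M81-like luminosity
eff = 0.05                         # adopted star-formation efficiency
M_burst = eff * M_gas              # ~5e8 M_sun formed in one episode

print(f"M_star ~ {M_star:.1e} M_sun, M_burst ~ {M_burst:.1e} M_sun, "
      f"ratio ~ {100 * M_burst / M_star:.1f}%")   # of order 1 percent
```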
Infrared Photometry of M81 Globular Clusters ============================================ As the nearest large spiral galaxy outside of the Local Group, M81 is a prime laboratory for investigating the globular cluster system of a large early-type disk galaxy. While numerous cluster candidates have been identified from photometric data (e.g. Perelmuter & Racine 1995; Davidge & Courteau 1999), only a modest fraction of these have been confirmed spectroscopically as globular clusters. Even fewer of these have the near-infrared photometric measurements that are required to more fully characterize their spectral energy distributions. The impact of interstellar extinction, which will be significant for clusters that are viewed through dust in disks, is also lower in the near-infrared than at visible wavelengths. Finally, from a more pragmatic perspective, many of the globular clusters that will be studied with the next generation of large ground-based telescopes in more distant galaxies will be investigated in greatest detail at near-infrared wavelengths, where adaptive optics (AO) systems will deliver near diffraction-limited image quality. It is thus important to characterize globular clusters in nearby galaxies at near-infrared wavelengths to provide a benchmark for studies of more distant cluster systems. The majority of spectroscopically confirmed globular clusters in M81 are to the north and west of the galaxy center (e.g. Figure 1 of Schroder et al. 2002), and there are only five confirmed clusters in the WIRCam field. All of these are viewed against the main body of the stellar disk, and so extinction might be significant for some. The properties of the clusters in the WIRCam field with spectra discussed by Perelmuter, Brodie, & Huchra (1995) and Schroder et al. (2002) are listed in Table 1. The sources of the $V$ and \[Fe/H\] measurements are given in the last column. The near-infrared brightnesses and colors of clusters \# 50401 and 50415 were also measured by Davidge (2006a). The CFHTIR and WIRCam measurements agree to within 0.01 – 0.02 mag for cluster 50401, and to within 0.1 mag for cluster 50415, indicating that there is reasonable photometric consistency. The clusters in Table 1 are not representative of the entire M81 globular cluster system. If M31 were viewed at the same distance as M81, then the $K-$band measurements discussed by Barmby, Huchra, & Brodie (2001) indicate that its GCLF would peak near $K = 17.5$, with the majority of clusters having $K$ between 16 and 18.5. The clusters in Table 2 of Davidge (2006a) and Table 1 of the present study are in the bright tail of the GCLF. This is not unexpected, given the observational bias against spectroscopic studies of the faint members of the M81 cluster system. In addition, the majority of M31 clusters have $V-K$ between 2.0 and 2.5 and $J-K$ between 0.55 and 0.80 (Barmby et al. 2000). For comparison, three of the seven clusters in M81 that have IR photometry have $V-K > 2.5$. This is almost certainly not due to fundamental differences in the cluster system properties; rather, it is probably due to dust extinction in the disk of M81.
=0.0cm [cccccccc]{} Cluster \# & $K$ & $H-K$ & $J-K$ & $V$ & $V-K$ & \[Fe/H\] & Reference\ 50255 & 16.101 & 0.098 & 0.604 & 18.43 & 2.33 & –0.04 & P1995\ 50401 & 17.111 & 0.161 & 0.751 & 19.93 & 2.82 & –0.04 & P1995\ 50415 & 17.116 & 0.090 & 0.658 & 19.24 & 2.12 & –1.90 & P1995\ 50418 & 16.287 & 0.194 & 0.887 & 18.45 & 2.16 & –1.09 & S2002\ 50787 & 16.343 & 0.115 & 0.774 & 19.12 & 2.78 & –1.06 & S2002\ Asari, N. V., Cid Fernandes, R., Stasinska, G., Torres-Papqui, J. P., Mateus, A., Sodre, L. Jr., Schoenell, W., & Gomes, J. M. 2007, MNRAS, 381, 263 Bakos, J., Trujillo, I., Pohlen, M. 2008, ApJ, 683, L103 Barden, M. et al. 2005, ApJ, 635, 959 Barmby, P., Huchra, J. P., & Brodie, J. P. 2001, AJ, 121, 1482 Barmby, P., Huchra, J. P., Brodie, J. P., Forbes, D. A., Schroder, L. L., & Grillmair, C. J. 2000, AJ, 119, 727 Barnes, J. E., & Hernquist, L. 1992, Nature, 360, 715 Battinelli, P., & Demers, S. 2005, A&A, 434, 657 Battinelli, P., Demers, S., & Mannucci, F. 2007, A&A, 474, 35 Bekki, K., & Chiba, M. 2001, ApJ, 558, 666 Bell, E. F., & de Jong, R. S. 2000, MNRAS, 312, 497 Bellazzini, M., Cacciari, C., Federici, L., Fusi Pecci, F., & Rich, M. 2003, A&A, 405, 867 Bland-Hawthorn, J., Vlajic, M., Freeman, K. C., & Draine, B. T. 2005, ApJ, 629, 239 Bonatto, Ch., Bica, E., & Girardi, L. 2004, A&A, 415, 571 Boselli, A., Lequeux, J., & Gavazzi, G. 2002, A&A, 384, 33 Boulade, O. et al. 2003, Proc. SPIE, 4841, 72 Boyce, P. J., et al. 2001, ApJ, 560, L127 Brinchmann, J., Charlot, S., White, S. D. M., Tremonti, C., Kauffmann, G., Heckman, T., & Brinkmann, J. 2004, MNRAS, 351, 1151 Brouillet, N., Baudry, A., Combes, F., Kaufman, M., & Bash, F. 1991, A&A, 242, 35 Bullock, J. S., & Johnston, K. V. 2005, ApJ, 635, 931 Burstein, D., & Heiles, C. 1982, AJ, 87, 1165 Chandar, R., Tsvetanov, Z., & Ford, H. C. 2001, AJ, 122, 1342 Cignoni, M., DegI’Innocenti, S., Prada Moroni, P. G., & Shore, S. N. 2006, A&A, 459, 783 Cutri, R. M., et al. 2003, 2MASS All-Sky Catalog of Point Sources (Amherst: Univ. Massachusetts Press). Davidge, T. J. 2003, ApJ, 597, 289 Davidge, T. J. 2006a, PASP, 118, 1626 Davidge, T. J. 2006b, ApJ, 641, 822 Davidge, T. J. 2007, ApJ, 664, 820 Davidge, T. J. 2008a, AJ, 136, 2502 Davidge, T. J. 2008b, PASP, 120, 1145 Davidge, T. J. 2008c, ApJ, 678, L85 Davidge, T. J., & Courteau, S. 1999, AJ, 117, 2781 de Grijs, R., O’Connell, R. W., & Gallagher, J. S. III 2001, AJ, 121, 768 de Jong, R. S., et al. 2007, ApJ, 667, L49 de Mello, D. F., Smith, L. J., Sabbi, E., Gallagher, J. S., Mountain, M., & Harbeck, D. R. 2008, AJ, 135, 548 Demers, S., Dallaire, M., & Battinelli, P. 2002, AJ, 123, 3428 Dohm-Palmer, R. C., & Skillman, E. D. 2002, AJ, 123, 1433 Durrell, P. R., DeCesar, M. E., Ciardullo, R., Hurley-Keller, D., & Feldmeier, J. J. 2004, in IAU Symp. 217, Recycling Intergalactic and Interstellar Matter, eds. P.-A. Duc, J. Braine, & E. Brinks (Cambridge: Cambridge Univ. Press), 90 Garnett, D. R. 1986, PASP, 98, 1041 Gil de Paz, A., et al. 2007, ApJS, 173, 185 Girardi, L., Bertelli, G., Bressan, A., Chiosi, C., Groenewegen, M. A. T., Marigo, P., Salasnich, B., & Weiss, A. 2002, A&A, 391, 195 Girardi, L., Grebel, E. K., Odenkirchen, M., & Chiosi, C. 2004, A&A, 422, 205 Gordon, K. D., et al. 2004, ApJS, 154, 215 Governato, F., et al. 2007, MNRAS, 374, 1479 Hammer, F., Flores, H., Elbaz, D., Zheng, X. Z., Liang, Y. C., & Cesarsky, C. 2005, A&A, 430, 115 Hammer, F., Puech, M., Chemin, L., Flores, H., & Lehnert, M. D. 2007, ApJ, 662, 322 Ho, L. C., Filippenko, A. V., & Sargent, W. L. 
1996, ApJ, 462, 183 Hughes, S. M. G., & Wood, P. R. 1990, AJ, 99, 784 Hughes, S. M. G., et al. 1994, ApJ, 428, 143 Inoue, A. K., Hirashita, H., & Kamaya, H. 2000, AJ, 120, 2415 Iono, D., Yun, M. S., & Mihos, J. C. 2004, ApJ, 616, 199 Jarrett, T. H., Chester, T., Cutri, R., Schneider, S. E., & Huchra, J. P. 2003, AJ, 125, 525 Jiu-Li, L., Zhou, X., Ma, J., & Chen, J-S 2004, CJA&Ap, 4, 143 Kaufmann, T., Wheeler, C., & Bullock, J. S. 2007, MNRAS, 382, 1187 Kazantzidis, S., Bullock, J. S., Zentner, A. R., Kravstov, A. V., & Moustakas, L. A. 2008, ApJ, 688, 254 Kennicutt, R. C. Jr., Tamblyn, P., & Congdon, C. W. 1994, ApJ, 435, 22 Kewley, L. J., Geller, M. J., & Barton, E. J. 2006, AJ, 131, 2004 Kong, X., et al. 2000, AJ, 119, 2745 Langer, N., & Maeder, A. 1995, A&A, 295, 685 Maeder, A., & Meynet, G. 2001, A&A, 373, 555 Makarova, L. N., et al. 2002, A&A, 396, 473 Maraston, C., 1998, MNRAS, 300, 872 Marcillac, D., Elbaz, D., Charlot, S., Liang, Y. C., Hammer, F., Flores, H., Cesarsky, C., & Pasquali, A. 2006, A&A, 458, 369 Marigo, P., & Girardi, L. 2001, A&A, 377, 132 Mayya, Y. D., Bressan, A., Carrasco, L., & Hernandez-Martinez, L. 2006, ApJ, 649, 172 Mihos, J. C., & Hernquist, L. 1994, ApJ, 431, L9 Munoz-Mateos, J. C., Gil de Paz, A., Boissier, S., Zamorano, J., Jarrett, T., Gallego, J., & Madore, B. F. 2007, ApJ, 658, 1006 Naab, T., & Ostriker, J. P. 2006, MNRAS, 366, 899 Nikolaev, S., & Weinberg, M. D. 2000, ApJ, 542, 804 Perelmuter, J-M, & Racine, R. 1995, AJ, 109, 1055 Perelmuter, J-M, Brodie, J. P., & Huchra, J. P. 1995, AJ, 110, 620 Pohlen, M., & Trujillo, I. 2006, A&A, 454, 759 Puget, P., et al. 2004, Proc. SPIE, 5492, 978 Read, J. I., Lake, G., Agertz, O., & Debattista, V. P. 2008, MNRAS, 389, 1041 Robertson, B., Bullock, J. S., Cox, T. J., Di Matteo, T., Hernquist, L., Springel, V., & Yoshida, N. 2006, ApJ, 645, 986 Roskar, R., Debattista, V. P., Stintson, G. S., Quinn, T. R., Kaufmann, T., & Wadsley, J. 2008, ApJ, 675, L65 Sabbi, E., Gallagher, J. S., Smith, L. J., de Mello, D. F., & Mountain, M. 2008, ApJ, 676, L113 Sakai, S., & Madore, B. F. 1999, ApJ, 526, 599 Salasnich, B., Girardi, L., Weiss, A., & Chiosi, C. 2000, 361, 1023 Schroder, L. L., Brodie, J. P., Kissler-Patig, M., Huchra, J. P., & Phillips, A. C. 2002, AJ, 123, 2473 Sellwood, J. A., & Binney, J. J. 2002, MNRAS, 336, 785 Serra, P., & Trager, S. C. 2007, MNRAS, 374, 769 Stauffer, J. R., & Bothun, G. D. 1984, AJ, 89, 1702 Stetson, P. B. 1987, PASP, 99, 191 Stetson, P. B., & Harris, W. E. 1988, AJ, 96, 909 Sun, W.-H., et al. 2005, ApJ, 630, L133 Swartz, D. A., Ghosh, K. K., McCollough, M. L., Pannuti, T. G., Tennant, A. F., & Wu, K. 2003, ApJS, 144, 213 Taylor, E. N., & Webster, R. L. 2005, ApJ, 634, 1067 Tikhonov, N. A., Galazutdinova, O. A., & Drozdovsky, I. O. 2005, A&A, 431, 127 Trujillo, I., & Aguerri, J. A. L. 2004, MNRAS, 355, 82 Trujillo, I., & Pohlen, M. 2005, ApJ, 630, L17 Trundle, C., Dufton, P. L., Lennon, D. J., Smartt, S. J., & Urbaneje, M. A. 2002, A&A, 395, 519 Venn, K. A., McCarthy, J. K., Lennon, D. J., Przybilla, N., Kudritzki, R. P., & Lemke, M. 2000, ApJ, 541, 610 Wetzstein, M., Naab, T., & Burkert, A. 2007, MNRAS, 375, 805 Williams, B. F., et al. 2008, submitted to AJ, astro-ph 0810.2557 Woods, D. F., & Geller, M. J. 2007, AJ, 134, 527 Younger, J. D., Cox, T. J., Seth, A. C., & Hernquist, L. 2007, ApJ, 670, 269 Yun, M. S., Ho, P. T. P., & Lo, K. Y. 1994, Nature, 372, 530 Zaritsky, D., Kennicutt, R. C. Jr., & Huchra, J. P. 1994, ApJ, 420, 87 Zentner, A. R., & Bullock, J. S. 2003, ApJ, 598, 49
--- abstract: 'Interaction between collective monopole oscillations of a trapped Bose-Einstein condensate and thermal excitations is investigated by means of perturbation theory. We assume spherical symmetry to calculate the matrix elements by solving the linearized Gross-Pitaevskii equations. We use them to study the resonances of the condensate induced by temperature when an external perturbation of the trapping frequency is applied and to calculate the Landau damping of the oscillations.' address: - '$^1$Dipartimento di Fisica, Università di Trento and Istituto Nazionale per la Fisica della Materia, I-38050 Povo, Italy' - '$^2$Kapitza Institute for Physical Problems, 117454 Moscow, Russian Federation' author: - 'M. Guilleumas$^1$, L.P. Pitaevskii$^{1,2}$' date: 'June 14, 1999' title: 'Temperature-induced resonances and Landau damping of collective modes in Bose-Einstein condensed gases in spherical traps' --- Introduction ============ Since the discovery of Bose-Einstein condensation in magnetically trapped Bose gases, the study of the low-energy collective excitations has attracted considerable interest from both the experimental and theoretical points of view. Mean field theory has proven to be a good framework to study static, dynamic and thermodynamic properties of these trapped gases. In particular, it provides predictions of the frequencies of collective excitations that agree very well with the observed ones. Recently [@JILA; @MIT], the energy shifts and damping rates of these low-lying collective excitations have been measured as a function of temperature. However, these phenomena have not yet been completely understood theoretically. In this paper we study the influence of thermal excitations on collective oscillations of the condensate in the collisionless regime. Previous papers on this subject have been devoted mainly to the calculation of the Landau damping by means of perturbation theory. In Refs.[@Liu1; @PLA; @Liu2; @Giorgini1] only the uniform system has been considered, whereas in Refs.[@Fedichev1; @Fedichev2] Landau damping in trapped Bose gases has also been studied but using the semiclassical approximation for thermal excitations and the hydrodynamic approximation for collective oscillations. An important point of Refs.[@Fedichev1; @Fedichev2] is that the authors discuss the possible chaotic behavior of the excitations in an anisotropic trap. The frequency shift has also been studied for a trapped condensate in the collisionless regime in Ref.[@Fedichev2]. In the present work we study the interaction between collective and thermal excitations using the Gross-Pitaevskii equation and perturbation theory. We consider spherically symmetric traps, since in this case the spectrum of excitations is easily calculated, avoiding the use of further approximations. Even though the final results for anisotropic traps can be significantly different, a detailed investigation of spherical traps is instructive. We explore, in particular, the properties of monopole oscillations by studying the temperature-induced resonances that occur in the condensate when an external perturbation of the trapping frequency is applied and, also, the Landau damping associated with the interaction with thermally excited states. This paper is organized as follows. In section II we introduce the general equations that describe the elementary excitations of the condensate within the Bogoliubov theory [@bog]. In Sec.
III we recall the perturbation theory for a trapped Bose-condensed gas in order to study the interaction between elementary excitations. In Sec. IV we introduce the linear response function formalism and calculate the response function of the condensate when a small perturbation of the trapping frequency is applied. We derive analytic equations for the response function at zero temperature and treat perturbatively the contribution of the elementary excitations, which is related to Landau damping. In Sec. V we discuss the main results. Elementary excitations of an isotropic trap =========================================== We consider a weakly interacting Bose-condensed gas confined in an external potential $V_{\rm ext}$ at $T=0$. The elementary excitations of a degenerate Bose gas are associated with the fluctuations of the condensate. At low temperature they are described by the time dependent Gross-Pitaevskii (GP) equation for the order parameter [@G; @P]: $$i\hbar {\partial \over \partial t} \Psi ({\bf r},t) = \left( - { \hbar^2 \nabla^2 \over 2m} + V_{\rm ext}({\bf r}) + g \mid \!\Psi({\bf r},t) \!\mid^2 \right) \Psi({\bf r},t) \; , \label{TDGP}$$ where $\int \! d{\bf r} |\Psi|^2= N_0$ is the number of atoms in the condensate. At zero temperature it coincides with the total number of atoms $N$, except for a very small difference $\delta N \ll N$ due to the quantum depletion of the condensate. The coupling constant $g$ is proportional to the $s$-wave scattering length $a$ through $g=4\pi \hbar^2 a/m$. In the present work we will discuss the case of positive scattering length, as for $^{87}$Rb atoms. The trap is included through $V_{\rm ext}$, which is chosen here in the form of an isotropic harmonic potential: $V_{\rm ext}(r)= (1/2) m \omega_{\rm ho}^2 r^2$. The harmonic trap provides a typical length scale for the system, $a_{\rm ho}= (\hbar/m\omega_{\rm ho})^{1/2}$. So far experimental traps have axial symmetry, with different radial and axial frequencies, but experiments with spherical traps are also feasible [@KetVa]. The choice here of a spherical trap has two different reasons. First, it greatly reduces the numerical effort and will allow us to study the interaction of oscillations with elementary excitations without any further approximations. Second, the energy spectrum of the excitations in such a trap is well resolved yielding to the appearance of well-separated resonances. In anisotropic traps, conversely, the spectrum of excitations is much denser. The normal modes of the condensate can be found by linearizing equation (\[TDGP\]) , i.e., looking for solutions of the form $$\Psi({\bf r},t) = e^{-i\mu t/\hbar} \left[ \Psi_0 ({\bf r}) + u({\bf r}) e^{-i \omega t} + v^*({\bf r}) e^{i \omega t} \right] \label{linearized}$$ where $\mu$ is the chemical potential and functions $u$ and $v$ are the “particle" and “hole" components characterizing the Bogoliubov transformations. After inserting in Eq. (\[TDGP\]) and retaining terms up to first order in $u$ and $v$, one finds three equations. 
The first one is the nonlinear equation for the order parameter of the ground state, $$\left( H_0 + g \Psi_0^2 ({\bf r}) \right) \Psi_0({\bf r}) = \mu \Psi_0({\bf r}) \, , \label{groundstate}$$ where $H_0= - (\hbar^2/2m) \nabla^2 + V_{\rm ext}({\bf r})$; while $u({\bf r})$ and $v({\bf r})$ obey the following coupled equations [@P]: $$\begin{aligned} \hbar \omega u({\bf r}) &=& [ H_0 - \mu + 2 g \Psi_0^2] u ({\bf r}) + g \Psi_0^2 v ({\bf r}) \label{coupled1} \\ - \hbar \omega v({\bf r}) &=& [ H_0 - \mu + 2 g \Psi_0^2] v ({\bf r}) + g \Psi_0^2 u ({\bf r}) \; . \label{coupled2}\end{aligned}$$ Numerical solutions of these equations have been found by different authors [@Edwards; @Singh; @Esry; @Zaremba; @You; @PRA]. In the present work, we use them to calculate the response function of the condensate under an external perturbation and the Landau damping of collective modes. When the adimensional parameter $N a/a_{\rm ho}$ is large, the time-dependent GP equation reduces to the hydrodynamic equations [@Stringari]: $$\frac{\partial \rho}{\partial t}+\nabla({\bf v}\rho)=0 \label{hydron}$$ $$m\frac{\partial}{\partial t}{\bf v}+\nabla(V_{\rm ext}+g\rho+\frac{mv^2}{2})=0\,, \label{hydrov}$$ where $\rho({\bf r},t)=\mid \Psi({\bf r},t) \mid^2$ is the particle density and the velocity field is ${\bf v}({\bf r},t)=(\Psi^* \nabla \Psi-\Psi \nabla \Psi^*)\hbar/(2mi\rho)$. The static solution of equations (\[hydron\])-(\[hydrov\]) gives the Thomas-Fermi ground state density, which in the spherical symmetric trap reads $$\rho(r)=g^{-1} [\mu-V_{\rm ext}(r)] \label{TF}$$ in the region where $\mu>V_{\rm ext}(r)$, and $\rho=0$ elsewhere. The chemical potential $\mu$ is fixed by the normalization of the density to the number of particles $N_0$ in the condensate. The density profile (\[TF\]) has the form of an inverted parabola, which vanishes at the classical turning point $R$ defined by the condition $\mu=V_{\rm ext}(R)$. For a spherical trap, this implies $$\mu=\frac{m \omega_{\rm ho}^2 R^2}{2} \,. \label{TFR}$$ It has been shown [@Stringari] that the hydrodynamic equations (\[hydron\]) and (\[hydrov\]) correctly reproduce the low-lying normal modes of the trapped gas in the linear regime when $N a/a_{\rm ho}$ is large (see however Ref.[@hydro]). Perturbation theory =================== Let us briefly recall the perturbation theory for the interaction between collective modes of a condensate and thermal excitations as it was developed in Ref.[@PLA]. Suppose that a certain mode of the condensate has been excited and, therefore, it oscillates with the corresponding frequency $\Omega_{\rm osc}$. We assume that this oscillation is classical, i.e. the number of quanta of oscillation ($n_{\rm osc}$) is very large. Then, the energy of the system associated with the occurrence of this classical oscillation can be calculated as $E=\hbar\Omega_{\rm osc}\, n_{\rm osc}$ with $n_{\rm osc}\gg1$. Due to interaction effects, the thermal bath can either absorb or emit quanta of this mode producing a damping of the collective oscillation. The energy loss can be written as $$\dot{E}=-\hbar \Omega_{\rm osc} (W^{(a)}-W^{(e)}) \,, \label{eq1}$$ where $W^{(a)}$ and $W^{(e)}$ are the probabilities of absorption and emission of one quantum $\hbar \Omega_{\rm osc}$, respectively. 
The interaction between excitations is small, so one can use perturbation theory to calculate the probabilities for the transition between a $i$-th excitation and a $k$-th one, available by thermal activation $$W=\pi \sum_{i,k} \mid \langle k \mid V_{\rm int} \mid i \rangle \mid^2 \,. \label{prob}$$ Let $E_{i}$ and $E_{k}$ be the corresponding energies and assume $E_{k}> E_{i}$. Since energy is conserved during the transition process, one has $E_{k}=E_{i}+\hbar\Omega_{\rm osc}$. The interaction term in second quantization is given by $$V_{\rm int}=\frac{g}{2} \int d{\bf r}\, \hat{\Psi}^{\dag} \hat{\Psi}^{\dag} {\hat \Psi}{\hat \Psi} \,. \label{Vint}$$ In the framework of Bogoliubov theory, the field operator $\hat \Psi$ can be written as the sum of the condensate wave function $\Psi_0$, which is the order parameter at equilibrium, and its fluctuations $\delta \hat{\Psi}$, where $\hat{\Psi}=\Psi_0+\delta \hat{\Psi}$ \[see Eq. (\[linearized\])\]. The fluctuations can be expressed in terms of the annihilation ($\alpha$) and creation ($\alpha^{\dag}$) operators of the elementary excitations of the system: $$\delta\hat{\Psi}=\sum_{j}[u_j({\bf r})\alpha_j + v_j^*({\bf r}) \alpha_j^{\dag}] \,, \label{uv}$$ where the functions $u$ and $v$ are properly normalized solutions of equations (\[coupled1\])-(\[coupled2\]). In the sum (\[uv\]) one can select a low energy collective mode, for which we use the notation $u_{\rm osc},v_{\rm osc},\alpha_{\rm osc}, \alpha^{\dag}_{\rm osc}$, and investigate its interaction with higher energy single-particle excitations, for which we use the indices $i,k$ as in (\[prob\]). These latter excitations are assumed to be thermally excited. Inserting expression (\[uv\]) into Eq. (\[Vint\]) one rewrites the interaction term $V_{\rm int}$ in terms of the annihilation and creation operators. Since we want to study the decay process in which a quantum of oscillation $\hbar \Omega_{\rm osc}$ is annihilated (created) and the $i$-th excitation is transformed into the $k$-th one (or viceversa), we will keep only terms linear in $\alpha_{\rm osc}\,(\alpha^{\dag}_{\rm osc})$ and in the product $\alpha_k^{\dag} \alpha_i \,(\alpha_k \alpha_i^{\dag})$. And the energy conservation during the transition process will be ensured by the delta function $\delta(E_k-E_i-\hbar\Omega_{\rm osc})$. This mechanism is known as Landau damping [@Beliaev]. Assuming that at equilibrium the states $i,k$ are thermally occupied with the usual Bose factor $f_i=[\exp(E_i/k_{B}T)-1]^{-1}$, the rate of energy loss can be calculated as [@PLA] $$\dot{E}=-2 \pi \frac{E}{\hbar} \sum_{ik} \mid A_{ik}\mid^2 \delta(E_k-E_i-\hbar\Omega_{\rm osc})(f_i-f_k) \,, \label{dE/dt}$$ where $$\begin{aligned} A_{ik}&=&2g\int d{\bf r}\, \psi_{0}[(u_k^* v_i+v_k^* v_i+u_k^* u_i) u_{\rm osc} \nonumber \\ & & + (v_k^* u_i+v_k^* v_i+u_k^* u_i)v_{\rm osc}]. \label{matrixel}\end{aligned}$$ Let us define the dissipation rate $\gamma$ through the following relation between the energy of the system $E$ and its dissipation $\dot{E}$: $$\dot{E}=-2\gamma E \,. \label{dampdef}$$ Using expression (\[dE/dt\]) $\gamma$ can be calculated as $$\frac{\gamma}{\Omega_{\rm osc}}=\sum_{ik} \gamma_{ik} \, \delta(\omega_{ik}-\Omega_{\rm osc})\,, \label{damping2}$$ where the transition frequencies $\omega_{ik}=(E_k-E_i)/\hbar$ are positive. The “damping strength" $$\gamma_{ik}=\frac{\pi}{\hbar^2\Omega_{\rm osc}} \mid A_{ik}\mid^2(f_i-f_k) \label{dampingik}$$ has the dimensions of a frequency. 
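As a bookkeeping aid, the damping strength of Eq. (\[dampingik\]) can be assembled directly from the excitation energies, the matrix element and the Bose factors. The short sketch below is only an illustration and is not the code used in the paper: the matrix element $A_{ik}$ would have to come from the numerical Bogoliubov solutions, and the numbers passed to the function are placeholders quoted in reduced units.

```python
# Hedged sketch: evaluate the Bose factors and the damping strength
# gamma_ik = (pi / (hbar^2 Omega_osc)) |A_ik|^2 (f_i - f_k)  of Eq. (dampingik).
# Energies, temperatures and frequencies are quoted in units of hbar*omega_ho,
# so hbar is set to 1; the value of A_ik below is a placeholder.
import numpy as np

def bose(E, kT):
    """Equilibrium occupation f = 1/(exp(E/kT) - 1)."""
    return 1.0 / np.expm1(E / kT)

def damping_strength(A_ik, E_i, E_k, Omega_osc, kT, hbar=1.0):
    """Damping strength gamma_ik for the transition i -> k (E_k > E_i)."""
    return (np.pi / (hbar**2 * Omega_osc)) * abs(A_ik)**2 * (bose(E_i, kT) - bose(E_k, kT))

# Illustrative numbers only: a transition with omega_ik close to the drive
# frequency and a made-up matrix element |A_ik| = 0.01.
print(damping_strength(A_ik=0.01, E_i=5.0, E_k=7.24, Omega_osc=2.24, kT=15.7))
```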
In this work we calculate the quantities $\gamma_{ik}$ by inserting the numerical solutions $u$ and $v$ of Eqs. (\[linearized\]-\[coupled2\]) into the integrals (\[matrixel\]). The results will be discussed in section V. Response function ================= The results of the previous section can also be used to study the effect that an external perturbation of the trap has on the collective excitations of the condensate. Let us assume a trapping frequency of the form \[$\omega_{\rm ho} +\delta \omega_{\rm ho}(t)$\], where $\delta \omega_{\rm ho} \sim \exp(-i \omega t)$ is a time-dependent modulation. Assuming that the perturbation is small, one can use the response function formalism to describe the fluctuations of the system. Let us briefly recall the basic formalism [@Landau]. The behaviour of a system under an external perturbation can be described by studying the fluctuations that the external interaction may induce in a certain physical quantity of the system. An external perturbation acting on the system is described by a new term in the Hamiltonian of the type $$\hat{V}=-\hat{x}f(t)\,, \label{V}$$ where $\hat{x}$ is the quantum operator of the physical quantity that may fluctuate, and $f(t)$ is the “perturbing force”. The mean value $\langle x \rangle $ is zero in the equilibrium state, in the absence of the perturbation, and is not zero when it is present. For a periodic perturbation $f(t) \sim \exp(-i\omega t)$, the relation between $\langle x \rangle $ and $f(\omega)$ is $$\langle x \rangle =\alpha(\omega)f \,, \label{respons}$$ where $\alpha(\omega)$ is the response function, also called the generalised susceptibility. In general $\alpha(\omega)$ is a complex function. It can be seen that the imaginary part of the susceptibility determines the energy absorption $Q$ by the system from the external force $f$ through the following relation: $$Q=\frac{\omega}{2} Im[\alpha(\omega)]\, |f|^2 \,, \label{dissipation}$$ and that the real and imaginary parts of $\alpha(\omega)$ satisfy the Kramers-Kronig relation $$Re[\alpha(\omega)]=\frac{2}{\pi} \,P \int_{0}^{\infty} \frac{Im[\alpha(\xi)]}{\xi^2-\omega^2}\,\xi\, d\xi \,, \label{KramersKronig}$$ where $P$ denotes the principal value of the integral. The time-dependent external drive $\delta \omega_{\rm ho}$ induces oscillations of the condensate density $\delta \rho $ with frequency $\omega$; $\rho(r,t)=\rho(r,0)+\delta\rho$. Expanding the energy due to the confining potential, $E_{\rm ho}=\int V_{\rm ext} \,\rho \, d{\bf r}$, with respect to $\delta \omega_{\rm ho}$ and $\delta \rho $ one obtains the “mixed” term, corresponding to the Hamiltonian (\[V\]): $$V=m \omega_{\rm ho} \delta\omega_{\rm ho} \int r^2 \delta \rho \,d{\bf r}\,. \label{V2}$$ Comparing it with Eq. (\[V\]) one can identify the perturbing force and the corresponding coordinate as $$f=-m \omega_{\rm ho} \delta \omega_{\rm ho} \,\,, \,\,\, x=\int r^2 \delta \rho(r,t) d{\bf r}\, . \label{f}$$ Note that the first order term $m\omega _{\rm ho}\delta \omega_{\rm ho} \int r^2 \rho(r,0)d{\bf r}$ can be omitted because it gives an additive shift in the Hamiltonian which does not contribute to the equations of motion of the system. Once we have identified $f$ and $x$, we can calculate the response function of the condensate $\alpha(\omega)$. According to the definition one has $$x=\alpha(\omega) f \,.
\label{alphadef}$$ Let us present the response function in the form $\alpha(\omega)=\alpha_0(\omega)+\alpha_1(\omega)$, where $\alpha_0(\omega)$ corresponds to the response function of the condensate at $T=0$, i.e., calculated without elementary excitations, and $\alpha_1(\omega)$ is the contribution of the excitations. At low temperatures it can be assumed that $\alpha_1(\omega) \ll \alpha_0(\omega)$ and then $\alpha_1$ can be treated as a perturbation. We proceed as follows. First, we use the hydrodynamic approximation to obtain the response function at $T=0$. Then, within a perturbation theory, we introduce the contribution of the elementary excitations at finite $T$ to obtain $\alpha_1(\omega)$. Calculation of $\alpha_0(\omega)$ at $T=0$ ------------------------------------------ For a spherically symmetric breathing mode [@Lev], one can easily prove that the hydrodynamic equations of motion (\[hydron\]) and (\[hydrov\]) admit analytic solutions of the form [@Concetta1] $$\rho(r,t)=a_0(t)-a_r(t)r^2\,\,, \,\,\,v(r,t)=\alpha_r(t) r\,. \label{n(r,t)}$$ These equations are restricted to the region where $\rho\geq 0$. Notice that they include the ground state solution (\[TF\]) in the Thomas-Fermi limit. This is recovered by putting $\alpha_r=0$, $a_r=m\omega_{\rm ho}^2/(2g)$, and $a_0=\mu/g$. Inserting Eqs. (\[n(r,t)\]) into the hydrodynamic equations, one obtains two coupled differential equations for the time dependent coefficients $a_r(t)$ and $\alpha_r(t)$, while at any time $a_0=(15 N/8\pi)^{2/5} a_r^{3/5}$ is fixed by the normalization of the density to the total number of atoms. The form (\[n(r,t)\]) for the density and velocity distributions is equivalent to a scaling transformation of the order parameter. That is, at each time, the parabolic shape of the density is preserved, while the classical radius $R$, where the density (\[n(r,t)\]) vanishes, scales in time as [@Dalfovo] $$R(t)=R(0)\, b(t)=\sqrt{\frac{2\mu}{m\omega_{\rm ho}^2}} \, b(t)\,, \label{R(t)}$$ where the unperturbed radius $R(0)$ is given by Eq. (\[TFR\]). The relation between the scaling parameter $b(t)$ and the coefficient $a_r(t)$ is $a_r=m\omega_{\rm ho}^2/(2gb^5)$. Inserting it into Eq. (\[n(r,t)\]) we obtain $$\rho(r,t)=-\frac{1}{g}\left[m\omega_{\rm ho}^2 r^2 \frac{1}{2b^5}- \mu \frac{1}{b^3} \right] \,. \label{n(r,t)2}$$ The hydrodynamic equations then yield $\alpha_r=\dot{b}/b$ and $$\ddot{b}+[\omega_{\rm ho}+\delta \omega_{\rm ho}(t)]^2 \,b -\frac{\omega_{\rm ho}^2}{b^4}=0\,. \label{b(t)}$$ The second and third terms of (\[b(t)\]) give the effect of the external trap and of the interatomic forces, respectively. From (\[R(t)\]) and (\[b(t)\]) it follows that at equilibrium $b=1$ and $\dot{b}=0$. For a small driving strength $\delta \omega_{\rm ho}$, one can assume that the radius of the cloud is perturbed around its equilibrium value, so $$R(t)=R(0)+\delta R(t) \,\,, \,\,\,b(t)=1+\delta b(t)\,, \label{dR}$$ where $$\delta b(t)=\frac{\delta R(t)}{R(0)}\,. \label{fract}$$ This means that $\delta b$ is the fractional amplitude of oscillations of the radius and, therefore, it is a measurable quantity. In the small amplitude limit, one can linearize Eq.
(\[b(t)\]) with respect to $\delta\omega_{\rm ho}$ and $\delta b$ yielding the following equation $$\delta \ddot{b}+5\omega_{\rm ho}^2 \delta b=-2\omega_{\rm ho} \delta\omega_{\rm ho} \,.$$ The solution is $$\delta b(t)=\frac{-2 \omega_{\rm ho}}{\Omega_{\rm M}^2-\omega^2}\, \delta\omega_{\rm ho} \,, \label{db}$$ where $\Omega_{\rm M}=\sqrt{5} \, \omega_{\rm ho}$ corresponds to the frequency of the normal mode of monopole in the hydrodynamic limit [@Stringari]. Keeping only the lowest order in the small perturbation $\delta b$, Eq. (\[n(r,t)2\]) yields $$\rho(r,t)=\rho(r,0)+ \frac{1}{g}\left[5 \frac{1}{2}m\omega_{\rm ho}^2 r^2-3\mu\right]\delta b\,, \label{n(r,t)3}$$ and using the Thomas-Fermi radius at equilibrium (\[TFR\]), it follows that the density fluctuation is given by $$\begin{aligned} \delta \rho(r,t)&=&\rho(r,t)-\rho(r,0) \nonumber \\ & =&\frac{5\mu}{g}\left[\left(\frac{r}{R(0)}\right)^2-\frac{3}{5}\right] \delta b(t) \,. \label{dn}\end{aligned}$$ We can calculate now $x$ using (\[f\]) and (\[dn\]), finding $$x=C \, \delta b(t) \,, \label{dx2}$$ where $C=16 \pi \mu R(0)^5/(35 g)$. Then, from Eq. (\[db\]), one gets $$x= \frac{-2 \, C \,\omega_{\rm ho}}{\Omega_{\rm M}^2-\omega^2}\,\delta \omega_{\rm ho} \,. \label{dx3}$$ At $T=0$ there are no thermally excited states and, hence, $\alpha(\omega)=\alpha_0(\omega)$. By comparing the definition (\[alphadef\]) with (\[dx3\]) one has $$\alpha_0(\omega)=\frac{2 C}{m(\Omega_{\rm M}^2-\omega^2)}\,. \label{alpha0}$$ This is the response function at zero temperature without including any dissipation. Therefore $\alpha_0(\omega)$ is real, i.e., the induced oscillations at $T=0$ are undamped. The energy of oscillation can be calculated as twice the mean kinetic energy associated to the mode, $E=\int d{\bf r}\, \rho(r,0) v^2$. For a monopole mode in an isotropic trap, the calculation [@Lev; @Concetta1] gives $E=\frac{15}{7}N\mu|\delta b|^2$, where $|\delta b|$ is the amplitude of the oscillation of the cloud (\[fract\]). Using Eqs. (\[dx2\]) and (\[alphadef\]) at $T=0$, it follows that $$E=\frac{15}{7 C^2} \mu N |\alpha_0(\omega) f|^2 \,. \label{Eosc}$$ Calculation of $\alpha_1(\omega)$ --------------------------------- Now we want to calculate the contribution of the thermally excited states to the response function. We study the low temperature regime, where $\alpha_1 \ll \alpha_0$ and the energy of oscillation (\[Eosc\]) can be estimated using $\alpha_0$ instead of $\alpha(\omega)=\alpha_0(\omega)+ \alpha_1(\omega)$. The effect of $\alpha_1$ will be introduced within a perturbation theory. We have already seen that the thermal excitations can either absorb or emit quanta of oscillation $\hbar \omega$ and thus they will dissipate energy. The contribution of the elementary excitations to the susceptibility will be a complex function, $\alpha_1(\omega)= Re[\alpha_1]+{\rm i} Im[\alpha_1]$, whose imaginary part is related to the absorption of energy $Q$ of the external perturbation. However, in a stationary solution which is the case under consideration, the absorption $Q$ must be compensated by the energy dissipation (\[dampdef\]) due to the interaction with the elementary excitations. Therefore, $$Q+\dot{E}=0 \,. \label{Q}$$ Let us rewrite the definition of the damping rate (\[dampdef\]) by using (\[damping2\]) and (\[dampingik\]) with a generic oscillation frequency $\omega$: $$\dot{E}=-2\omega \sum_{ik} \gamma_{ik} \, \delta(\omega_{ik}-\omega) \, E \,. \label{Edot}$$ Inserting Eq. 
(\[Eosc\]) and defining $\beta(\omega)=\alpha_0(\omega)/C= 2/[m(\Omega_{\rm M}^2-\omega^2)]$ one obtains the energy dissipation $$\dot{E}=-2\omega \frac{15 \mu N}{7}\sum_{ik} \gamma_{ik} \, \delta(\omega_{ik}-\omega) \, |\beta(\omega)|^2 |f|^2 \,. \label{Edot2}$$ Let us recall that the energy dissipation according to Eqs. (\[dissipation\]) and (\[Q\]) can also be calculated from the imaginary part of the response function $\alpha(\omega)=\alpha_0(\omega)+\alpha_1(\omega)$. Since $\alpha_0(\omega)$ is real, Eq. (\[dissipation\]) becomes $$Q=\frac{\omega}{2} Im[\alpha_1(\omega)]\, |f|^2=-\dot{E} \,. \label{dissipation2}$$ Comparing Eqs. (\[Edot2\]) and (\[dissipation2\]) one can calculate the imaginary part of $\alpha_1(\omega)$ as $$Im[\alpha_1(\omega)]= 4 \,\frac{15 \mu N}{7} \sum_{ik} \gamma_{ik} \,\delta(\omega_{ik}-\omega) \, |\beta(\omega)|^2\,. \label{Ima1}$$ Using the Kramers-Kronig relation (\[KramersKronig\]) one finds the real part $$Re[\alpha_1(\omega)]=\frac{8}{\pi} \sum_{ik} \frac{\omega_{ik} \gamma_{ik}}{\omega_{ik}^2-\omega^2} \frac{15 \mu N}{7} |\beta(\omega_{ik})|^2 \,. \label{Rea1}$$ Now we have all the ingredients to calculate the response function of a spherically symmetric trapped condensate when the monopole mode is excited and a small perturbation of the trapping frequency $\delta \omega_{\rm ho} \sim \exp(-i \omega t)$ is applied. It can be calculated within first-order perturbation theory as $\alpha(\omega)=\alpha_0(\omega)+Re[\alpha_1(\omega)] +{\rm i} Im[\alpha_1(\omega)]$, by using Eqs. (\[alpha0\]), (\[Rea1\]) and (\[Ima1\]), respectively. It is worth stressing that the real part of the susceptibility diverges at $\Omega_{\rm M}$ (resonance of the condensate at $T=0$) but also at $\omega_{ik}$, which are the frequencies of the thermally excited modes that, due to the interaction, are coupled with the monopole. Actually, the resonances of the condensate can be found by measuring the fractional amplitude of oscillations of the cloud radius $\delta b$ at different perturbing frequencies. This measurable quantity can easily be related to the response function $\alpha(\omega)$ through Eqs. (\[dx2\]) and (\[alphadef\]): $$\delta b=-\alpha(\omega)\,\frac{m \omega_{\rm ho}}{C}\, \delta \omega_{\rm ho}\,. \label{db2}$$ Note that the perturbation theory we have used is valid when $\mid \alpha _1\mid \ll \mid \alpha _0\mid$. This condition becomes very restrictive at $\omega$ near $\Omega_{\rm M}$. However, it is not difficult to improve the approximation in this region by taking advantage of the analogy between the response function and the Green function $G$. It is well known that the Green function obeys the Dyson equation [@Abrikosov], which relates the perturbed quantity ($G$) and the unperturbed one ($G_0$) through the inverse functions ($G^{-1}$ and $G_0^{-1}$) in such a way that a perturbation theory for $G^{-1}$ has a wider applicability than one for $G$. Analogously, we will find a relation between the inverse response functions, perturbed ($\alpha^{-1}$) and unperturbed ($\alpha_0^{-1}$). One has $$\frac{1}{\alpha}=\frac{1}{(\alpha_0+\alpha_1)}= \frac{1}{\alpha_0 (1+\alpha_1/\alpha_0)} \, \label{alpha-1}$$ and formally with the same accuracy $$\begin{aligned} \frac{1}{\alpha}&\simeq &\frac{1}{\alpha_0} (1-\frac{\alpha_1}{\alpha_0}) \nonumber \\ &=&\frac{m}{2 C}(\Omega_{\rm M}^2-\omega ^2)- \frac{8}{\pi} \sum_{ik} \frac{\omega_{ik} \gamma_{ik}}{\omega_{ik}^2-\omega^2} \frac{15 \mu N}{7C^2} \,.
\label{alpha-1b}\end{aligned}$$ Now the applicability of (\[alpha-1b\]) is restricted only by the condition that the second term is small compared with $\frac{m}{2 C} \Omega_{\rm M}^2$. It is worth noting that according to Eq. (\[alpha-1b\]) the poles of $\alpha (\omega)$ related to the resonances are shifted with respect to the frequencies $\omega_{ik}$ and are given by the equation $\alpha_1(\omega_R')/\alpha_0(\omega_R')=1$. However, these shifts are very small. RESULTS ======= In order to present numerical results we choose a gas of $^{87}$Rb atoms (scattering length $a=5.82 \cdot 10^{-7}$ cm). For the spherical trap we fix the frequency $\omega_{\rm ho}= 2\pi \times 187$ Hz, which is the geometric average of the axial and radial frequencies of Ref. [@JILA], and corresponds to the oscillator length $a_{\rm ho}=0.791 \cdot 10^{-4}$ cm. We solve the linearized Gross-Pitaevskii equations (\[linearized\]-\[coupled2\]) at zero temperature to obtain the ground state wave function $\Psi_0$ and the spectrum of excited states $E_i$ as well as the corresponding functions $u_i({\bf r}),v_i({\bf r})$. In spherically symmetric traps the eigenfunctions are labeled by $i=(n,l,m)$, where $n$ is the number of nodes in the radial solution, $l$ is the orbital angular momentum and $m$ its projection. The eigenfunctions are $u_{nlm}({\bf r})=U_{nl}(r) Y_{lm}(\theta,\psi)$, the energies $E_{nl}$ are $(2l+1)$-fold degenerate and the occupation of the thermally excited states is fixed by the Bose factor. For a fixed number of trapped atoms, $N$, the number of atoms in the condensate, $N_0$, depends on temperature $T$. At zero temperature all the atoms are in the condensate, except a negligible quantum depletion [@PRA]. At finite temperature the condensate atoms coexist with the thermal bath. In the thermodynamic limit [@Giorgini3] the $T$-dependence of the condensate fraction is $N_0(T)= N[1-(T/T_c^0)^3]$. We consider the collective excitations in the collisionless regime. This regime is achieved at low enough temperature. The excitation spectrum at low temperature can be safely calculated by neglecting the coupling between the condensate and thermal atoms [@Popov]. This means that the excitation energies at a given $T$ can be obtained within Bogoliubov theory at $T=0$ normalizing the number of condensate atoms to $N_0(T)$. We investigate the monopole mode ($l=m=0$ and $n=1$). The functions $u_{\rm osc}$ and $v_{\rm osc}$ have no angular dependence, and from Eq. (\[matrixel\]) it is straightforward to see that the matrix element $A_{ik}$ couples only those energy levels ($i,k$) with the same quantum numbers $l$ and $m$. That is, the selection rules corresponding to the monopole-like transition are $\Delta l=0$ and $\Delta m=0$. It is obvious, also, that different pairs of levels with the same quantum numbers $n$ and $l$ but different $m$ give the same contribution. Therefore, only the integration of the radial part has to be done numerically. For fixed $N_0$ and at a given temperature, we calculate the damping strengths (\[dampingik\]) for the transitions $\omega_{ik}$ coupled with the monopole. In Figure 1 we show the values of $\gamma_{ik}$ (in units of the frequency of the monopole $\Omega_{\rm M}$) for $N_0=50000$ $^{87}$Rb atoms at $k_B T = \mu$.
The arrow points to the frequency of the breathing mode $\Omega_{\rm M}=2.231 \,\omega_{\rm ho}$, and the chemical potential is $\mu=15.69 \,\hbar \omega_{\rm ho}$ \[these values are numerical results of the linearized Gross-Pitaevskii equations (\[linearized\])-(\[coupled2\]) for $N_0=50000$ rubidium atoms\]. The positions of the bars correspond to the allowed transition frequencies $\omega_{ik}$ (in units of $\omega_{\rm ho}$) whereas their heights give the numerical values of $\gamma_{ik}$ [@dipole]. One can see that there are two different types of allowed transitions $\omega_{ik}$. The damping strength associated with most of them is very small. Conversely, there are a few transitions which give relatively large values of $\gamma_{ik}$. The latter correspond to transitions between the lowest levels ($n_k=1, n_i=0$) for different values of $l$ ($l= 2, 3, 4, 5$). The main reason for these “strong transitions” is that the temperature occupation factor for these low-lying levels is large. Moreover, the calculation shows that the matrix elements are also enhanced compared to other transitions. This is due to the fact that the radial wave functions involved in the integration have either one ($n_k=1$) or no node ($n_i=0$), in contrast to the oscillating character of the radial wave functions associated with higher levels [@hydro]. The contribution of the other transitions is like a small “background” which is difficult to resolve on the scale of the figure. A close-up view of the damping strengths of the transition frequencies around the monopole is displayed in the inset of Fig. 1 in order to show the dense background. It is worth stressing that such a distinction between “background” and “strong” transitions depends on the number of condensed atoms in the system and, of course, on temperature. When the number of atoms in the condensate increases, the number of excited states accessible by thermal excitation also increases, leading to a denser and less easily resolved background. In Figure 2 we present the same as in Fig. 1 but for $N_0=5000$ atoms of rubidium at $k_B T = \mu$, where here $\mu=6.25 \hbar \omega_{\rm ho}$. In this case, one can see that the difference between the “strong” and “weak” transitions is not as pronounced as in a larger condensate, since all damping strengths can be seen on the same scale. We can conclude that at large $N_0$ we have actually two different phenomena. The strong transitions create temperature-induced resonances which can be observed in direct experiments. The background transitions give rise to Landau damping of the collective oscillations (see subsection B). Temperature-induced resonances ------------------------------ Using the transition frequencies $\omega_{ik}$ and the corresponding damping strengths $\gamma_{ik}$, we have calculated the response function $\alpha(\omega)$. At zero temperature, the response function $\alpha_0(\omega)$ given by Eq. (\[alpha0\]) gives a resonance at the monopole frequency $\Omega_{\rm M}=\sqrt 5 \omega_{\rm ho}$ evaluated in the hydrodynamic regime. Due to the interaction, thermally excited modes are coupled with the monopole. This means that when one excites the breathing mode of the condensate, the elementary excitations can give rise to other resonances at $\omega_{ik}$, which are the frequencies where $Re[\alpha_1(\omega)]$ diverges \[see Eq. (\[Rea1\])\]. We will now discuss the conditions for the observation of these effects in actual experiments.
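As a hedged aside, not part of the original analysis: the values quoted above for $N_0=50000$ can be checked against two simple estimates, the Thomas-Fermi chemical potential $\mu \approx (\hbar\omega_{\rm ho}/2)(15 N_0 a/a_{\rm ho})^{2/5}$, which follows from normalizing the density (\[TF\]) to $N_0$, and the hydrodynamic monopole frequency $\sqrt{5}\,\omega_{\rm ho}$. The sketch below only evaluates these estimates with the experimental parameters of Sec. V; it is not the Bogoliubov calculation itself.

```python
# Hedged consistency check: compare the Thomas-Fermi and hydrodynamic
# estimates with the Bogoliubov values quoted in the text
# (mu = 15.69 hbar*omega_ho and Omega_M = 2.231 omega_ho for N0 = 50000).
import numpy as np

hbar = 1.0546e-34           # J s
m_Rb = 86.909 * 1.6605e-27  # kg, 87Rb atomic mass
a    = 5.82e-9              # m, scattering length quoted in the text
w_ho = 2 * np.pi * 187.0    # rad/s, trap frequency quoted in the text

a_ho  = np.sqrt(hbar / (m_Rb * w_ho))        # oscillator length
N0    = 50_000
mu_TF = 0.5 * (15 * N0 * a / a_ho) ** 0.4    # in units of hbar*omega_ho

print(f"a_ho    ~ {a_ho * 100:.3e} cm        (text: 0.791e-4 cm)")
print(f"mu_TF   ~ {mu_TF:.2f} hbar*w_ho      (Bogoliubov: 15.69)")
print(f"Omega_M ~ sqrt(5) = {np.sqrt(5):.3f} w_ho (Bogoliubov: 2.231)")
```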
In particular, we calculate the contribution of these resonances to the response function and estimate the associated strengths. Let us study the resonances at $k_B T= \mu$ for $N_0=150 000$ atoms of $^{87}$Rb. The behavior of the damping coefficients $\gamma_{ik}$ is analogous to the one for $50 000$ condensate atoms (see Fig. 1), but in this case the difference between “strong” resonances and the small background is even bigger: the dense background is no longer resolvable on the scale of the strong resonances. There are five resonances that stand out from the others; we label their frequencies $\omega_{R}$ and the corresponding damping strengths $\gamma_{R}$ (see Table 1 for numerical values). For perturbing frequencies close to the monopole $\omega \sim \Omega_{\rm M}$, the monopole susceptibility, Eq. (\[alpha0\]), can be approximated as $$\alpha_0(\omega)=\frac{2C}{m(\Omega_{\rm M}-\omega)(\Omega_{\rm M}+\omega)} \simeq A_0\frac{1}{(\Omega_{\rm M}-\omega)}\,, \label{aproxalpha0}$$ where $A_0=C/(m \Omega_{\rm M})$. Analogously, $\alpha _1(\omega)$ near each resonance $\omega \sim \omega_{R}$ can be presented in the form $\alpha _1(\omega) \simeq A_{1}/(\omega _{R}-\omega)$. The ratio $A_{1}/A_0$ is a measure of the relative intensity between the temperature-induced and monopole resonances. Table 1 displays the numerical values of the relative intensity for each temperature-induced resonance $\omega_{R}$ with respect to the monopole one, for $N_0=150 000$ atoms in the condensate at $k_B T= \mu$. The relative strength of the response function ($A_{1}/A_0$) at $\omega_{R}$ depends not only on the damping coefficient $\gamma_{R}$ but also on $(\Omega_{\rm M}^2-\omega_{R}^2)^{-1}$. This means that a mode $\omega_{R}$ will be easier to excite, i.e., the strength of the response will be larger, when it is close to the frequency of the monopole. Note also that the resonance strength increases with temperature through $\gamma_{R}$. From Table 1 one can see that the strongest resonance occurs at $\omega_R=2.2576\, \omega_{\rm ho}$, which is resolvable from the monopole frequency $\Omega_{\rm M}=2.234\, \omega_{\rm ho}$ and has a large enough relative strength to be observed. This means that, by tuning the perturbation frequency $\omega$ to this value, a fluctuation of the fractional amplitude of the oscillations can be observed. In Figure 3 we have plotted the frequency dependence of the real part of the response function $\alpha (\omega)$ calculated according to equation (\[alpha-1b\]) for $N_0=150 000$. The response function is given in arbitrary units, and frequency is in units of $\omega_{\rm ho}$. The dashed line shows the monopole resonance at $\Omega_{\rm M}$, whereas the other divergences of $\alpha(\omega)$ correspond to the temperature-induced resonances at $\omega_{R}$. From this figure one can see that the temperature-induced resonances are quite distinct from one another and from the monopole resonance. Therefore, temperature-induced resonances could be observed in experiments with good enough frequency resolution and good accuracy in the measurement of the radius fluctuations. We would like to stress that the phenomenon we have discussed is related to quite delicate features of the interaction between elementary excitations and, therefore, its observation would give rich information about the properties of Bose-Einstein condensed gases at finite temperature. Landau damping of collective modes ---------------------------------- From Fig.
1 one can see that the weak background transitions $\omega_{ik}$ have, generally speaking, very small frequency separation. To estimate this distance quantitatively, let us relabel the resonances by an index $i$ in order of increasing value of $\omega$. Then, one can define the average distance between resonances $\overline{\Delta \omega }$ according to: $$\overline{\Delta \omega} = \frac{\sum_{i} \gamma _i(\omega _{i+1}-\omega_i)}{\sum_{i} \gamma _i} \,. \label{av}$$ In a small interval around the collective oscillation, $0.82\, \Omega_{\rm M}< \omega_{ik}<1.18 \,\Omega_{\rm M} $, we sum up all the transition frequencies allowed by the monopole selection rules and find the following values for the average distance between two consecutive transition frequencies: $\overline{\Delta \omega} /\omega _{\rm ho}\simeq 0.0006, 0.001$ and $0.006$ for $N_0=150000$, $50000$ and $5000$, respectively. It is hopeless, of course, to try to resolve these resonances. Actually, there are reasons to believe that these resonances are smoothed out and overlap. First of all, a real trap cannot be exactly isotropic. This means that levels with different $m$ have slightly different energies; only levels with $m=\pm \mid m \mid$ are exactly degenerate. Therefore, each energy level with a given $l$ will be split into $l+1$ close sublevels, making the energy spectrum denser. Furthermore, all excitations at finite temperature have a finite lifetime. Excitations with $E \sim \mu $, which are the ones that mainly contribute to the “background” transitions, have the shortest lifetime. This can be accounted for phenomenologically by assuming that these levels have a finite Lorentzian width $\Delta $. That is, instead of delta functions in the equation for the damping rate (\[damping2\]), we will consider a Lorentzian distribution centered at $\omega_{ik}$ with a fixed width $\Delta$: $f_L(\omega_{ik},\Delta)=\frac{\Delta/(2 \pi \hbar)}{(\omega_{ik}-\Omega_{\rm osc})^2+\Delta^2/4}$. In this case the damping rate becomes a smooth function of $\Omega _{\rm osc}$, and its value at $\Omega_{\rm osc}=\Omega_{\rm M}$ defines the Landau damping of the monopole oscillations. Under the conditions $$\overline{\Delta \omega} \ll \Delta \ll \omega_{\rm ho}\,, \label{Del}$$ the damping rate will have only a weak dependence on the exact value of $\Delta $. In Figure 4 we plot the dimensionless damping rate $\gamma/\Omega_{\rm M} $ as a function of the Lorentzian width $\Delta $ (in units of $\omega_{\rm ho}$) for $N_0=50000$ at different temperatures. The summation in (\[damping2\]) has been done over all resonances, excluding of course the “strong resonances” presented in Figures 1, 2 and 3. One can see that the $\Delta $-dependence is indeed weak in the interval $\Delta/\omega_{\rm ho}=0.05$ to $0.2$, and $\gamma $ can be reliably extrapolated from this interval to the value $\Delta =0 $. We take this extrapolated value of $\gamma $ as the Landau damping. One can estimate the accuracy of this extrapolation procedure to be of the order of 10%, according to the change of $\gamma $ over this interval. In Figure 5 we plot the damping rate versus $k_B T/\mu$ for $N_0=150000$ and $50000$ atoms in the condensate. As expected, Landau damping increases with temperature since the number of excitations available at thermal equilibrium is larger when $T$ increases. One can distinguish two different regimes in Fig. 5: one at very low $T$ ($k_B T \ll \mu$) and the other at higher $T$.
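As a concrete illustration of the smoothing procedure described above, the short Python sketch below evaluates the weighted average spacing of Eq. (\[av\]) and a smoothed damping rate in which the delta functions of Eq. (\[damping2\]) are replaced by the Lorentzian $f_L$. The arrays `omega_ik` and `gamma_ik` are placeholder data (in the actual calculation they come from the Bogoliubov spectrum and the corresponding matrix elements), and overall prefactors are left schematic since Eq. (\[damping2\]) is not reproduced here.

``` python
import numpy as np

# Placeholder background-transition data (frequencies in units of omega_ho).
# These only illustrate the procedure; they are not the computed spectrum.
rng = np.random.default_rng(0)
omega_ik = np.sort(rng.uniform(1.8, 2.7, 400))
gamma_ik = rng.exponential(1e-4, omega_ik.size)

Omega_M = 2.231   # monopole (breathing-mode) frequency, units of omega_ho

# Weighted average spacing, Eq. (av), restricted to 0.82 Omega_M < omega < 1.18 Omega_M.
sel = (omega_ik > 0.82 * Omega_M) & (omega_ik < 1.18 * Omega_M)
w_sel, g_sel = omega_ik[sel], gamma_ik[sel]
avg_spacing = np.sum(g_sel[:-1] * np.diff(w_sel)) / np.sum(g_sel[:-1])
print("average spacing / omega_ho =", avg_spacing)

def lorentzian(w, w0, width):
    """Normalized Lorentzian of full width `width`, replacing delta(w - w0)."""
    return (width / (2.0 * np.pi)) / ((w - w0) ** 2 + width ** 2 / 4.0)

def damping_rate(Omega_osc, width):
    """Schematic smoothed damping rate: gamma_ik weighted by f_L (prefactors omitted)."""
    return np.sum(gamma_ik * lorentzian(Omega_osc, omega_ik, width))

# A weak dependence on the width is expected for avg_spacing << Delta << omega_ho;
# the Landau damping is read off by extrapolating this plateau to Delta -> 0.
for width in (0.05, 0.10, 0.20):
    print(f"Delta/omega_ho = {width:4.2f}  ->  gamma ~ {damping_rate(Omega_M, width):.3e}")
```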
The behaviour of the damping rate becomes linear at relatively small temperatures ($k_B T \sim \mu$), in contrast to the homogeneous system [@PLA], where this regime occurs at $k_B T \gg \mu$. Moreover, the damping rate increases for a larger number of condensed atoms because the density of states available to the system also increases. It is interesting to note that the order of magnitude of the damping rate is the same as the one previously estimated for a uniform gas [@Liu1; @PLA; @Liu2; @Giorgini1] and for anisotropic traps [@Fedichev1; @Fedichev2].

SUMMARY
=======

We have considered the monopole oscillation of a Bose-condensed dilute atomic gas in an isotropic trap. First of all, we have calculated the normal modes of the condensate by solving the time-dependent Gross-Pitaevskii equation within Bogoliubov theory [@PRA], and then we have used the formalism developed in Ref. [@PLA] to calculate the matrix elements associated with the transitions between excited states allowed by the monopole selection rules. Within first-order perturbation theory we have studied the Landau damping of collective modes due to the coupling with thermally excited levels. We have developed the response function formalism to study the fluctuations of the system due to an external perturbation. The contribution of the elementary excitations has also been introduced perturbatively, as in the calculation of the damping strength, and we have derived analytic equations for the response function at zero temperature and in the low-temperature regime. We have seen that when the condensate oscillates with the monopole mode and a small perturbation to the trap frequency is applied, one can excite new resonances at the transition frequencies. These temperature-induced resonances are coupled with the monopole due to interaction effects. One cannot exclude [*a priori*]{} the possibility of observing such resonances also in anisotropic traps. This problem deserves further investigation. Observation of these resonances would give important and unique information about the interaction between elementary excitations in Bose-Einstein condensed gases. We thank F. Dalfovo, P. Fedichev and S. Stringari for helpful discussions. M.G. thanks the Istituto Nazionale per la Fisica della Materia (Italy) for financial support.

D.S. Jin, M.R. Matthews, J.R. Ensher, C.E. Wieman and E.A. Cornell, Phys. Rev. Lett. [**78**]{} (1997) 764. M.-O. Mewes, M.R. Anderson, N.J. van Druten, D.M. Kurn, D.S. Durfee, C.G. Townsend and W. Ketterle, Phys. Rev. Lett. [**77**]{} (1996) 988. W.V. Liu and W.C. Schieve, cond-mat/9702122 preprint. L.P. Pitaevskii and S. Stringari, Phys. Lett. A [**235**]{} (1997) 398. W.V. Liu, Phys. Rev. Lett. [**79**]{} (1997) 4056. S. Giorgini, Phys. Rev. A [**57**]{} (1998) 2949. P.O. Fedichev, G.V. Shlyapnikov, J.T.M. Walraven, Phys. Rev. Lett. [**80**]{} (1998) 2269. P.O. Fedichev, G.V. Shlyapnikov, Phys. Rev. A [**58**]{} (1998) 3146. N.N. Bogoliubov, J. Phys. USSR [**11**]{}, 23 (1947). E.P. Gross, Nuovo Cimento [**20**]{}, 454 (1961); E.P. Gross, J. Math. Phys. [**4**]{}, 195 (1963). L.P. Pitaevskii, Zh. Eksp. Teor. Fiz. [**40**]{}, 646 (1961) \[Sov. Phys. JETP [**13**]{}, 451 (1961)\]. W. Ketterle, D.S. Durfee, and D.M. Stamper-Kurn, in [*Proceedings of Int. School E. Fermi*]{}, Varenna 1998, cond-mat/9904034. M. Edwards, P. A. Ruprecht, K. Burnett, R. J. Dodd, and C. W. Clark, Phys. Rev. Lett. [**77**]{}, 1671 (1996); P. A. Ruprecht, Mark Edwards, K. Burnett, and Charles W. Clark, Phys. Rev. A [**54**]{}, 4178 (1996); M. Edwards, R.
J. Dodd, C. W. Clark, and K. Burnett, J. Res. Natl. Inst. Stand. Technol. [**101**]{}, 553 (1996). K.G. Singh and D.S. Rokhsar, Phys. Rev. Lett. [**77**]{}, 1667 (1996) B.D. Esry, Phys. Rev. A [**55**]{}, 1147 (1997) D.A.W. Hutchinson, E. Zaremba, and A. Griffin, Phys. Rev. Lett. [**78**]{}, 1842 (1997) L. You, W. Hoston, and M. Lewenstein, Phys. Rev. A [**55**]{}, R1581 (1997) F. Dalfovo, S. Giorgini, M. Guilleumas, L.P. Pitaevskii and S. Stringari, Phys. Rev. A [**56**]{} (1997) 3840. S. Stringari, Phys. Rev. Lett. [**77**]{} (1996) 2360. When $Na/a_{\rm ho}$ is large one can use the hydrodynamic approximation for the functions $u$ and $v$ of the low-energy excitations \[see L.P. Pitaevskii, [*Recent Progress in Many-Body Theories*]{}, Ed. D. Nielson and R. Bishop, World Scientific (Singapore, 1998), p.3\]. However, we have found that the matrix elements $A_{ik}$, defined in section III, are quite sensitive to the accuracy of these functions. Hydrodynamics can give values for $\gamma_{ik}$ an order of magnitude smaller than the values calculated numerically by using the Bogoliubov functions (\[coupled1\],\[coupled2\]). On the contrary, it is completely safe to use hydrodynamics to calculate the response function at zero temperature $\alpha _0(\omega)$ (see section IV), since the difference between hydrodynamics and exact values of the monopole frequency $\Omega_{\rm M}$ is very small. Beliaev decay of an elementary excitation into a pair of excitations \[S.T. Beliaev, Soviet Phys. JETP [**34**]{} (1958) 323\] is not active for the lowest energy modes in the case of trapping potential because of the discretization of levels. L.D. Landau and E.M. Lifshitz, [*Statistical Physics, Course of theoretical Physics*]{} (Vol. 5), Pergamon Press (1970). L.P. Pitaevskii, Phys. Lett. A [**229**]{} (1997) 406. F. Dalfovo, C. Minniti and L.P. Pitaevskii, Phys. Rev. A [**56**]{} (1997) 4855. F. Dalfovo, S. Giorgini, L.P. Pitaevskii and S. Stringari, Rev. Mod. Phys. [**71**]{} (1999) 463. A.A. Abrikosov, L.P. Gorkov and I. Ye. Dzyaloshinskii, [*Quantum Field Theoretical Methods in Statistical Physics*]{} (Pergamon Press, 1965) S. Giorgini, L.P. Pitaevskii and S. Stringari, J. Low Temp. Phys. [**109**]{} (1997) 309. In this paper we do not include the so-called Popov’s self-consistent correction in equation (\[TDGP\]). See, for example, Ref. [@Giorgini3]. We have neglected all the transitions that involve the lowest dipole mode ($l=1$, $n=0$) because in an external harmonic potential this mode, corresponding to the oscillation of the center of mass, is unaffected by the interatomic forces and then the transition probability due to interaction effects (\[prob\]) must be zero [@Stringari; @Stoof]. However, the perturbation formalism we have presented in section III does not take into account this physical consideration automatically and therefore, we have omitted by hand all transitions with the dipole mode. H. Stoof, J. Low Temp. Phys. , [**114**]{}, 11 (1999). 
  -------------- -------------- ---------------
   $\omega_{R}$   $\gamma_{R}$   $|A_{1}/A_0|$
   1.9115         0.009268       0.063
   2.0252         0.004147       0.063
   2.1432         0.002102       0.127
   2.2576         0.001097       0.298
   2.3655         0.000545       0.020
  -------------- -------------- ---------------

  : Damping coefficients $\gamma_R$ (in units of $\omega_{\rm ho}$) of the “strong resonances” $\omega_{R}$ (in units of $\omega_{\rm ho}$) and relative intensities $|A_{1}/A_0|$ between the temperature-induced and the monopole resonances, for $N_0=150000$ condensate atoms of $^{87}$Rb in a spherical trap with $a_{\rm ho}=0.791 \times 10^{-4}$ cm at $k_B T = \mu$. \[table1\]
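For illustration, the pole approximations $\alpha_0(\omega)\simeq A_0/(\Omega_{\rm M}-\omega)$ and $\alpha_1(\omega)\simeq A_{1}/(\omega_{R}-\omega)$ can be combined with the entries of Table 1 to sketch the real part of the response function near the monopole, in the spirit of Fig. 3. The minimal Python snippet below does this in units of $A_0$; since Table 1 only lists $|A_{1}/A_0|$, the signs of the pole strengths are taken positive here purely for illustration.

``` python
import numpy as np

Omega_M = 2.234                                                # monopole frequency (units of omega_ho)
omega_R = np.array([1.9115, 2.0252, 2.1432, 2.2576, 2.3655])   # resonances, Table 1
A1_over_A0 = np.array([0.063, 0.063, 0.127, 0.298, 0.020])     # |A_1/A_0|, Table 1

def re_alpha(omega):
    """Real part of the response near the monopole, in units of A_0:
    one pole at Omega_M plus the five temperature-induced resonances."""
    return 1.0 / (Omega_M - omega) + np.sum(A1_over_A0 / (omega_R - omega))

# Scan the perturbation frequency: divergences occur at Omega_M and at each omega_R.
for w in np.arange(1.90, 2.40, 0.05):
    print(f"omega/omega_ho = {w:4.2f}   Re[alpha]/A_0 = {re_alpha(w):+9.2f}")
```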
---
abstract: 'For a given lattice, we establish an equivalence involving a closed zone of the corresponding Voronoi polytope, a lamina hyperplane of the corresponding Delaunay partition and a rank 1 quadratic form being an extreme ray of the corresponding $L$-type domain.'
author:
- |
    Michel Deza\
    Ecole Normale Supérieure, Paris\
- |
    Viatcheslav Grishukhin\
    CEMI, Russian Academy of Sciences, Moscow
title: 'Rank 1 forms, closed zones and laminae'
---

An $n$-dimensional lattice determines two normal partitions of the $n$-space ${\bf R}^n$ into polytopes. These are the Voronoi partition and the Delaunay partition. These partitions are dual, i.e. a $k$-dimensional face of one partition is orthogonal to an $(n-k)$-dimensional face of the other partition. Besides, a vertex of one partition is the center of a polytope of the other partition. The Voronoi partition consists of Voronoi polytopes with their centers at lattice points. Moreover, any polytope of the Voronoi partition is obtained by a translation of the Voronoi polytope with center at the origin (=the zero lattice point). Call this polytope [*the*]{} Voronoi polytope. It consists of those points of ${\bf R}^n$ that are at least as close to 0 as to any other lattice point. The Delaunay partition consists of Delaunay polytopes which are, in general, not congruent. The set of all Delaunay polytopes having 0 as a vertex is called the [*star*]{} of Delaunay polytopes. Each Delaunay polytope is the convex hull of all lattice points lying on an [*empty*]{} sphere. This sphere is called empty, since no lattice point is an interior point of the sphere. The Voronoi polytope and the Delaunay polytopes of the star are tightly related to minimal vectors of the cosets of $2L$ in $L$. A coset $Q$ is called [*simple*]{} if it contains, up to sign, only one minimal vector. For a Delaunay polytope $P_D$, the lattice vector between any two vertices of $P_D$ is a minimal vector of a coset of $L/2L$. A lattice vector is an edge of a Delaunay polytope of the star (and then, by duality, it defines a facet of the Voronoi polytope) if and only if it is the minimal vector of a simple coset of $L/2L$. All minimal vectors of a non-simple coset are diagonals of a symmetric face of a Delaunay polytope of the star. The set ${\cal P}(P)$ of all faces of all dimensions of a polytope $P$ is partially ordered by inclusion. Call it the [*face poset of $P$*]{}. The face poset of the Voronoi polytope $P_V$ of a lattice $L$ determines uniquely the combinatorial structure of $P_V$ and the [*$L$-type*]{} of the Voronoi and Delaunay partitions. The notion of an $L$-type was introduced by G. Voronoi in [@Vo]. One says that a lattice $L$ [*belongs to*]{} or [*is of*]{} an $L$-type if its Voronoi partition has this $L$-type. In other words, two lattices (and their Voronoi and Delaunay partitions) belong to the same $L$-type if the corresponding partitions are combinatorially and topologically equivalent, or, equivalently, if the face posets of their Voronoi polytopes are isomorphic. If we reverse the order of ${\cal P}(P_V)$, we obtain the poset of those faces of Delaunay polytopes of the star that contain the point 0. There are small perturbations of a basis of $L$ that do not change the $L$-type of $L$. Suppose that a perturbation of the basis changes the $L$-type. Then the Delaunay partition changes, and there is an empty sphere such that a lattice point either leaves or comes onto the sphere. There is the following simple but important test of emptiness of a sphere.
Let $S \subset {\bf R}^n$ be an $(n-1)$-dimensional sphere with the origin point 0 on it. Let $v_1,v_2,...,v_n$ be $n$ linearly independent lattice vectors with endpoints on $S$. Let $u \in {\bf R}^n$ be an arbitrary vector and $u=\sum_{i=1}^nz_iv_i$. (We denote by $(p,q)$ the scalar product of vectors $p$ and $q$, and set $p^2=(p,p)$). \[uS\] [(Proposition 4 of [@BG]).]{} The endpoint of $u$ is not an interior point of $S$ if and only if the following inequality holds $$\label{uv} u^2 \ge \sum_{i=1}^n z_iv_i^2.$$ The endpoint of $u$ lies on $S$ if and only if (\[uv\]) holds as equality. [**Proof**]{}. Let $c$ be the center of $S$. Since the endpoint of $v_i$ lies on $S$, we have $(v_i-c)^2=c^2$, $1 \le i \le n$, i.e. $v_i^2=2(v_i,c)$. Multiplying this equality by $z_i$, summing over $i$ and taking in attention that $u=\sum z_iv_i$, we obtain $$\sum_{i=1}^nz_iv_i^2=2(u,c).$$ Since the endpoint of $u$ is not an interior point of $S$, $(u-c)^2\ge c^2$, i.e. $u^2\ge 2(u,c)$. Using the above equality, we obtain (\[uv\]). It is easy to see that (\[uv\]) holds as equality if and only if the inequality $u^2\ge 2(u,c)$ holds as equality, i.e. if and only if the endpoint of $u$ lies on $S$. $\Box$ Any basis ${\cal B}=\{b_i,1\le i\le n\}$ of an $n$-dimensional lattice $L$ determines uniquely a positive definite quadratic form $$f(x)=(\sum_1^n b_ix_i)^2=\sum_{1 \le i,j \le n}a_{ij}x_ix_j.$$ The symmetric matrix $a_{ij}$ of the coefficients of this form is the Gram matrix of the basis $\cal B$, i.e. $a_{ij}=(b_i,b_j)$. The matrix $a_{ij}$ can be considered as a point of an $N$-dimensional space, where $N={n+1 \choose 2}$. In this space, all positive definite forms form an open cone. The closure of this cone is the cone ${\cal P}_n$ of all positive semi-definite quadratic forms of order $n$. One says that a quadratic form belongs to or is of an $L$-type if its lattice belongs to this $L$-type. Hence the cone ${\cal P}_n$ is partitioned into $L$-type domains of forms of the same $L$-type. Of course, the cone ${\cal P}_n$ has many domains of the same $L$-type corresponding to distinct choices of a basis. Voronoi proved that each $L$-type domain is an open polyhedral cone of dimension $k$, $1 \le k \le N$. An $N$-dimensional $L$-type domain is called [*general*]{}. Domains of other dimensions are called [*special*]{}. Any face of the closure of a general $L$-type domain is the closure of a special $L$-type domain. One-dimensional $L$-type domains are extreme rays of the closure of a general $L$-type domain. Call a form an [*edge form*]{} if it belongs to a one-dimensional $L$-type domain. In [@BG], an edge form is called [*rigid form*]{}, since the only transformation of the corresponding lattice that does not change its $L$-type is a scaling. A typical edge form is the square of a linear form: $f(x)=(\sum_{i=1}^n p_ix_i)^2$, i.e. a rank 1 form. But, for $n \ge 4$, there are edge forms of full rank $n$. A polyhedral domain of quadratic forms is called [*dicing domain*]{} if all extreme rays of its closure are forms of rank 1. Dicings were defined and studied by Erdahl and Ryshkov [@ER]. In [@ER], they give conditions when a dicing domain is an $L$-type domain and prove the following theorem (Theorem 4.3 of [@ER]): [*An $L$-type domain is a dicing domain if and only if all the edge forms are rank 1 forms*]{}. Return to the Voronoi polytope $P_V$ of an $n$-dimensional lattice $L$. The Voronoi polytope $P_V$ itself and its facets (i.e. 
faces of dimension $n-1$) are centrally symmetric, so as its vertices and edges. But faces of other dimensions are, in general, not centrally symmetric. For example, there are many types of two-dimensional faces: hexagons, and others that are degenerated cases of a hexagon when some of its edges are compressed to a point. The set of edges of $P_V$ is partitioned into classes of mutually parallel edges. These classes are called [*zones*]{}. There are two types of zones: closed and open. A zone is called [*closed*]{} if every two-dimensional face contains either two edges of the zone or else none. Otherwise the zone is called [*open*]{}. The notions of closed and open zones was introduced by P.Engel in [@En]. A closed zone has the following property. Let $l$ be the minimal length of edges of a closed zone $Z$. Let us shorten all edges of $Z$ onto a value $\varepsilon \le l$. If $\varepsilon<l$, then $Z$ remains closed, and the new polytope $P'_V$ (with shortened edges) is a Voronoi polytope with the same face poset as $P_V$. If $\varepsilon= l$, then $Z$ transforms into an open zone, and $P'_V$ has another face poset, since at least one edge vanishes. Since the Voronoi partition is dual to the Delaunay partition, each edge of a Voronoi polytope is orthogonal to a facet of a Delaunay polytope. A facet $F$ of a Delaunay polytope of a lattice $L$ generates an affine hyperplane $H$ in ${\bf R}^n$, namely the hyperplane, where $F$ lies. Obviously $F$ contains $n$ affinely independent lattice points. Hence the intersection $L\cap H$ is an $(n-1)$-dimensional sub-lattice of $L$. The Delaunay partition of $L$ generates a partition of the hyperplane $H$ into Delaunay polytopes of the lattice $L\cap H$. It may be that all $(n-1)$-dimensional Delaunay polytopes of the partition of $H$ are facets of Delaunay polytopes of the original Delaunay partition of $L$. In this case $H$ is called a [*lamina*]{} of the lattice $L$. The notion of lamina was introduced and extensively used by Ryshkov and Baranovskii in [@RB] (see §9.4). If $L$ belongs to a general $L$-type and if a hyperplane $H$ is not a lamina, then it intersects in an interior point an edge of at least one Delaunay polytope $P_D$ of the star (Lemma 9.3 of [@RB]). In other words, there are two vertices of $P_D$ that lie in distinct half-spaces determined by $H$. We reformulate Lemma 9.3 of [@RB] for a lattice of an arbitrary $L$-type. \[lRB\] A hyperplane is a lamina if and only if it does not intersect any Delaunay polytope of the star in an interior point. Obviously, a lamina determines a family of parallel laminae that partitions the lattice $L$ into parallel layers, each of them spanning a lamina. Lemma \[lRB\] implies the following corollary. [**Corollary**]{} [*Every Delaunay polytope of a lattice with a lamina lies between two neighboring laminae with vertices on these two laminae.*]{} The main property of this partition of $L$ into layers, spanning laminae, is that the distances between layers may be changed without changing the $L$-type of $L$. Let $H$ be a hyperplane spanning an $(n-1)$-dimensional sub-lattice of $L$. Let $e$ be a unit vector orthogonal to $H$. Then we can define an $\epsilon$-extension along $e$ of the space and of the lattice $L$ as follows. Any vector $v \in {\bf R}^n$ is uniquely decomposed as $v=v_e+v_H$, where $v_e=(e,v)e$ and $v_H$ are the projections of $v$ onto $e$ and $H$, respectively. 
An $\epsilon$-extension of ${\bf R}^n$ along $e$ transforms every vector $v$ into the vector $$\label{ext} v'=(1+\epsilon)v_e+v_H=\epsilon(e,v)e+v.$$ Here the $\epsilon$-extension is in fact a contraction if $\epsilon<0$. In particular, for the norm (=squared length) ${v'}^2$ of the extended vector, we obtain the following expression, where we set $\lambda=\epsilon(2+\epsilon)$: $$\label{v2} {v'}^2=v^2+\lambda(e,v)^2.$$ Of course, an $\epsilon$-extension of ${\bf R}^n$ along $e$ transforms a lattice $L \subset {\bf R}^n$ into an [*extended lattice*]{} $L^{\epsilon}$. There is the following relation between the above introduced notions of a rank 1 form, a closed zone, a lamina and the extended lattice. Let $L$ be an $n$-dimensional lattice, $H$ be a hyperplane spanning an $(n-1)$-dimensional sub-lattice of $L$, $e$ be an $n$-dimensional unit vector orthogonal to $H$. Let $f(x)$ be the quadratic form corresponding to a basis $\{b_i:1 \le i \le n\}$ of $L$ and $D(f)$ be the $L$-type domain of $f$. The following assertions are equivalent: \(i) $H$ is a lamina of the Delaunay partition of $L$; \(ii) the Voronoi polytope $P_V$ of $L$ has a closed zone $Z_e$ of edges parallel to the vector $e$; \(iii) the $\epsilon$-extended along $e$ lattice $L^{\epsilon}$ has the same $L$-type as $L$ for all $\epsilon>0$; \(iv) the rank 1 form $f_e(x)=(e,\sum_1^n b_ix_i)^2$ lies on an extreme ray of the closure of $D(f)$, i.e. $f+\lambda f_e \in D(f)$ for all nonnegative $\lambda$. [**Proof**]{}. (i)$\Rightarrow$(ii). Let $H$ be a lamina partitioned into facets of Delaunay polytopes of $L$. We can suppose that $H$ contains the origin 0. Consider the edges of the Voronoi polytope $P_V$ that are orthogonal to the lying in $H$ facets of the star. Obviously these edges are parallel to $e$ and form the zone $Z_e$. We show that $Z_e$ is closed. If not, there is a 2-face $T$ of $P_V$ containing exactly one edge $u_1 \in Z_e$. The edges of $T$ form a polygon. Let $u_1, u_2,...,u_k$ be consecutive edges of this polygon. Let $F_i$ be the facet of the star that is orthogonal to the edge $u_i$, $1 \le i \le k$. The set of facets $\{F_i:1 \le i \le k\}$ has a common $(n-2)$-dimensional face of the star that is orthogonal to $T$ and lies in the lamina $H$. Since, for $2\le i \le k$, the edge $u_i$ is not parallel to $u_1$, the facet $F_i$ does not lie in the lamina $H$. Hence there is an index $j$ such that the facets $F_j$ and $F_{j+1}$ lie in distinct halfspaces separated by $H$. Obviously, $F_j$ and $F_{j+1}$ are facets of a same Delaunay polytope $P_j$ of the star, and $H$ intersects $P_j$. This contradicts to definition of a lamina. Hence $Z_e$ cannot be open. (ii)$\Rightarrow$(i). Let $u_1 \in Z_e$. Consider the facet $F_1$ of the star that is orthogonal to $u_1$ and contains $0 \in L$. $F_1$ spans a hyperplane $H$ that is orthogonal to $e$ and contains $0 \in L$. Let $T_1$ be a 2-face of $P_V$ containing $u_1$, and $u_2$ be the second edge from $Z_e$ contained in $T_1$. Let $F_2$ be the facet of the star that is dual to $u_2$. $F_2$ intersects $F_1$ by an $(n-2)$-face that is dual to $T_1$ and contains 0. Hence $F_2$ contains 0 and is orthogonal to $e$. This implies that $F_2$ lies in $H$. Similarly, for $i=3,4,...$, we consider the 2-face $T_i$ containing $u_i,u_{i+1} \in Z_p$, and prove that the facet $F_i$ of the star dual to $u_i$ lies in $H$. Since $P_V$ is a polytope, it has a finite number of edges. Hence there is $i_0$ such that $u_{i_0}=u_1$. 
We obtain a set of facets of the star lying in $H$ such that the facet $F_i$ intersects $F_{i-1}$ and $F_{i+1}$. Therefore the intersection of the star with $H$ consists of facets of the star. Since this is true for all Voronoi polytopes having centers in $H$, the hyperplane $H$ is partitioned into facets of Delaunay polytopes. This means that $H$ is lamina. (i)$\Rightarrow$(iii). It is sufficient to prove that no lattice point comes onto or leaves the empty sphere $S$ of a Delaunay polytope $P_D$. Without loss of generality, we can suppose that $P_D$ belongs to the star. Let $v_1, v_2,...,v_n$ be $n$ linearly independent lattice vectors with endpoint in vertices of $P_D$, i.e. they lie on the sphere $S$. Let $u$ be any lattice vector and let $u=\sum_{i=1}^nz_iv_i$ be its decomposition by $v_i$, $1 \le i \le n$. According to (\[ext\]), after an $\epsilon$-extension of the space along $e$, $u$ and $v_i$ are transformed into the vectors $$u'=u+\epsilon (u,e)e, \mbox{ }v'_i=v_i+\epsilon (v_i,e)e.$$ By Lemma \[uS\], it is sufficient to prove that the inequality ${u'}^2 \ge \sum_{i=1}^n z_i{v'_i}^2$ is strict or is an equality according to the inequality $u^2 \ge \sum_{i=1}^n z_iv_i^2$ is strict or is an equality. Recall that $H$ contains an ($n-1$)-dimensional sub-lattice. Hence $(v,e)=k(v)\alpha$ for any lattice vector $v$, where $k(v)$ is an integer and $\alpha$ does not depend on $v$. Let $k=k(u)$ and $k_i=k(v_i)$. Multiplying the equality $u=\sum_{i=1}^nz_iv_i$ by $e$, we obtain the equality $k=\sum_{i=1}^nz_ik_i$. By Corollary of Lemma \[lRB\], the vertices of $P_D$ lie on two neighboring laminae. Hence $|k_i|=0,1$. Without loss of generality we can suppose that $k_i\ge 0$, i.e. $k_i=0,1$ and therefore $(v_i,e)^2=k_i^2=k_i$. Hence, using (\[v2\]), we have $${v'_i}^2=v_i^2+\lambda (v_i,e)^2=v_i^2+\lambda k_i \alpha^2.$$ Since $\sum_{i=1}^n z_ik_i=k$, we obtain the equality $$\label{uzv} \sum_{i=1}^nz_i{v'_i}^2=\sum_{i=1}^n z_iv_i^2+ \lambda \alpha^2 \sum_{i=1}^n z_ik_i= \sum_{i=1}^n z_i v_i^2+\lambda\alpha^2 k.$$ Suppose that $u^2>\sum_{i=1}^n z_iv_i^2$. We show that then $\sum_1^n z_i{v'_i}^2<{u'}^2=u^2+\lambda(u,e)^2=u^2+\lambda \alpha^2 k^2$. For an integer $k$, we have $k \le k^2$ with equality if and only if $k=0,1$. The above inequalities and the equality (\[uzv\]) imply $$\sum_1^n z_i{v'_i}^2<u^2+\lambda \alpha^2 k^2={u'}^2.$$ Now let $u^2=\sum_1^n z_iv_i^2$. This means that $u$ has the endpoint in a vertex of $P_D$. Hence $k=0,1$, i.e. $k^2=k$. Hence the equality (\[uzv\]) implies the equality $$\sum_1^n z_i{v'_i}^2=u^2+\lambda \alpha^2 k^2={u'}^2.$$ (iii)$\Rightarrow$(iv). We prove in the implication (i)$\Rightarrow$(iii) that $f(L^{\epsilon}) \in D(f)$ for every $\epsilon \ge 0$. Now we show that $$f(L^{\epsilon})=f(L)+\lambda f_e,$$ where $\lambda=\epsilon(2+\epsilon)$. The basic vectors of the extended lattice $L^{\epsilon}$ have the form $$b'_i=b_i+\epsilon(e,b_i)e, \hspace{3mm}1 \le i \le n.$$ Hence the coefficients $a'_{ij}$ of the quadratic form $f^{\epsilon}=f(L^{\epsilon})$ are as follows $$a'_{ij}=(b'_i,b'_j)=(b_i,b_j)+\lambda(e,b_i)(e,b_j).$$ Hence we obtain $$f^{\epsilon}(x)=f(x)+\lambda(\sum_{i=1}^n(e,b_i)x_i)^2= f(x)+\lambda f_e(x).$$ This means that the ray $\{\lambda f_e: \lambda \ge 0\}$ belongs to the closure of $D(f)$. Since this ray is a one-dimensional $L$-type domain, it is an extreme ray of cl$D(f)$. (iv)$\Rightarrow$(iii). 
If $f_e$ lies on an extreme ray of cl$D(f)$, then the quadratic function $f^{\epsilon}=f+\epsilon(2+\epsilon)f_e$ belongs to $D(f)$ for all $\epsilon \ge 0$. The matrix $a'_{ij}$ of $f^{\epsilon}$ is $$a_{ij}+\epsilon(2+\epsilon)(e,b_i)(e,b_j)=(b'_i,b'_j).$$ Hence $f^{\epsilon}$ is a quadratic form of the $\epsilon$-extended lattice $L^{\epsilon}$. So, $L^{\epsilon}$ has the same $L$-type as $L$ for all $\epsilon \ge 0$. (iii)$\Rightarrow$(i). We show that if the extended along $e$ lattice $L^{\epsilon}$ has the same $L$-type as $L$ for all $\epsilon>0$, then the hyperplane $H$ which is orthogonal to $e$ is a lamina. Suppose $H$ is not a lamina. Then $H$ intersects a Delaunay polytope $P_D$ of the star in an interior point. Hence there are two vectors $v_1$ and $v_2$ with endpoints in vertices of $P_D$ such that $k_1=k(v_1)\ge 1$ and $k_2=k(v_2) \le -1$. Consider the lattice vector $$u=q(k_1v_2-k_2v_1),$$ where the integer $q$ is chosen such that the endpoint of $u$ does not lie on the empty sphere $S$ circumscribing $P_D$. Expand the pair of vectors $v_1$, $v_2$ up to a set of $n$ independent vectors with endpoints in vertices of $P_D$. Then the above expression for $u$ is the representation of $u$ as a linear combination of these $n$ independent vectors. Since the endpoint of $u$ does not lie on $S$, by Lemma \[uS\], $\Delta \equiv u^2-q(k_1v_2^2-k_2v_1^2)>0$. Consider an $\epsilon$-extension of the space along $e$. Let $u'$, $v'_1$, $v'_2$ be the $\epsilon$-extended vectors. Consider the difference $\Delta'={u'}^2-q(k_1{v'_2}^2-k_2{v'_1}^2)$. Using (\[v2\]), we obtain $$\Delta'=\Delta+\lambda(e,u)^2-q(k_1\lambda(e,v_2)^2- k_2\lambda(e,v_1)^2).$$ Since $(e,u)=\alpha k(u)=0$, $(e,v_1)=\alpha k_1$, $(e,v_2)=\alpha k_2$ and $k_2<0$, we have $$\Delta'=\Delta-\lambda qk_1 |k_2|(|k_2|+k_1).$$ Let $p=qk_1 |k_2|(k_1+|k_2|)>0$. Then for $\lambda=\frac{\Delta}{p}$, we obtain $\Delta'=0$. Lemma \[uS\] implies that, for this $\lambda$, the endpoint of $u$ lies on $S$. This means that there is $\epsilon>0$ such that the $\epsilon$-extended lattice $L^{\epsilon}$ has the $L$-type distinct from the $L$-type of $L$. We obtain a contradiction. $\Box$ . The rank 1 function $f_e(x)=(e,\sum_1^n b_ix_i)^2$ depends on the basis corresponding to the function $f(x)$. But this dependence is not essential in the following sense. Recall that $(e,b_i)=\alpha k(b_i)$, where $k_i=k(b_i)$ is an integer. If we slightly move $f(x)$ in the domain $D(f)$, then we slightly change the basis ${\cal B}=\{b_i:1 \le i \le n\}$ and the scalar products $(e,b_i)= \alpha k_i$. But, since $k_i$ is an integer, it cannot change slightly. Hence only $\alpha$ slightly changes. This implies that this movement of $f(x)$ inside of the $L$-type domain $D(f)$ causes a movement of $f_e(x)=\alpha^2 (\sum_1^n k_i x_i)^2$ along the ray $\{\lambda(\sum_1^n k_i z_i)^2: \lambda \ge 0 \}$. The collections of the integers $\{k_i:1\le i \le n\}$ is an invariant of the $L$-type domain $D(f)$. . The equivalence (i)$\Leftrightarrow$(iv) of Theorem 1 is mentioned in the paper [@ER]. After the proof of Theorem 4.3, the authors of [@ER] write: > The ideas used in this proof can be extended to cover the case in which only a portion of the edge forms have rank 1. For such an $L$-type domain each rank 1 edge form can be associated with a $D$-family of parallel hyperplanes $G$, and the $L$-partitions $\cal S$ of lattices on this domain are refinement of the partition determined by $G$. 
> In the other direction, any hyperplane which does not intersect the interior of any $L$-polytope of an $L$-partition can be associated with a rank 1 edge form of the corresponding $L$-type domain. Such hyperplanes are members of a $D$-family of parallel hyperplanes.

Note that here an $L$-partition and an $L$-polytope mean a Delaunay partition and a Delaunay polytope, and a $D$-family is a family of parallel laminae. In fact, the authors assert that the ideas used in the proof of Theorem 4.3 of [@ER] can be extended to a proof of the equivalence (i)$\Leftrightarrow$(iv). But it seems to us that the proof given above is not complete.

E.P. Baranovskii, V.P. Grishukhin, [*Non-rigidity degree of a lattice and rigid lattices*]{}, 2000 (submitted).

P. Engel, [*Investigations of parallelohedra in ${\bf R}^d$*]{}, in: P. Engel, H. Syta eds., Voronoi’s impact on modern science, Institute of Mathematics, Kyiv 1998, vol. 2, 22–60.

R.M. Erdahl, S.S. Ryshkov, [*On lattice dicing*]{}, Europ. J. Combinatorics [**15**]{} (1994) 459–481.

S.S. Ryshkov, E.P. Baranovskii, [*C-types of n-dimensional lattices and 5-dimensional primitive parallelohedra (with application to the theory of covering)*]{}, Trudy of Steklov’s Mathematical Institute, vol. 133, 1976. (Translated as: Proceedings of the Steklov Institute of Mathematics 1978, No. 4.)

G.F. Voronoi, [*Nouvelles applications des paramètres continus à la théorie de formes quadratiques - Deuxième mémoire*]{}, J. für die reine und angewandte Mathematik, [**134**]{} (1908) 198–287, [**136**]{} (1909) 67–178.
---
abstract: 'The possibility for detuned spins to display synchronous oscillations in local observables is analyzed in the presence of collective dissipation and incoherent pumping. We show that there exist two distinct mechanisms that can give rise to synchronization, that is, subradiance and coalescence. The former, known as transient synchronization, is here generalized in the presence of pumping and is due to long-lasting coherences. In the same set-up, even if under different conditions, coalescence and exceptional points are found which can lead to regimes where the relevant Liouvillian sectors have a single oscillation frequency. We show that synchronization can be established after steady phase-locking occurs. Distinctive spectral features of synchronization by subradiance and by coalescence are reported for two-time correlations.'
author:
- Albert Cabot
- Gian Luca Giorgi
- Roberta Zambrini
title: 'Synchronization and coalescence in a dissipative two-qubit system'
---

Introduction {#sec1}
============

Open quantum systems exhibit features beyond dissipation of energy and decoherence that cannot generally be found in the absence of losses [@Breuer]. An example studied in the last decade is spontaneous synchronization emerging among different interacting quantum systems, which reach a synchronized dynamics determined by the coupling to some external environments [@SyncRev1]. Different approaches have been proposed to define and describe this phenomenon in the quantum regime, considering a variety of systems such as harmonic oscillators [@Giorgi1; @manzano; @cabot_npj], spins [@praspins], biological [@olaya] or optomechanical [@mari; @marquardt; @Cabot_NJP] systems, quantum Van der Pol oscillators [@lee; @tilley1; @walter] or micromasers [@tilley2], and also exploring the effects of different system-bath configurations [@praprob; @Bellomo; @Cabot_PRL]. Synchronization signatures between mesoscopic ensembles of quantum systems have also been discussed in [@Holland1; @Holland2] using a mean-field approach. In particular, quantum synchronization can be induced by dissipation when time-scale separation occurs between the modes governing the dynamics [@SyncRev2], due to the presence of a dominant collective excitation. Depending on the lifetime of this excitation, this synchronization can either be observed in a transient regime prior to thermalization, or be found in the stationary dynamics in the presence of decoherence-free channels [@manzano; @cabot_npj; @dieter1; @dieter2]. From a mathematical point of view, when describing the open quantum system through a master equation, this dominant collective excitation emerges if one eigenvalue of the Liouvillian has a decay rate much smaller than any other eigenvalue. This analysis provides a clear criterion to predict transient synchronization, even if other scenarios can occur, such as the recently reported band synchronization [@Cabot_PRL], where a set of weakly damped eigenmodes are almost degenerate. Synchronization is then associated with the presence of a spectral gap that makes the long-time dynamics almost monochromatic. As reviewed in [@SyncRev1], it can be quantified either through temporal correlations of local observables or by directly looking at the properties of the Liouvillian spectrum.
Another very interesting phenomenon displayed by open systems is the existence of spectral singularities, the so-called exceptional points (EPs) [@heiss]: in such points, two or more eigenvalues, and their corresponding eigenvectors, simultaneously coalesce (i.e. one or more eigenvectors disappear) making the dynamics not diagonalizable. The presence of these singularities has been mainly studied, among other contexts, in the framework of $PT $-symmetric quantum mechanics [@bender] and non-Hermitian Hamiltonians [@El-Ganainy; @feng; @stefano; @miri; @ozdemir], nontrivial transmission and fluctuation spectra [@Cabot_EPL], anomalous decay dynamics [@Cabot_EPL; @Longhi1], characterization of topological materials [@ghatak], enhanced sensing [@chen; @stefano3]. The study of the dynamical behavior near EPs has attracted interest especially in integrated photonics [@peng; @miao; @stefano2; @hodaei], acoustics [@fleury; @ding; @shi], and optomechanics [@lu; @xu; @verhagen]. A common feature shared by transient synchronization and eigenvalue coalescence is to enable the reduction of the number of modes with different frequencies observed in the dynamics. The existence of common dynamical signatures, such as the presence of a single frequency in the temporal evolution of coupled systems, allows for the achievement of a synchronous dynamics in both cases. For instance, in Ref. [@Holland1], the dynamics of two detuned atomic clouds interacting with a cavity mode and externally pumped was studied at the mean-field level. The identified regime in which the system displays only one frequency is indeed an example of synchronization by coalescence, as we will discuss here. The aim of this work is to make a deep analysis and comparison between transient synchronization and eigenvalue coalescence, as defined above, in order to establish their relation and distinctive signatures. Both phenomena can be displayed in a simple system of two spins interacting through a common bath. By means of an explicit diagonalization of the Liouvillian superoperator governing the dynamics, we will be able to fully characterize the regimes where (some of) the eigenmodes can coalesce and compare them with the synchronization diagram, which can be drawn either looking at temporal correlations between local observables or at the presence of a gap in the Liouvillian spectrum. We will show that, at difference from transient synchronization, in presence of coalescence a monochromatic oscillation is present from the beginning. Nevertheless synchronization occurs after a transient anyway, as phase-locking emerges only when all (frequency-degenerate) eigenmodes but one have decayed out. This behavior is compared to the emergence of both frequency- and phase-locking in transient synchronization that has been shown to be due to long-lasting coherences between the ground and the subradiant eigenmode [@Bellomo]. The paper is organized as follows. In Sec. \[model\] we present the model of an open system of two coupled qubits. In Sects. \[EPs\] and \[sync\] we analyze the presence of EPs in the Liouvillian, and compare it with transient synchronization. The distinctive signatures of both phenomena in the correlation spectrum are analyzed in Sec. \[spectrum\], while the conclusions are presented in Sec. \[conclusions\]. Some mathematical details and supplemental results are presented in three appendices \[appA\], \[appB\], \[appC\]. 
The model {#model}
=========

We consider a dissipative system of two qubits described by the following Born-Markov master equation for their density matrix $\hat{\rho}$ ($\hbar=1$) $$\label{ME} \dot{\hat{\rho}}=-i[\hat{H},\hat{\rho}]+2\gamma\mathcal{D}[\hat{L}]+w(\mathcal{D}[\hat{\sigma}_1^+]+\mathcal{D}[\hat{\sigma}_2^+]),$$ where we have introduced dissipative superoperators in the Lindblad form [@Breuer] $\mathcal{D}[\hat{o}]=\hat{o}\hat{\rho}\hat{o}^\dagger-\hat{o}^\dagger\hat{o}\hat{\rho}/2-\hat{\rho}\hat{o}^\dagger\hat{o}/2$, the raising and lowering operators $\hat{\sigma}_j^\pm$ for spin $j=1,2$ are defined as usual from the Pauli matrices $\hat{\sigma}_j^{x,y,z}$, and $\hat{L}=(\hat{\sigma}_1^-+\hat{\sigma}_2^-)/\sqrt{2}$. The Hamiltonian part of this model reads $$\label{Ham} \hat{H}=\frac{\omega_1}{2}\hat{\sigma}_1^z+\frac{\omega_2}{2}\hat{\sigma}_2^z+s_{12}(\hat{\sigma}_1^-\hat{\sigma}_2^++\hat{\sigma}_1^+\hat{\sigma}^-_2),$$ and describes two detuned spins, with detuning $\delta=\omega_1-\omega_2$ and central frequency $\omega_0=(\omega_1+\omega_2)/2$, which interact coherently through the exchange term with rate $s_{12}$. Notice that two types of incoherent processes are taken into consideration: the qubits dissipate collectively through $\hat{L}$ with rate $2\gamma$, and a local incoherent pumping acts on each spin with rate $w$. Possible realizations of this phenomenological model can be found in systems of interacting two-level systems such as trapped atoms [@atoms2; @atoms3] and ions [@ions1], color centers in diamond [@diamond1] and superconducting qubits [@Wallraff1; @Wallraff2]. Collective dissipation can have different origins, for instance: the coupling to a common cavity mode in the bad-cavity limit [@Holland1], the coupling to a common structured bath [@Fernando; @Tudela], or to an effective 1D bath as in waveguides [@waveguideCB], photonic nanostructures [@Asenjo] or microwave transmission lines [@Blais]. Moreover, tailored local incoherent processes such as the incoherent pumping can be realized by addressing auxiliary energy levels of the spin system [@Cirac1; @Holland1]. An important remark on the parameter values is that we consider them to follow the hierarchy $\omega_0\gg \delta,\gamma,s_{12},w$ and $w,\delta,s_{12}\sim \gamma$, as is usually required for this kind of phenomenological model to have a microscopic origin [@Bellomo; @Marco1]. It is also important to notice that, depending on the microscopic origin of the model, some mutual dependencies between the values of the parameters might exist; however, in the spirit of exploring the full model, we do not consider these particular constraints in this work and allow the parameters to vary independently of each other.

Exceptional points in the Liouvillian {#EPs}
=====================================

For our purposes, it is convenient to describe the evolution of the two-spin density matrix within the Liouville formalism. Indeed, an isomorphism can be adopted which maps $\hat{\rho}$ into the $16$-dimensional vector ${\vert \rho \rrangle}$ and the Liouville super-operator into a $16\times 16$ matrix $\mathcal{L}$ [@Bellomo]. The time evolution of the density matrix can then be rewritten as a vector equation ${\vert \dot\rho \rrangle}=\mathcal{L}{\vert \rho \rrangle}$. How to explicitly build $\mathcal{L}$ is detailed in appendix \[appA\], where we generalize the results of [@Bellomo] to the case of incoherent driving (see also Ref. [@Marco2] for a general discussion about symmetries).
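As a complement to the analytical construction (deferred to appendix \[appA\] and not reproduced here), the Liouvillian and its spectrum can also be checked numerically. The following self-contained Python sketch builds the $16\times16$ matrix corresponding to Eq. (\[ME\]) by column-stacking vectorization and extracts the eigenvalues of the coherence sector simply by selecting those with imaginary part close to $-\omega_0$, rather than by the explicit block construction of the appendix; the parameter values are those of Fig. \[traj1\], for which a single eigenfrequency is expected in this sector.

``` python
import numpy as np

# Single-qubit operators in the basis {|e>, |g>}
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-
sp = sm.conj().T                                 # sigma^+
id2 = np.eye(2, dtype=complex)
id4 = np.eye(4, dtype=complex)

# Parameters of Fig. [traj1] (units gamma = 1):
# omega_0/gamma = 20, delta/gamma = 0.3, w/gamma = 0.1, s12/gamma = 0
gam, w, omega0, delta, s12 = 1.0, 0.1, 20.0, 0.3, 0.0
w1, w2 = omega0 + delta / 2, omega0 - delta / 2

H = 0.5 * w1 * np.kron(sz, id2) + 0.5 * w2 * np.kron(id2, sz) \
    + s12 * (np.kron(sm, sp) + np.kron(sp, sm))
L_col = (np.kron(sm, id2) + np.kron(id2, sm)) / np.sqrt(2)        # collective jump operator
jumps = [(2 * gam, L_col), (w, np.kron(sp, id2)), (w, np.kron(id2, sp))]

def dissipator(o):
    """Column-stacking superoperator of D[o] = o rho o^+ - {o^+ o, rho}/2."""
    odo = o.conj().T @ o
    return np.kron(o.conj(), o) - 0.5 * (np.kron(id4, odo) + np.kron(odo.T, id4))

Liouv = -1j * (np.kron(id4, H) - np.kron(H.T, id4))
for rate, o in jumps:
    Liouv += rate * dissipator(o)

evals = np.linalg.eigvals(Liouv)

# Coherence sector: eigenvalues whose imaginary part is close to -omega_0.
b_sector = evals[np.abs(evals.imag + omega0) < omega0 / 2]
print("decay rates      :", np.sort(b_sector.real))
print("eigenfrequencies :", np.sort(b_sector.imag))
print("distinct frequencies (up to rounding):",
      np.unique(np.round(b_sector.imag, 6)).size)
```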
This matrix is block diagonal, $\mathcal{L}=\bigoplus_\mu \mathcal{L}_\mu$, with $\mu \in\{a,b,c,d,e\}$, the different blocks being related to the dynamics of different observables (in appendix \[appA\] we give the explicit expressions of such matrices). For instance, the dynamics of populations $\langle \hat{\sigma}^z_j\rangle$ is entirely described by $\mathcal{L}_a$, while the dynamics of coherences $\langle \hat{\sigma}^{x,y}_j\rangle$ by $\mathcal{L}_b$ and $\mathcal{L}_c=\mathcal{L}_b^*$. In the study of synchronization we focus on the oscillatory dynamics of the coherences, and thus the analysis of the eigenspectrum of $\mathcal{L}_b$ and $\mathcal{L}_b^*$ yields the necessary information to assess the emergence of this phenomenon [@Bellomo; @Giorgi1; @Cabot_PRL]. Within this formalism, the general solution of the master equation at time $t$ can be formally written as $$\label{rhot} {\vert \rho(t) \rrangle}=\sum_{\mu}\sum_{k} p_{0\, k}^{\mu}\, {\vert \tau^\mu_k \rrangle} \, \mathrm{e}^{\lambda_k^\mu t},$$ where $\mu$ runs over the five blocks of $\mathcal{L}$ and $k$ between $1$ and the dimension of the corresponding block. In Eq. (\[rhot\]), we have introduced the right (left) eigenvectors of the Liouvillian ${\vert \tau^\mu_k \rrangle}$ (${\vert \bar{\tau}^\mu_k \rrangle}$), their respective eigenvalues $\lambda_k^\mu$, defined through $\mathcal{L} {\vert \tau_k^\mu \rrangle}=\lambda_k^\mu {\vert \tau_k^\mu \rrangle}$ ($\mathcal{L}^\dagger {\vert \bar{\tau}_k^\mu \rrangle}=\lambda_k^{\mu*} {\vert \bar{\tau}_k^\mu \rrangle}$) and the weight of the initial conditions $p_{0\, k}^{\mu}=\frac{\llangle \bar{\tau}^{\mu}_k {\vert \rho(0) \rrangle}}{\llangle \bar{\tau}^{\mu}_k{\vert \tau^{\mu}_k \rrangle}}$, where we use the Bra-Ket notation. Notice that left and right eigenvectors form a biorthogonal basis: $\llangle \bar{\tau}_j^{\mu } {\vert \tau_k^{\nu } \rrangle}\propto\delta_{\mu \nu}\delta_{jk}$. Being the system open, $\mathcal{L}_b$ ($\mathcal{L}$) is non-Hermitian, so it is actually possible to have points in parameter space in which several eigenvalues and the corresponding eigenvectors coalesce, making the matrix non-diagonalizable [@Longhi1]. These are the exceptional points (EPs) introduced in Sec. \[sec1\], whose order is defined as the number of eigenvalues and eigenvectors that coalesce. As anticipated, in this work we focus on the EPs occurring in $\mathcal{L}_{b(c)}$, as they are relevant for the emergence of synchronization. However, we notice that $\mathcal{L}_a$ is also able to display EPs as reported in appendix \[appA\]. We first show particular examples of the EPs of $\mathcal{L}_b$ by tuning $\delta/\gamma$ in Fig. \[EP\_w\] and $w/\gamma$ in \[EP\_d\] respectively. Then, in Fig. \[map\_freqs\], the overall picture is presented as a function of both detuning and pumping, showing the parameter regions where the Liouvillian displays from one to four frequencies: single-frequency regime (SFR), and similarly for three (TFR) and four (FFR). ![(a) Imaginary part of the eigenvalues (eigenfrequencies) of $\mathcal{L}_b$, varying $\delta/\gamma$, for $w/\gamma=0.25$, $s_{12}/\gamma=0$ and $\omega_0/\gamma=20$. In solid red and dashed blue the two different pairs of eigenvalues that coalesce. (b) The real part of the corresponding eigenvalues (decay rates). (c) Product of the corresponding pair of eigenvectors that coalesce.[]{data-label="EP_w"}](F1){width="0.95\columnwidth"} ![Same as in Fig. \[EP\_w\], but fixing $\delta/\gamma=0.4$ and varying $w/\gamma$. 
Notice that the smallest (in absolute value) decay rate is not zero for $w/\gamma=0$ as it can be checked from Eq. (\[eigs\_b1\]). Here only a pair of eigenvectors coalesce (twice).[]{data-label="EP_d"}](F2){width="0.95\columnwidth"} In Figs. \[EP\_w\] and \[EP\_d\], we plot the imaginary part of the eigenvalues (eigenfrequencies), their real part (decay rates), and the absolute value of the product of the coalescing (normalized) eigenvectors $|\llangle\tau^b_j|\tau^b_k\rrangle|$ that is going to reach value 1 in presence of coalescence. Both EPs appearing in $\mathcal{L}_{b(c)}$ are second order; two eigenvalues become the same and the corresponding eigenvectors become linearly dependent, which makes the matrix non-diagonalizable. In Fig. \[EP\_w\], increasing $\delta/\gamma$ we observe a common trend as the number of frequencies (decay rates) increases (decreases). While in Fig. \[EP\_w\] the two EPs appear for different detunings, notice that for $w/\gamma=0$ these arise for the same value $\delta=\gamma$ \[Eq. (\[eigs\_b1\]) with $s_{12}=0$\] where the term $V=\sqrt{\gamma^2-\delta^2}$ present in all eigenvalues vanishes. In this special case, $w/\gamma=0$, the emerging frequencies are degenerate and given by $\omega_0\pm\text{Im}(V)/2$. The physical intuition in this case is that the detuning needs to overcome the dissipation in order to induce the oscillatory behavior of the system, somehow analogously to an overdamped to underdamped transition, but keeping in mind that here $\omega_0/\gamma\gg1$. While there was a common trend in the emergence of EPs for increasing detuning, the number of frequencies and the related appearance of EPs is more complex for increasing pumping. For small detuning (and still vanishing coupling $s_{12}$) only one frequency is present into the system; then increasing it beyond a first EP we find a TFR and then again SFR, as can be appreciated in both Figs. \[EP\_d\] and \[map\_freqs\]. In the former we also notice that is the same pair of eigenvectors that coalesce (twice). Furthermore, the pair of EPs disappears for vanishing detuning with the frequency separation (closed area) in Fig. \[EP\_d\]a closing at $w/\gamma=2/3$ \[Eq. (\[eigs\_b2\]) with $s_{12}=0$\]. We remark that the presence of different frequency regions and the related branching of frequencies are associated to the presence of EPs. For the sake of comparison in appendix \[appA\] in Fig. \[eigs\_fig\] we show the smooth eigenvalues variation with parameters in absence of coalescence phenomena. ![Diagram of eigenfrequencies of $\mathcal{L}_b$ for $s_{12}/\gamma=0$ and $\omega_0/\gamma=20$. The white lines stand for second order EPs and separate the regions with different number of eigenfrequencies and decay rates. Notice that the lines $\delta/\gamma=0$ and $w/\gamma=0$ are not resolved in this plot, but analytical expressions are available (appendix \[appA\]). In purple we have the SFR in which there is only one eigenfrequency ($-\omega_0$) and four decay rates. The two purple regions are connected as $\delta/\gamma=0$ $w/\gamma=2/3$ is not an EP \[Eq. (\[eigs\_b2\])\], while for $w/\gamma=0$ there are two second order EPs at the same point involving different decay rates \[Eq. (\[eigs\_b1\])\]. Moreover, there is an isolated EP for $w/\gamma=1$, $\delta/\gamma=0$ and $s_{12}/\gamma=0$ \[see Eq. (\[eigs\_b2\])\]. In salmon, the TFR where there are three eigenfrequencies and three decay rates. In yellow the FFR where four eigenfrequencies and two decay rates are found. 
The most prominent features are displayed in this range of detunings and pumping rates. []{data-label="map_freqs"}](F3){width="0.9\columnwidth"} EPs separate dynamical regimes characterized by a different number of frequencies, and the richest scenario is found for $s_{12}/\gamma=0$ and varying $w/\gamma$ and $\delta/\gamma$ (Fig. \[map\_freqs\]), where three different regimes are found: SFR, TFR and FFR, all of them separated by lines of second order EPs (white lines). On the other hand, numerical analysis reveals that when $s_{12}/\gamma\neq0$ the system generally displays four frequencies and four decay rates, as EPs are not present (as in Fig. \[eigs\_fig\]). A notable exception is the case of $w/\gamma=1$, in which up to three EPs can be found for $s_{12}/\gamma\geq0$ and $\delta/\gamma<2$. We start at $s_{12}/\gamma=0$, where there are the two EPs that belong to the white lines of Fig. \[map\_freqs\], and an isolated EP at $\delta/\gamma=0$ \[see Eq. (\[eigs\_b2\])\]. As we increase $s_{12}/\gamma$, the two small-detuning EPs approach each other until they annihilate at $\delta/\gamma\approx0.26$, $s_{12}/\gamma\approx0.21$; then only the large-detuning EP remains. This last EP drifts to smaller $\delta/\gamma$ as the coupling is increased, until it reaches $\delta/\gamma=0$ at $s_{12}/\gamma=\sqrt{2}$ \[Eq. (\[eigs\_b2\])\] and disappears for larger coupling strengths. This peculiar behavior is illustrated in Fig. \[EP\_s12\]. ![Eigenfrequencies for $w/\gamma=1$, $\omega_0/\gamma=20$, varying the detuning and for multiple coupling strengths. (a) $s_{12}/\gamma=0.1$, (b) $s_{12}/\gamma=0.2$, (c) $s_{12}/\gamma=0.3$ and (d) $s_{12}/\gamma=1.2$.[]{data-label="EP_s12"}](F4){width="0.9\columnwidth"}

Synchronization of the coherences {#sync}
=================================

In this section we analyze the synchronization in the dynamics of observables related to the spin coherences (living in the $\mathcal{L}_{b(c)}$ sectors). Synchronization emerges here as a transient monochromatic oscillation in which the coherences of both qubits remain phase-locked until they reach the non-oscillatory stationary state of the system. In fact, as anticipated, in this system we find that synchronous dynamics can appear due to two different mechanisms. The first, reported above, is coalescence, which occurs widely when $s_{12}/\gamma=0$ and enables the system to display just one frequency (SFR) (see Fig. \[map\_freqs\]). As we show in Sec. \[sync\_coal\], despite the fact that the coherences oscillate monochromatically from the beginning, phase-locking generally emerges after a transient time related to the decay rates of the eigenmodes of $\mathcal{L}_{b(c)}$. The second mechanism is transient synchronization and corresponds to the presence of a subradiant eigenmode. In turn, the latter phenomenon can arise provided that $s_{12}/\gamma\neq0$ (Sec. \[sync\_trad\]). In this case, the coherences display four different frequencies in the early stage of the dynamics. However, the slowly decaying subradiant eigenmode brings the system to a regime where both frequency- and phase-locking are present after a transient time in which the rest of the eigenmodes decay out.

Synchronization due to coalescence {#sync_coal}
----------------------------------

To start with, let us consider the phenomenon of synchronization due to coalescence, emerging in the SFR regime in which $\mathcal{L}_b$ has just one eigenfrequency and four decay rates.
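Before examining specific trajectories, it may help to see how such dynamics can be generated numerically. The Python sketch below integrates Eq. (\[ME\]) for the parameters and initial state of Fig. \[traj1\] and evaluates a windowed Pearson correlation between $\langle\hat{\sigma}_1^x\rangle$ and $\langle\hat{\sigma}_2^x\rangle$, optionally maximized over a small delay. This is only a rough stand-in for the indicators $\mathcal{C}$ and $\mathcal{C}_{\text{max}}$ defined in appendix \[appB\] (whose exact definitions are not reproduced here).

``` python
import numpy as np
from scipy.integrate import solve_ivp

# Two-qubit operators in the basis {|ee>, |eg>, |ge>, |gg>}
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)
sp = sm.conj().T
sx = np.array([[0, 1], [1, 0]], dtype=complex)
id2 = np.eye(2, dtype=complex)

gam, w, omega0, delta, s12 = 1.0, 0.1, 20.0, 0.3, 0.0   # parameters of Fig. [traj1]
w1, w2 = omega0 + delta / 2, omega0 - delta / 2
H = 0.5 * w1 * np.kron(sz, id2) + 0.5 * w2 * np.kron(id2, sz) \
    + s12 * (np.kron(sm, sp) + np.kron(sp, sm))
L = (np.kron(sm, id2) + np.kron(id2, sm)) / np.sqrt(2)
jumps = [(2 * gam, L), (w, np.kron(sp, id2)), (w, np.kron(id2, sp))]

def D(o, rho):
    """Lindblad dissipator D[o] acting on a 4x4 density matrix."""
    return o @ rho @ o.conj().T - 0.5 * (o.conj().T @ o @ rho + rho @ o.conj().T @ o)

def rhs(t, y):
    rho = y.reshape(4, 4)
    drho = -1j * (H @ rho - rho @ H) + sum(r * D(o, rho) for r, o in jumps)
    return drho.ravel()

# Initial state |phi_0> = (|ee> + |eg> + |ge> + |gg>)/2, as in Fig. [traj1]
psi0 = 0.5 * np.ones(4, dtype=complex)
rho0 = np.outer(psi0, psi0.conj())
ts = np.linspace(0.0, 10.0, 4001)
sol = solve_ivp(rhs, (ts[0], ts[-1]), rho0.ravel(), t_eval=ts, rtol=1e-8, atol=1e-10)

sx1 = np.array([np.real(np.trace(r.reshape(4, 4) @ np.kron(sx, id2))) for r in sol.y.T])
sx2 = np.array([np.real(np.trace(r.reshape(4, 4) @ np.kron(id2, sx))) for r in sol.y.T])

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))

# Windowed Pearson factor (window ~ 1.2/gamma) with optional delay maximization (~ 0.35/gamma)
win = int(1.2 / (ts[1] - ts[0]))
dmax = int(0.35 / (ts[1] - ts[0]))
for t0 in (0, 1500, 3000):
    c0 = pearson(sx1[t0:t0 + win], sx2[t0:t0 + win])
    cmax = max(abs(pearson(sx1[t0:t0 + win], sx2[t0 + d:t0 + d + win])) for d in range(dmax))
    print(f"gamma*t = {ts[t0]:4.1f}:  C = {c0:+.3f},  C_max = {cmax:.3f}")
```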
We will analyze the dynamics of $\langle \hat{\sigma}_{1,2}^x\rangle$, which display an oscillatory decay towards the stationary state, and assess the emergence of synchronization with the use of the measures of synchronization introduced in appendix \[appB\], which are the Pearson factor (\[SyncMeasure\]) and its maximized version, optimized over all possible phase shifts. As we have anticipated, in spite of the presence of just one frequency, phase-locking between the coherences dynamics is not guaranteed. This is evident in Fig. \[traj1\] in which the phase between the trajectories slips from zero to almost $\pi$ at $\gamma t\approx4$, where it remains locked until the oscillation completely decays out. The Pearson factor accounting for delay (purple dashed line) is a good measure of the final synchronous oscillation, while we can appreciate the transient phase slip as signaled by the bare indicator (green solid line). ![Main panel: $\langle\hat{\sigma}_1^x\rangle$ (red solid line) and $\langle\hat{\sigma}_2^x\rangle$ (blue dashed line) in the SFR for the initial condition $|\phi_0\rangle=(|ee\rangle+|eg\rangle+|ge\rangle+|gg\rangle)/2$. Inset: $\mathcal{C}_{\langle \hat{\sigma}_1^x(\gamma t)\rangle,\langle \hat{\sigma}_2^x(\gamma t)\rangle}(\gamma \Delta t)$ (green solid line) and $\mathcal{C}_{\text{max}}$ (purple dashed line) with $\Delta t=1.2/\gamma$ and delay range $\delta\tau=0.35/\gamma$. Parameters $\omega_0/\gamma=20$, $s_{12}/\gamma=0$, $w/\gamma=0.1$, $\delta/\gamma=0.3$.[]{data-label="traj1"}](F5){width="0.95\columnwidth"} The slip of the relative phase can be understood by analyzing the semi-analytical solution of $\langle \hat{\sigma}_{1,2}^x\rangle$ (see appendix \[appA\]). Indeed we can particularize Eq. (\[solution\]) to the SFR in which $\text{Im}(\lambda^b_k)=-\omega_0$ $\forall k$ and hence $$\label{sol_SFR} \langle \hat{\sigma}_j^x(t)\rangle=\sum_{k=1}^4 2|p^b_{0k}\langle\tau_k^b\rangle_{xj}|e^{\text{Re}(\lambda_k^b)t}\cos[\psi_{k,xj}^b-\omega_0t],$$ the coefficients being defined in the appendix and $j=1,2$. Importantly, both the weight ($p^b_{0k}$) and phase ($\psi_{k,xj}^b$) associated to each eigenvalue depend on the initial condition. Then from Eq. (\[sol\_SFR\]) we find that there are multiple terms oscillating at the same frequency but with a different phase. The relative importance of each term changes in time due to the time dependent part of the weight factor $e^{\text{Re}(\lambda_k^b)t}$, where the eigenvalues of $\mathcal{L}_b$ are ordered such that $\lambda_4^b$ is the one with the smallest real part in absolute value. This makes the relative phase between the qubits to slip from the initial value determined by the initial condition to $\Delta \psi=\psi_{4,x1}-\psi_{4,x2}$ in a time scale related to $\text{Re}(\lambda_3^b)$, in which all terms in Eq. (\[sol\_SFR\]) except the less damped one are no longer significant. Notice that, the more similar $\text{Re}(\lambda_{3,4}^b)$ are, the more damped will be the oscillations when the relative phase eventually locks. The dependence of the weights on the initial condition can be illustrated considering the same parameters as in Fig. \[traj1\] but with the initial condition $|\phi_0\rangle=(|ee\rangle-|eg\rangle+|ge\rangle-|gg\rangle)/2$, in which is found that the relative phase is almost $\pi$ from the beginning (not shown here). It is also interesting to comment on the general effect of increasing the incoherent pumping rate $w/\gamma$. As we have shown in Fig. 
\[map\_freqs\], the SFR involves a wide range of values of $w/\gamma$, which implies that the same synchronization mechanism is present for large $w/\gamma$. Nevertheless, notice that the decoherence rate increases significantly with $w/\gamma$ (as also appreciated in Fig. \[EP\_d\]), strongly damping the coherent oscillations of $\langle \hat{\sigma}_{1,2}^x\rangle$. Thus, the amplitude of the synchronous oscillation decreases significantly with increasing incoherent pumping, which makes the phenomenon harder to observe and eventually suppresses it. Synchronization due to subradiance {#sync_trad} ---------------------------------- Let us now tackle the case in which several frequencies are present from the early stage of the dynamics ($s_{12}/\gamma\neq0$). In this parameter regime, spontaneous synchronization can emerge, leading to a monochromatic evolution, and it is known to be related to the presence of a subradiant eigenmode [@Bellomo]. In this case $\mathcal{L}_b$ generally displays four frequencies and four different decay rates, and thus synchronization can only emerge in the presence of a slowly dissipating eigenmode [@SyncRev2], i.e. when $\text{Re}(\lambda^b_4)/\text{Re}(\lambda^b_3)\ll1$. This statement can be understood by analyzing the semi-analytical solution of $\langle \hat{\sigma}_{1,2}^x\rangle$ given in Eq. (\[solution\]): at the beginning all four frequencies are involved and the qubits oscillate irregularly; however, since each frequency component decays with a different rate given by $\text{Re}(\lambda^b_k)$, after a transient time, if $\text{Re}(\lambda^b_4)/\text{Re}(\lambda^b_3)\ll 1$, a significant oscillation governed by the eigenmode with the smallest decay rate $\text{Re}(\lambda^b_4)$ remains, making the qubits oscillate synchronously with the phase difference locked to $\Delta \psi=\psi_{4,x1}-\psi_{4,x2}$. An example of such a phenomenon is shown in Fig. \[traj3\](a), where we can observe that after a time of about $\gamma t\approx 4$ the two qubits oscillate synchronously with a phase difference of about $\pi$. Both indicators of synchronization, the Pearson factor and the maximized one, reach a stationary value close to -1 or 1, respectively. This figure can be compared with Fig. \[traj1\], as they share the same initial condition. Notice that in both cases synchronization emerges after a transient of similar duration and the lasting amplitudes are of similar magnitude. Nevertheless, in this case the transient to synchronization displays strong amplitude modulations related to the presence of multiple frequencies. ![(a) Main panel: $\langle\hat{\sigma}_1^x\rangle$ (red solid line) and $\langle\hat{\sigma}_2^x\rangle$ (blue dashed line) for the initial condition $|\phi_1\rangle=(|ee\rangle+|eg\rangle+|ge\rangle+|gg\rangle)/2$. Inset: $\mathcal{C}_{\langle \hat{\sigma}_1^x(\gamma t)\rangle,\langle \hat{\sigma}_2^x(\gamma t)\rangle}(\gamma \Delta t)$ (green solid line) and $\mathcal{C}_{\text{max}}$ (purple dashed line) with $\Delta t=1.2/\gamma$ and $\delta\tau=0.35/\gamma$. Parameters $\omega_0/\gamma=20$, $s_{12}/\gamma=1$, $w/\gamma=0.1$, $\delta/\gamma=2$. (b) Same as in (a) but fixing the incoherent pumping rate to $w/\gamma=0.75$. []{data-label="traj3"}](F6){width="0.95\columnwidth"} The influence of the different parameters on the synchronization behavior can be analyzed systematically by studying the ratio of the two smallest eigenmode decay rates [@SyncRev2]. Indeed, the case with $w/\gamma=0$ was already studied in Ref. 
[@Bellomo], in which it was shown that the more detuned the qubits are, the more coherent coupling is needed for synchronization to emerge, analogously to the classical Arnold-tongue behavior. As a matter of fact, we find that a nonzero $w/\gamma$ preserves this overall behavior but decreases the capacity of the qubits to synchronize. This is illustrated in Fig. \[traj3\](b), where the increased incoherent pumping rate inhibits the emergence of synchronization, as indicated by the marked oscillatory behavior of the Pearson factor. The detrimental effect of the incoherent pumping can be understood by recalling that it constitutes an additional decoherence channel acting locally on each qubit: as $w/\gamma$ is increased, the effect of the common environment is counteracted by local decoherence, which decreases the disparity between the two smallest decay rates. This is explicitly shown in Fig. \[map\_ratio\], in which the ratio of the two smallest eigenmode decay rates is plotted as $w/\gamma$ and $\delta/\gamma$ are varied. For small enough $w/\gamma$ we can see that there is one decay rate significantly smaller than the rest, enabling the emergence of synchronization (as in Fig. \[traj3\]a). However, as $w/\gamma$ increases this ratio tends to one and synchronization no longer emerges (as in Fig. \[traj3\]b). Moreover, notice that the overall magnitudes of the decay rates increase with $w/\gamma$, also causing a faster damping of the coherent oscillations, as commented in Sec. \[sync\_coal\]. ![In color: ratio of the two smallest decay rates $\text{Re}(\lambda^b_4)/\text{Re}(\lambda^b_3)$ varying $w/\gamma$ and $\delta/\gamma$, with the other parameters fixed to $\omega_0/\gamma=20$ and $s_{12}/\gamma=1$.[]{data-label="map_ratio"}](F7){width="0.9\columnwidth"} Finally, it is interesting to highlight that in our system the two kinds of synchronization cannot emerge in the same parameter regime. This is because, when $s_{12}/\gamma=0$ and EPs are predicted, $\mathcal{L}_b$ either displays a single frequency (SFR) or several frequencies with the same decay rate (TFR and FFR). Moreover, it turns out that in the TFR the smallest decay rate is shared by two frequencies, which makes the emergence of synchronization via the second mechanism impossible. Signatures of synchronization in the correlation spectrum {#spectrum} ========================================================= In this section we present a complementary view of the phenomenon of synchronization, analyzing its signatures in the two-time correlation spectrum, an indicator relevant when probing the system and accessible in many setups. This approach to characterizing synchronization was taken, for instance, in Ref. [@Holland1]. The correlations considered here lie in the same Liouvillian sectors $\mathcal{L}_{b(c)}$ as the local observables considered in the previous section. Two-time correlations can be considered either for collective spin operators $\langle \hat{L}(t+\tau)\hat{L}(t)\rangle$ or for local ones, $\langle\hat{\sigma}_{j}^-(t+\tau)\hat{\sigma}^+_{j}(t)\rangle$. An important motivation behind considering both collective and local correlations comes from the master equation in Eq. (\[ME\]), in which both kinds of operators are present in the dissipators $\mathcal{D}$, in the form of collective dissipation or local pumping. Let us proceed as follows: first we will consider the case $w/\gamma=0$ in Sec. 
\[specw0\], where analytical results can be obtained and used to illustrate our main findings; then the role of incoherent pumping will be discussed in Sec. \[specw\]. The mathematical details are presented in appendix \[appC\]. Case with $w/\gamma=0$ {#specw0} ---------------------- We consider the system in the absence of pumping ($w/\gamma=0$) for both kinds of synchronization regimes discussed in the previous section. We compute both $\langle \hat{L}(\tau)\hat{L}(0)\rangle_{ss}$ and $\langle\hat{\sigma}_{j}^-(\tau)\hat{\sigma}^+_{j}(0)\rangle_{ss}$, where the subscript $ss$ indicates they are computed in the stationary state of the system, which in the absence of driving is $|gg\rangle\langle gg|$. This is the reason why the calculation can be done analytically, considering only the one-excitation sector of $\mathcal{L}_b$, as shown in appendix \[appC\]. The Fourier transform of these two-time correlations \[Eq. (\[out\_spec\])\], or correlation spectrum, displays the relevant information about the collective excitations of the system, such as their frequencies, decay rates and the overlap of the correlators with the eigenmodes. We start by considering the correlation spectrum for collective operators, $\mathcal{S}_{\hat{L}\hat{L}^\dagger}(\omega)$, in the SFR induced by coalescence \[Fig. \[specsw0\](a)\] and in a case in which synchronization emerges due to a subradiant eigenmode \[Fig. \[specsw0\](b)\]. In both cases we clearly observe interference effects, as the spectrum is not simply Lorentzian. However, in the SFR the interference occurs just at the resonance frequency $\omega_0/\gamma$, while in the subradiance case the interference occurs between two resonances of different frequency that correspond to $\omega_0\pm \text{Im}(V)/2$. Notice that in all these plots, when comparing the SFR with the subradiant regime, the frequency window is taken to be of the same size so that the widths of the peaks can be compared faithfully. ![Fourier transform of $\langle \hat{L}(\tau)\hat{L}(0)\rangle_{ss}$ (a) and (b) and of $\langle\hat{\sigma}_{1(2)}^-(\tau)\hat{\sigma}^+_{1(2)}(0)\rangle_{ss}$ (c) and (d) in red solid (blue dashed) lines. The parameters are fixed to $\omega_0/\gamma=20$, $\delta/\gamma=0.5$ with $w/\gamma=0.0$ in all figures. In (a) and (c) we have $s_{12}/\gamma=0$, in (b) and (d) $s_{12}/\gamma=1$. In the case $s_{12}/\gamma=1$, the broad resonance has frequency $-\omega_0-V_I/2=-21.025\gamma$ and width $(\gamma+V_R)/2=0.988\gamma$, while the narrow one $-\omega_0+V_I/2=-18.975\gamma$ and $(\gamma-V_R)/2=0.024\gamma$. Notice that these frequencies are indicated in panel (b) as the ticks without labels. In panel (c) we have included in gray dashed lines the two terms of Eq. (\[spec\_S1\]) that when subtracted yield the red curve.[]{data-label="specsw0"}](F8){width="0.95\columnwidth"} Considering the exact expressions for $\mathcal{S}_{\hat{L}\hat{L}^\dagger}(\omega)$, we find that, in the SFR ($s_{12}/\gamma=0$), $$\label{spec_C1} \begin{split} \mathcal{S}_{\hat{L}\hat{L}^\dagger}(\omega)=\frac{2}{V}\bigg[\frac{(\omega+\omega_0)^2}{(\omega+\omega_0)^2+\frac{1}{4}(\gamma-V)^2}\\ -\frac{(\omega+\omega_0)^2}{(\omega+\omega_0)^2+\frac{1}{4}(\gamma+V)^2}\bigg]. \end{split}$$ This corresponds to two superposed (interfering) resonances, opposite in sign and each centered at the same frequency $\omega_0$ but with a different decay rate (in this case $V$ is real), which yield a broad peak with a transparency window whose width is given by the narrow resonance. 
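As a simple numerical cross-check of Eq. (\[spec\_C1\]), the sketch below (Python/NumPy-SciPy, in units of $\gamma$; not part of the original analysis) evaluates the SFR collective spectrum for the parameters of Fig. \[specsw0\](a) and locates the two maxima flanking the transparency window.

```python
import numpy as np
from scipy.signal import find_peaks

# SFR collective correlation spectrum, Eq. (spec_C1): s12 = 0, w = 0,
# parameters as in Fig. [specsw0](a), everything in units of gamma.
gamma, omega0, delta = 1.0, 20.0, 0.5
V = np.sqrt(gamma**2 - delta**2)          # real for delta < gamma

def S_LL(omega):
    x2 = (omega + omega0)**2
    return (2.0/V)*(x2/(x2 + 0.25*(gamma - V)**2) - x2/(x2 + 0.25*(gamma + V)**2))

w_grid = np.linspace(-omega0 - 3.0, -omega0 + 3.0, 4001)
spec = S_LL(w_grid)
peaks, _ = find_peaks(spec)

print("S(-omega_0)          =", S_LL(-omega0))      # exact zero: destructive interference
print("maximum of spectrum  =", spec.max())
print("local maxima at omega=", w_grid[peaks])      # two peaks flanking the central dip
```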
Moreover, the prefactor $(\omega+\omega_0)$ implies that $\mathcal{S}_{\hat{L}\hat{L}^\dagger}(-\omega_0)=0$, as observed in the plots. In the case $s_{12}/\gamma\neq0$, $V$ becomes complex and we denote its real and imaginary parts as $V_R$ and $V_I$, respectively. The exact expression now reads $$\label{spec_C2} \begin{split} \mathcal{S}_{\hat{L}\hat{L}^\dagger}(\omega)=\frac{2(\omega+\omega_0-s_{12})}{|V|^2}\bigg[\frac{\gamma\frac{V_I}{2}+V_R(\omega+\omega_0-V_I)}{(\omega+\omega_0-\frac{V_I}{2})^2+\frac{1}{4}(\gamma-V_R)^2}\\ -\frac{\gamma\frac{V_I}{2}+V_R(\omega+\omega_0+V_I)}{(\omega+\omega_0+\frac{V_I}{2})^2+\frac{1}{4}(\gamma+V_R)^2}\bigg], \end{split}$$ in which we again observe the interference of two resonances, now centered at different frequencies $\omega=\omega_0\pm V_I/2$ and with different decay rates. Notice that here completely destructive interference occurs at $\omega=-\omega_0+s_{12}$. Moreover, for $s_{12}/\delta\gg1$, $V_R\approx\gamma$ while $V_I\approx2 s_{12}$, which implies that there is a significantly superradiant eigenmode and a significantly subradiant one, the latter being the one synchronizing the spins. This is clearly observed in Fig. \[specsw0\] (b), in which the superradiant eigenmode is centered around $\omega\approx-\omega_0-s_{12}$ and the subradiant one at around $\omega\approx-\omega_0+s_{12}$. We now compare these results with those for the local correlation spectra (for each spin), $\langle\hat{\sigma}_{1(2)}^-(\tau)\hat{\sigma}^+_{1(2)}(0)\rangle_{ss}$, for the same two cases \[see Fig. \[specsw0\] (c),(d)\]. Focusing first on the SFR, we observe that $\mathcal{S}_{\hat{\sigma}_{1(2)}^-\hat{\sigma}_{1(2)}^+}(\omega)$ displays an asymmetric peak slightly displaced to the left (right) of $\omega_0$. This is still an interference effect, as the exact result shows: $$\label{spec_S1} \begin{split} \mathcal{S}_{\hat{\sigma}_{1(2)}^-\hat{\sigma}_{1(2)}^+}(\omega)=\frac{2}{V}\bigg[\frac{(\omega+\omega_0)[\omega+\omega_0\mp\frac{\delta}{2}]+\frac{\gamma}{4}(\gamma-V)}{(\omega+\omega_0)^2+\frac{1}{4}(\gamma-V)^2}\\ -\frac{(\omega+\omega_0)[\omega+\omega_0\mp\frac{\delta}{2}]+\frac{\gamma}{4}(\gamma+V)}{(\omega+\omega_0)^2+\frac{1}{4}(\gamma+V)^2}\bigg], \end{split}$$ where the upper sign corresponds to spin 1 and the lower sign to spin 2. Here we find the peak of each spin to be centered at a slightly shifted frequency: the two-time correlations of each spin are affected by the presence of the other, detuned, spin, and each spectrum experiences a pushing effect. Of course these self-correlations also enter the collective spectra described above, but there the cross-correlations between the spins also play a major role. In this case we have plotted each term of Eq. (\[spec\_S1\]) as gray dashed lines in Fig. \[specsw0\](c), from which we can appreciate that the term with the small decay rate already accounts for the very asymmetric resonance, while the contribution from the other term is almost homogeneous. In the case of $s_{12}/\gamma\neq0$, Fig. \[specsw0\](d), we see that the self-correlations only display the sharp peak also present in the collective spectrum of correlations: the superradiant eigenmode is barely visible in this case, while the subradiant one, which leads to synchronization, is the main contribution. The main reason for the difference between Figs. \[specsw0\](d) and \[specsw0\](b) is that the collective operator in the former is almost orthogonal to the subradiant eigenmode. 
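The positions and widths of the two interfering resonances in Eq. (\[spec\_C2\]) are set by the eigenvalues $\lambda_{3,4}^b$ of Eq. (\[eigs\_b1\]). A minimal Python sketch (NumPy; illustrative only) computes them for the parameters of Fig. \[specsw0\](b):

```python
import numpy as np

# Parameters of Fig. [specsw0](b): s12/gamma = 1, delta/gamma = 0.5, w = 0.
gamma, omega0, delta, s12 = 1.0, 20.0, 0.5, 1.0
V = np.sqrt((gamma + 2j*s12)**2 - delta**2)     # V of Eq. (eigs_b1), complex here
V_R, V_I = V.real, V.imag

print("V_R/gamma =", V_R, "  V_I/gamma =", V_I)
# eigenfrequency and decay rate of the superradiant eigenmode (lambda_3^b)
print("superradiant: omega =", -omega0 - V_I/2, "  decay rate =", (gamma + V_R)/2)
# eigenfrequency and decay rate of the subradiant eigenmode (lambda_4^b)
print("subradiant  : omega =", -omega0 + V_I/2, "  decay rate =", (gamma - V_R)/2)
```

For $s_{12}/\delta\gg1$ this reproduces the limits $V_R\to\gamma$ and $V_I\to2s_{12}$ quoted above, i.e. a strongly subradiant eigenmode responsible for the synchronous oscillation.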
In fact, $\hat{L}$ is exactly the superradiant eigenmode in the absence of detuning. Therefore the contributions of both eigenmodes acquire the same importance in the collective spectrum. In this case the analytical results are too cumbersome to provide additional insight. One of the main results discussed here is that the two kinds of synchronization present different signatures in the correlation spectrum. In the case of synchronization due to coalescence, we find an interference effect at the resonance frequency, which manifests itself as a visible dip in the case of a collective measurement or as asymmetric resonances when addressing each spin separately. On the other hand, the signature of synchronization due to subradiance is precisely the presence of a very narrow resonance in the correlation spectrum, which is visible both in collective measurements (when $\hat{L}$ is not orthogonal to the subradiant eigenmode) and in local correlations, where it turns out to dominate. Notice that the width of the subradiant eigenmode is significantly smaller than that of the peaks in the SFR; sharper peaks are therefore predicted for synchronization between detuned spins due to subradiance than for synchronization due to coalescence. An interesting point is that, in general, the interference effects introduce a fine structure in the spectrum of the system, with features of width smaller than the intrinsic one given by $\gamma$: transparency windows, subradiant eigenmodes, or completely destructive interferences. Indeed, interference effects in the spectrum of quantum systems can be exploited, for instance, in laser cooling schemes, as described in Ref. [@Morigi]. Case with $w/\gamma\neq0$ {#specw} ------------------------- In this section we address the effects of the incoherent pumping on the correlation spectrum. In this case the stationary state of the system is not the vacuum and involves, in general, all the density-matrix elements in the $\mathcal{L}_a$ sector [@Marco2; @Bellomo]. The main results are illustrated in Fig. \[specsw\], in which the spectra of the collective and local correlations are plotted for two different values of $w/\gamma$. The results should be compared with those of the previous section, as we have just added a finite incoherent pumping rate. In the SFR \[panels (a) and (c)\] we see that the main effect of the incoherent pumping is to decrease the visibility of the interference effects; for the collective correlation the depth of the central dip decreases, while for local correlations the resonance becomes less asymmetric. In the case of $s_{12}/\gamma=1$ \[panels (b) and (d)\], we see that the width of the subradiant eigenmode increases significantly. Indeed, for the collective correlation the corresponding peak becomes barely visible, while for the local correlations it still dominates but with a significant decrease (increase) of its height (width) \[compare with Fig. \[specsw0\](d)\]. The increase of the width of the subradiant eigenmode is already apparent in the expressions for the eigenvalues at $\delta/\gamma=0$, Eq. (\[eigs\_b2\]), in which we see that the real part of $\lambda_4^b$ increases linearly with $w/\gamma$, the eigenmode being completely subradiant for $w/\gamma=0$. This is a clear manifestation of the fact, noted above, that this local incoherent process counteracts collective dissipation, the latter being the mechanism behind the strong disparities in the decay rates of the eigenmodes. 
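A short illustration of this degradation of subradiance, using the closed-form eigenvalues at $\delta=0$, Eq. (\[eigs\_b2\]), is sketched below (Python/NumPy; $s_{12}/\gamma=1$ chosen as in Fig. \[map\_ratio\], all other numbers illustrative):

```python
import numpy as np

# Decay rates from Eq. (eigs_b2) at delta = 0: the least-damped rate grows as 3w/2,
# so local pumping washes out the disparity between the two smallest decay rates.
gamma, s12, omega0 = 1.0, 1.0, 20.0
for w in (0.05, 0.25, 0.5, 0.75, 1.0):
    Vt = np.sqrt((w**2 + gamma**2 + 6*w*gamma - 4*s12**2) + 4j*s12*(w - gamma))
    lam = np.array([-(3*gamma + 2*w + Vt)/2 - 1j*omega0,
                    -(3*gamma + 2*w - Vt)/2 - 1j*omega0,
                    -gamma - w/2 - 1j*(omega0 + s12),
                    -1.5*w - 1j*(omega0 - s12)])
    rates = np.sort(-lam.real)                 # decay rates, smallest first
    print(f"w/gamma={w:.2f}: smallest rate = {rates[0]:.3f}, "
          f"ratio of two smallest = {rates[0]/rates[1]:.3f}")
```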
![Fourier transform of $\langle \hat{L}(\tau)\hat{L}(0)\rangle_{ss}$ (a) and (b) and of $\langle\hat{\sigma}_{1}^-(\tau)\hat{\sigma}^+_{1}(0)\rangle_{ss}$ (c) and (d). The parameters are fixed to $\omega_0/\gamma=20$, $\delta/\gamma=0.5$ in all figures with $w/\gamma=0.05$ in red solid lines and $w/\gamma=0.1$ in blue dashed lines. In (a) and (c) we have $s_{12}/\gamma=0$, in (b) and (d) $s_{12}/\gamma=1$.[]{data-label="specsw"}](F9){width="0.9\columnwidth"} Conclusions =========== In analogy with what happens in classical systems, quantum synchronization is connected to the spontaneous emergence of a monochromatic phase-locked oscillation among several coupled units. It is displayed by correlated local observables as well as in two-time correlation spectra [@Cabot_PRL]. In the framework of open quantum systems, this phenomenon can be seen as an ordered decay towards the stationary state of the system, and thus it is intimately related to the presence of a certain structure in the Liouvillian eigenspectrum [@SyncRev2]. This insight makes it possible to establish a relation between synchronization and other phenomena, such as subradiance [@Bellomo] or the presence of EPs, and to find signatures of synchronization in the correlation spectrum of the system, as shown here and in Ref. [@Cabot_PRL]. In this paper we have considered the case of two detuned spins in the presence of collective dissipation and incoherent pumping and made a detailed comparison between two different mechanisms that can bring the system dynamics to a (quasi-) monochromatic behavior. One of the mechanisms is transient synchronization, related to subradiance and to the presence of a weakly damped, long-lived collective excitation, analogous to what was already studied in a series of previous papers (see for instance Ref. [@Bellomo]), with the new ingredient of local incoherent pumping. This transient synchronization has also been shown to be intimately related to long-lived correlations [@Giorgi1; @manzano]. As for the second, far less explored mechanism, the system, exhibiting coalescence, can reach a regime in which it displays only one collective monochromatic oscillation (despite the interactions, the presence of several intrinsic frequencies in the system, and multiple decay rates). We showed here that such a mechanism, reported in Ref. [@Holland1] for two clouds of atoms at the mean-field level, is actually enabled by the presence of EPs: here in the Liouvillian spectrum, and in Ref. [@Holland1] in the (non-Hermitian) matrix governing the dynamics of the relevant two-time correlations \[Eqs. (8) to (10) in [@Holland1]\], in which an EP lies just at the synchronization transition point. If, on the one hand, synchronization due to coalescence is intrinsically related to the presence of EPs, on the other hand such singularities can also be found (for special choices of the system parameters) in subradiance-induced synchronization. However, we have found here that the two mechanisms of synchronization are mutually exclusive in our system (this might not be the most general situation). While in the presence of subradiance both frequency- and phase-locking emerge after a transient time, in the SFR a single frequency is of course set from the beginning, while phase-locking emerges after a transient time due to the presence of multiple decay rates. The two aforementioned mechanisms of synchronization are found to have different signatures in the correlation spectrum. 
As found in other systems [@Cabot_EPL], the signature of coalescence is an interference exactly at the resonant frequency, due to the presence of multiple eigenmodes with the same frequency but different decay rates. Here we have also found that in the case of collective correlations this interference appears as a symmetric dip just at resonance, while for local correlations there is a strongly asymmetric peak. In the case of synchronization due to weak dissipation we find the spectral signature to be a significantly narrower peak, corresponding to a subradiant eigenmode. This signature turns out to be common to other systems [@Cabot_PRL], as it is just the signature of a slowly decaying eigenmode. Moreover, comparing the widths of the peaks in the SFR regime and in the synchronization regime, we find that of the subradiant eigenmode to be the narrowest, which could be interesting for applications. Indeed, as a general fact, we find that for both coalescence and subradiance, synchronization is related to interference effects in the correlation spectrum that yield a fine structure in a frequency range smaller than the scale fixed by the rates of the intrinsic incoherent processes, here $\gamma$ and $w$. We finally notice that, while the overall effect of the incoherent pumping is detrimental for synchronization and for these interference effects, it is positive in the sense that it enables a whole region of EPs in the absence of coherent coupling between the spins. Acknowledgments {#acknowledgments .unnumbered} =============== The authors acknowledge support from MINECO/AEI/FEDER through projects EPheQuCS FIS2016-78010-P, CSIC Research Platform PTI-001, the María de Maeztu Program for Units of Excellence in R&D (MDM-2017-0711), and funding from CAIB PhD and postdoctoral programs. Liouville formalism {#appA} =================== Liouville representation of the master equation ----------------------------------------------- The master equation (\[ME\]) describing the evolution of $\hat{\rho}$ can be rewritten as $\dot{\hat{\rho}}=\mathcal{L}\hat{\rho}$, where $\mathcal{L}$ is the Liouvillian superoperator. In the Liouville representation, the state of the system is represented by a vector of the Hilbert-Schmidt space $\mathcal{H}=\mathbb{C}^{16}$ and $\mathcal{L}$ is a non-Hermitian matrix (more details can be found in Refs. [@Bellomo; @Marco2]). The vector in the Hilbert-Schmidt space representing the state of the system is $|\rho\rrangle$, which is obtained through a mapping that corresponds to a row-major vectorization[^1]: $$\label{DefinitionRL} \hat{\rho} = \sum_{i,j=1}^4 \rho_{ij} |i\rangle \langle j| \rightarrow {\vert \rho \rrangle} = \sum_{i,j=1}^4 \rho_{ij} {\vert ij \rrangle} ,$$ with ${\vert ij \rrangle}=|i\rangle\otimes|j\rangle$. In this space, vectors are denoted as ${\vert \cdot \rrangle}$ while ${\llangle \cdot\vert}$ denotes their conjugate-transpose partners. The inner product is defined as $\llangle v_2|v_1\rrangle=\text{Tr}(\hat{v}_2^\dagger \hat{v}_1)$ where $\hat{v}_1$ ($\hat{v}_2^\dagger$) are the matrices obtained by mapping $|v_1\rrangle$ ($\llangle v_2|$) back into the Hilbert space. 
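The row-major vectorization convention and the identity $\text{vec}(\hat{o}_1\hat{\rho}\hat{o}_2)=(\hat{o}_1\otimes \hat{o}_2^\top)\text{vec}(\hat{\rho})$ recalled in the footnote can be verified with a few lines of Python (NumPy; random matrices used purely as a sanity check):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                      # two qubits: 4-dimensional Hilbert space
rho = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
o1  = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
o2  = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))

vec = lambda m: m.reshape(-1)              # row-major vectorization, |rho>> = vec(rho)

# vec(o1 rho o2) = (o1 (x) o2^T) vec(rho)
print(np.allclose(vec(o1 @ rho @ o2), np.kron(o1, o2.T) @ vec(rho)))

# inner product <<v2|v1>> = Tr(v2^dagger v1) equals the ordinary vector dot product
v1, v2 = rho, o1
print(np.allclose(np.trace(v2.conj().T @ v1), np.vdot(vec(v2), vec(v1))))
```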
Then, the matrix representation of $\mathcal{L}$ is given by $$\label{m_L} \begin{split} \mathcal{L}=-i(\hat{H}\otimes \mathbb{I} -\mathbb{I}\otimes \hat{H}^\top)+\sum_{i,j=1}^2 \gamma \big[\hat{\sigma}_i^-\otimes(\hat{\sigma}_j^+)^\top -(\hat{\sigma}_j^+\hat{\sigma}^-_i)\otimes\frac{\mathbb{I}}{2}-\frac{\mathbb{I}}{2}\otimes(\hat{\sigma}_j^+\hat{\sigma}^-_i)^\top\big]\\ +\sum_{i=1}^2 w \big[\hat{\sigma}_i^+\otimes(\hat{\sigma}_i^-)^\top -(\hat{\sigma}_i^-\hat{\sigma}^+_i)\otimes\frac{\mathbb{I}}{2}-\frac{\mathbb{I}}{2}\otimes(\hat{\sigma}_i^-\hat{\sigma}^+_i)^\top\big]. \end{split}$$ An important feature for this kind of system is that the Liouvillian matrix takes a block-diagonal form [@Bellomo; @Marco2]: $\mathcal{L}=\bigoplus_\mu \mathcal{L}_\mu$, with $\mu \in\{a,b,c,d,e\}$. In the same way the Hilbert-Schmidt space $\mathcal{H}$ can be decomposed in these same blocks or subspaces $\mathcal{H}=\bigoplus_\mu \mathcal{H}_\mu$ each of which is spanned by the following basis elements: subspace $\mathcal{H}_a$ is spanned by $|eeee\rrangle$, $|egeg\rrangle$, $|egge\rrangle$, $|geeg\rrangle$, $|gege\rrangle$, and $|gggg\rrangle$; $\mathcal{H}_b$ by $|eeeg\rrangle$, $|eege\rrangle$, $|eggg\rrangle$, and $|gegg\rrangle $; $\mathcal{H}_c$ by $|egee\rrangle$, $|geee\rrangle$, $|ggeg\rrangle$, and $|ggge\rrangle$; $\mathcal{H}_d$ by $|eegg\rrangle$; and $\mathcal{H}_e$ by $|ggee\rrangle$. Then the different Liouvillian blocks read as $$\label{La} \mathcal{L}_a = \begin{pmatrix} -2\gamma & w & 0 & 0 & w & 0 \\ \gamma & -(\gamma+w) & -\frac{\gamma}{2}+is_{12} & -\frac{\gamma}{2}-is_{12} & 0 & w\\ \gamma & -\frac{\gamma}{2}+is_{12} & -(\gamma+w)-i\delta & 0 & -\frac{\gamma}{2}-is_{12} & 0 \\ \gamma & -\frac{\gamma}{2}-is_{12} & 0 & -(\gamma+w)+i\delta & -\frac{\gamma}{2}+is_{12} & 0 \\ \gamma & 0 & -\frac{\gamma}{2}-is_{12} & -\frac{\gamma}{2}+is_{12} & -(\gamma+w) & w\\ 0 & \gamma & \gamma & \gamma & \gamma & -2w \end{pmatrix},$$ \ \ $$\label{Lb} \mathcal{L}_b= \begin{pmatrix} -\frac{3\gamma+w}{2}-i(\omega_0-\frac{\delta}{2}) & -\frac{\gamma}{2}+is_{12} & 0 & w\\ -\frac{\gamma}{2}+is_{12} & -\frac{3\gamma+w}{2}-i(\omega_0+\frac{\delta}{2}) & w & 0\\ \gamma & \gamma & -\frac{\gamma+3w}{2}-i(\omega_0+\frac{\delta}{2}) & -\frac{\gamma}{2}-is_{12}\\ \gamma & \gamma & -\frac{\gamma}{2}-is_{12} & -\frac{\gamma+3w}{2}-i(\omega_0-\frac{\delta}{2}) \end{pmatrix},$$ $\mathcal{L}_c$ is the complex conjugate of $\mathcal{L}_b$, $\mathcal{L}_d=-(\gamma+w)-2i\omega_0$, and $\mathcal{L}_e=(\mathcal{L}_d)^*$. Analytical expressions for the eigenvalues ------------------------------------------ In the most general case in which all parameters are nonzero, the analytical expressions for the complete set of eigenvalues of these matrices $\lambda_k^\mu$ are very cumbersome and will not be reported here. Nevertheless, for some particular cases, useful analytical expressions can be found. In fact for $w/\gamma=0$ the full eigenspectrum can be obtained [@Bellomo]. The eigenvalues of $\mathcal{L}_b$, which are the relevant ones for our synchronization analysis, are: $$\label{eigs_b1} \begin{split} \lambda_1^b&=-\frac{1}{2}[3\gamma+V^*]-i\omega_0,\\ \lambda_2^b&=-\frac{1}{2}[3\gamma-V^*]-i\omega_0,\\ \lambda_3^b&=-\frac{1}{2}[\gamma+V]-i\omega_0,\\ \lambda_4^b&=-\frac{1}{2}[\gamma-V]-i\omega_0, \end{split}$$ ordered with increasing real part and $V=\sqrt{(\gamma+i2s_{12})^2-\delta^2}$. Notice that for $\delta=0$ the real part of $\lambda_4^b$ is zero. 
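A minimal numerical check of Eq. (\[eigs\_b1\]) against the explicit matrix of Eq. (\[Lb\]) is sketched below (Python/NumPy; the parameter values are those of Fig. \[specsw0\](b), used here only as an illustration):

```python
import numpy as np

def Lb(gamma, w, s12, delta, omega0):
    """Explicit matrix of the Liouvillian block L_b, Eq. (Lb)."""
    i = 1j
    return np.array([
        [-(3*gamma + w)/2 - i*(omega0 - delta/2), -gamma/2 + i*s12, 0.0, w],
        [-gamma/2 + i*s12, -(3*gamma + w)/2 - i*(omega0 + delta/2), w, 0.0],
        [gamma, gamma, -(gamma + 3*w)/2 - i*(omega0 + delta/2), -gamma/2 - i*s12],
        [gamma, gamma, -gamma/2 - i*s12, -(gamma + 3*w)/2 - i*(omega0 - delta/2)],
    ], dtype=complex)

# closed-form spectrum of Eq. (eigs_b1), valid for w = 0
gamma, s12, delta, omega0 = 1.0, 1.0, 0.5, 20.0
V = np.sqrt((gamma + 2j*s12)**2 - delta**2)
analytic = np.array([-(3*gamma + V.conjugate())/2 - 1j*omega0,
                     -(3*gamma - V.conjugate())/2 - 1j*omega0,
                     -(gamma + V)/2 - 1j*omega0,
                     -(gamma - V)/2 - 1j*omega0])
numeric = np.linalg.eigvals(Lb(gamma, 0.0, s12, delta, omega0))
print("max |numeric - analytic| =",
      np.max(np.abs(np.sort_complex(numeric) - np.sort_complex(analytic))))

# for delta = 0 and w = 0 the least-damped eigenvalue is purely imaginary (up to roundoff)
print("min |Re(lambda)| at delta=0:",
      np.min(np.abs(np.linalg.eigvals(Lb(gamma, 0.0, s12, 0.0, omega0)).real)))
```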
The appearance of purely imaginary eigenvalues corresponds to the existence of decoherence-free subspaces which enable the possibility of stationary synchronization [@manzano; @cabot_npj; @dieter2]. It is also useful (and possible) to write down the eigenvalues for the case with $\delta/\gamma=0$ and nonvanishing pumping, in which we have: $$\label{eigs_b2} \begin{split} \lambda_1^b&=-\frac{1}{2}[3\gamma+2w+\tilde{V}]-i\omega_0,\\ \lambda_2^b&=-\frac{1}{2}[3\gamma+2w-\tilde{V}]-i\omega_0,\\ \lambda_3^b&=-\gamma-\frac{w}{2}-i(\omega_0+s_{12}),\\ \lambda_4^b&=-\frac{3}{2}w-i(\omega_0-s_{12}), \end{split}$$ with $\tilde{V}=\sqrt{(w^2+\gamma^2+6w\gamma-4s_{12}^2)+i4s_{12}(w-\gamma)}$. Here we can find two EPs: one at $s_{12}=0$ and $w/\gamma=1$, where $\lambda_3^b=\lambda_4^b$ and their respective eigenvectors coalesce, and the other at $s_{12}/\gamma=\sqrt{2}$ and $w/\gamma=1$, where $\lambda_1^b$ and $\lambda_2^b$ coalesce. The behavior of the EPs for $w/\gamma=1$ is shown in Fig. \[EP\_s12\], in which, as mentioned in the main text, up to three EPs appear as the coupling and the detuning are varied. Finally, notice that for $s_{12}=0$ and $w/\gamma=2/3$ we have $\lambda_4^b=\lambda_2^b$, but this degeneracy is a trivial one and does not entail any coalescence, as can be seen by looking at the eigenvector multiplicity across this point. In Fig. \[eigs\_fig\] we show typical eigenvalue trajectories in the absence of coalescence, varying different parameters of the system. We highlight how the branching behavior of Figs. \[EP\_w\] and \[EP\_d\] disappears in the absence of EPs. ![Eigenfrequencies (a,b) and decay rates (c,d) varying $\delta/\gamma$ (a,c), or $w/\gamma$ (b,d). In both cases $\omega_0/\gamma=20$ and $s_{12}/\gamma=1$, while in (a,c) $w/\gamma=0.25$ and in (b,d) $\delta/\gamma=0.5$.[]{data-label="eigs_fig"}](F10){width="0.9\columnwidth"} EPs in $\mathcal{L}_a$ ---------------------- In this section we show an example of an EP in $\mathcal{L}_a$. In this sector and for the case $w\neq0$ and $\delta=0$ there are three eigenvalues with simple expressions: $$\label{eigs_a1} \begin{split} \lambda^a_1&=0,\\ \lambda^a_2&=-(w+\gamma)-2is_{12},\\ \lambda^a_3&=-(w+\gamma)+2is_{12}, \end{split}$$ while the remaining three are roots of the third-order equation: $$\label{eigs_a2} \begin{split} \lambda^3+4\lambda^2(w+\gamma)+\lambda(5w^2+10w\gamma+4\gamma^2)\\ +2w^3+6w^2\gamma+8w\gamma^2=0. \end{split}$$ Notice that here the eigenvalues are not ordered. Without the need to find the solutions of Eq. (\[eigs\_a2\]), we can readily obtain important information. First, notice that for $w=0$ there is a second zero eigenvalue, in addition to $\lambda^a_1$, and thus the stationary state is not unique. In fact, for $\delta=w=0$ we have shown that there are purely imaginary eigenvalues in $\mathcal{L}_{b(c)}$, which represent non-decaying oscillating coherences between the two steady states and open the possibility of stationary synchronization [@manzano; @cabot_npj; @dieter1; @dieter2]. Second, notice that, as a third-order equation can have either three real roots or one real root and two complex-conjugate ones, the corresponding branching of eigenvalues resembles what has been discussed for $\mathcal{L}_{b(c)}$, and thus there could be an EP at the branching point. This turns out to be the case, as we show in Fig. \[EpLa\]: at the point where two roots become complex, the corresponding eigenvectors become parallel. 
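The location of this branching point can be estimated directly from the cubic (\[eigs\_a2\]); a minimal Python sketch (NumPy, $\gamma=1$; it only reports the first value of $w/\gamma$ at which a complex-conjugate pair of roots appears, while the coalescence of the corresponding eigenvectors is the point demonstrated in Fig. \[EpLa\]) is:

```python
import numpy as np

# Scan Eq. (eigs_a2) in w/gamma (gamma = 1) and locate where two of its three
# roots first become a complex-conjugate pair.
gamma = 1.0
w_grid = np.linspace(0.01, 3.0, 3000)
has_complex = []
for w in w_grid:
    coeffs = [1.0,
              4*(w + gamma),
              5*w**2 + 10*w*gamma + 4*gamma**2,
              2*w**3 + 6*w**2*gamma + 8*w*gamma**2]
    roots = np.roots(coeffs)
    has_complex.append(np.max(np.abs(roots.imag)) > 1e-6)
has_complex = np.array(has_complex)

w_branch = w_grid[np.argmax(has_complex)] if has_complex.any() else None
print("first w/gamma with a complex-conjugate pair of roots:", w_branch)
```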
![(a) Imaginary part of the eigenvalues (eigenfrequencies) of $\mathcal{L}_a$, varying $w/\gamma$, for $\delta/\gamma=0$, $s_{12}/\gamma=1$ and $\omega_0/\gamma=20$. In solid red the pair of eigenvalues that coalesce. (b) The real part of the corresponding eigenvalues (decay rates). (c) Product of the corresponding pair of eigenvectors that coalesce. Notice that not all eigenvalues are visible, as we have adjusted the range of the plots to display clearly the EP.[]{data-label="EpLa"}](F11){width="0.9\columnwidth"} Dynamics of $\langle \hat{\sigma}_j^x(t)\rangle$ ------------------------------------------------ Here we write down the formal solution for the dynamics of $\langle \hat{\sigma}_j^x(t)\rangle$ in terms of coefficients that depend on the eigenvalues and eigenvectors of $\mathcal{L}_{b(c)}$. Notice that as it depends on the diagonalization of $\mathcal{L}$, it is not valid at an EP (see for instance Ref. [@Longhi1]). As the eigenspectrum of the system cannot, in general, be obtained analytically, the following solution needs to be complemented by the numerical calculation of its coefficients. The semi-analytical solution is obtained proceeding as follows [@Bellomo]. We first notice that the density matrix at any time can be written as[^2] $$|\rho(t)\rrangle=\sum_{\mu}\sum_{k} p^\mu_{0k}|\tau_k^\mu\rrangle e^{\lambda_k^\mu t}$$ where the initial condition is encoded in the coefficients $p^\mu_{0k}$ with $\mu \in\{a,b,c,d,e\}$, defined as the overlap of $\hat{\rho}(0)$ with the right (left) eigenvectors of the Liouvillian $|\tau^\mu_k(\bar{\tau}^\mu_k)\rrangle$, i.e. $p^\mu_{0k}=\llangle \bar{\tau}^\mu_k|\rho(0)\rrangle/\llangle \bar{\tau}^\mu_k|\tau^\mu_k\rrangle$. Then from the definition of expected value we obtain $$\langle \hat{\sigma}_j^x(t)\rangle=\text{Tr}(\hat{\sigma}_j^x\hat{\rho}(t))=\sum_{\mu}\sum_{k} p^\mu_{0k} \langle\tau_k^\mu\rangle_{xj}e^{\lambda_k^\mu t},$$ with $\langle\tau_k^\mu\rangle_{xj}=\llangle \sigma_j^x|\tau_k^\mu\rrangle$ and, invoking the block structure of the Liouvillian, we find that $\langle\tau_k^\mu\rangle_{xj}$ are nonzero only for $\mu=b,c$. Finally, as $\mathcal{L}_c=\mathcal{L}_b^*$, then $\lambda^c_k=\lambda^{b*}_k$, $\langle\tau_k^c\rangle_{xj}=\langle\tau_k^b\rangle_{xj}^*$ and $p^c_{0k}=p^{b*}_{0k}$. Thus the formal solution can be written just in terms of $\mu=b$ as $$\label{solution} \langle \hat{\sigma}_j^x(t)\rangle=\sum_{k=1}^4 2|p^b_{0k}\langle\tau_k^b\rangle_{xj}|e^{\text{Re}(\lambda_k^b)t}\cos[\text{Im}(\lambda_k^b)t+\psi_{k,xj}^b],$$ with $\psi_{k,xj}^b=\text{arg}(p^b_{0k}\langle\tau_k^b\rangle_{xj})$. Synchronization measure {#appB} ======================= In this section we present the measure that we use to assess the presence of synchronization, which consists in a correlation function that quantifies the degree of similitude between two temporal trajectories [@SyncRev1; @SyncRev2]. In particular, these trajectories correspond to local observables of each system, as for instance $A_1(t)=\langle \hat{\sigma}^x_1(t)\rangle$ and $A_2(t)=\langle \hat{\sigma}^x_2(t)\rangle$, for some particular parameter choice and initial condition. The corresponding correlator is the Pearson factor defined as: $$\label{SyncMeasure} \mathcal{C}_{A_1(t),A_2(t)}(\Delta t)=\frac{\int_t^{t+\Delta t}ds[A_1(s)-\bar{A}_1][A_2(s)-\bar{A}_2] }{\sqrt{\prod_{j=1}^2 \int_t^{t+\Delta t}ds[A_j(s)-\bar{A}_j]^2}},$$ with $\bar{A}_j=\frac{1}{\Delta t}\int_t^{t+\Delta t}ds A_j(s)$. Then $\mathcal{C}_{A_1(t),A_2(t)}(\Delta t)\in[-1,1]$ by definition. 
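A direct numerical transcription of this definition, together with the delay-maximized variant $\mathcal{C}_{\text{max}}$ discussed in the text just below, is sketched here (Python/NumPy; the damped toy trajectories are illustrative stand-ins for the actual master-equation solutions, and the window $\Delta t=1.2/\gamma$ and delay range $\delta\tau=0.35/\gamma$ are the values quoted in the captions of Figs. \[traj1\] and \[traj3\]):

```python
import numpy as np

def pearson_window(A1, A2, t, t0, dt):
    """Windowed Pearson factor of Eq. (SyncMeasure) on the window [t0, t0+dt]."""
    m = (t >= t0) & (t <= t0 + dt)
    a1, a2 = A1[m] - A1[m].mean(), A2[m] - A2[m].mean()
    return np.sum(a1*a2)/np.sqrt(np.sum(a1**2)*np.sum(a2**2))

def pearson_max(A1, A2, t, t0, dt, dtau):
    """Delay-maximized Pearson factor: maximize over shifts tau in [0, dtau]."""
    step = t[1] - t[0]
    nmax = int(round(dtau/step))
    vals = [pearson_window(A1, np.roll(A2, -n), t, t0, dt) for n in range(nmax + 1)]
    return max(vals)

# toy trajectories: two damped oscillations with a constant pi/3 phase difference
t = np.linspace(0.0, 20.0, 4001)
A1 = np.exp(-0.05*t)*np.cos(20*t)
A2 = np.exp(-0.05*t)*np.cos(20*t + np.pi/3)

print("C     =", pearson_window(A1, A2, t, t0=5.0, dt=1.2))
print("C_max =", pearson_max(A1, A2, t, t0=5.0, dt=1.2, dtau=0.35))
```

For a constant locked phase difference the bare indicator stays below one, while $\mathcal{C}_{\text{max}}$ approaches one, as expected from the discussion below.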
This correlator is a function of time with a time window $\Delta t$, which for perfect synchronization or anti-phase synchronization is known to take the values 1 or -1, respectively. However, an important drawback is that it is not sensitive to synchronization at other phase differences. For this reason, and in order to assess the emergence of synchronization with arbitrary [*locked*]{} phase differences, we consider the time-delayed, maximized Pearson factor. This is defined as $\mathcal{C}_{\text{max}}=\text{max}\big[\mathcal{C}_{A_1(t),A_2(t+\tau)}(\Delta t) \big]_{\tau\in[0,\delta t]}$, or, in words, it is the maximum value that the Pearson factor takes when considering two time-delayed trajectories with a delay time in the range 0 to $\delta t$. This measure takes the value 1 for perfect synchronization. Notice that in this case the optimal $\tau$ yields the locked phase difference between the synchronized trajectories. At this point we should remark that there are no universally prescribed values for $\delta t$ and $\Delta t$; rather, there is a qualitative recipe for them to be meaningful: $\delta t$ should be of the order of one period of the synchronous oscillation, and $\Delta t$ of the order of a few periods. Correlation spectrum for $w/\gamma=0$ {#appC} ===================================== In this section we outline the main steps involved in computing two-time correlations of the type $\langle\hat{\sigma}_{j}^-(t+\tau)\hat{\sigma}^+_{k}(t)\rangle$ in the stationary state of the system, that is $\langle\hat{\sigma}_{j}^-(\tau)\hat{\sigma}^+_{k}(0)\rangle_{ss}=\text{lim}_{t\to\infty}\langle\hat{\sigma}_{j}^-(t+\tau)\hat{\sigma}^+_{k}(t)\rangle$. In the absence of pumping, the stationary state of the system is the vacuum, $\rho_{ss}=|gg\rangle\langle gg|$. Using the quantum regression theorem [@Carmichael] we have $$\begin{aligned} \langle \hat{\sigma}_j^-(\tau)\hat{\sigma}_k^+(0)\rangle_{ss}&=\text{Tr}\big( \hat{\sigma}_j^-e^{\mathcal{L}\tau}(\hat{\sigma}_k^+|gg\rangle\langle gg|)\big)\nonumber\\ &=\text{Tr}\big( \hat{\sigma}_j^-e^{\mathcal{L}_b\tau}(\hat{\sigma}_k^+|gg\rangle\langle gg|)\big),\end{aligned}$$ for $\tau\geq0$. In the second equality we have used the fact that $\hat{\sigma}_k^+|gg\rangle\langle gg|$ yields either $|eg\rangle\langle gg|$ or $|ge\rangle\langle gg|$, whose dynamics is governed by $\mathcal{L}_b$. Moreover, as $w/\gamma=0$ and this type of initial condition belongs to the one-excitation sector, the dynamics of these correlations can be obtained by working entirely within that sector. Thus, considering a more general initial condition of this type, we have that $e^{\mathcal{L}_b\tau}(\rho_{eggg}(0)|eg \rangle\langle gg|+\rho_{gegg}(0)|ge \rangle\langle gg|)=\rho_{eggg}(\tau)|eg \rangle\langle gg|+\rho_{gegg}(\tau)|ge \rangle\langle gg|$, where these amplitudes follow a system of equations, given by $\mathcal{L}_b$, that reads as $$\begin{split} \partial_\tau \rho_{eggg}(\tau)=-\big[\frac{\gamma}{2}+i(\omega_0+\frac{\delta}{2})\big]\rho_{eggg}(\tau)-(\frac{\gamma}{2}+is_{12})\rho_{gegg}(\tau),\\ \partial_\tau \rho_{gegg}(\tau)=-\big[\frac{\gamma}{2}+i(\omega_0-\frac{\delta}{2})\big]\rho_{gegg}(\tau)-(\frac{\gamma}{2}+is_{12})\rho_{eggg}(\tau). 
\end{split}$$ The solution in the Laplace domain, $\rho_{xxgg}(s)=\int_0^\infty \rho_{xxgg}(\tau)e^{-s\tau}d\tau$, is readily obtained $$\label{laplace_sol} \begin{split} \rho_{eggg}(s)=\frac{[s+\gamma/2+i(\omega_0-\delta/2)]\rho_{eggg}(0)-(\gamma/2+is_{12})\rho_{gegg}(0)}{(s-\lambda_3^b)(s-\lambda_4^b)},\\ \rho_{gegg}(s)=\frac{[s+\gamma/2+i(\omega_0+\delta/2)]\rho_{gegg}(0)-(\gamma/2+is_{12})\rho_{eggg}(0)}{(s-\lambda_3^b)(s-\lambda_4^b)}, \end{split}$$ where the poles correspond to two of the eigenvalues given in Eq. (\[eigs\_b1\]). Notice that for $s_{12}=0$ there is an EP at $\delta=\gamma$ but, in contrast to Eq. (\[solution\]), this solution remains valid at the EP, as it is not written in terms of the eigenvectors of $\mathcal{L}_b$. Moreover, the EP appears as a double pole, with the direct consequence of anomalous decay dynamics at this point, in which the exponentials acquire polynomial corrections in time (see also Ref. [@Cabot_EPL]). We can consider collective measurements or individual ones, each case corresponding to different linear combinations of the above general results. For instance, for the collective correlation function associated with $\hat{L}=(\hat{\sigma}_1^-+\hat{\sigma}_2^-)/\sqrt{2}$, we have $\langle \hat{L}(\tau)\hat{L}^\dagger(0)\rangle_{ss}=(\rho_{eggg}(\tau)+\rho_{gegg}(\tau))/\sqrt{2}$ with the initial condition $\rho_{eggg}(0)=1/\sqrt{2}$ and $\rho_{gegg}(0)=1/\sqrt{2}$. Otherwise, considering only the initial excitation of one of the qubits, we have $\langle\hat{\sigma}_{1}^-(\tau)\hat{\sigma}^+_{1}(0)\rangle_{ss}=\rho_{eggg}(\tau)$ and $\langle\hat{\sigma}_{2}^-(\tau)\hat{\sigma}^+_{2}(0)\rangle_{ss}=\rho_{gegg}(\tau)$ for either $\rho_{eggg}(0)=1$ and $\rho_{gegg}(0)=0$, or the other way around. In general we will be interested in the Fourier transform or spectrum of these correlations, i.e. $$\begin{aligned} \label{out_spec} \mathcal{S}_{\hat{o}\hat{o}^\dagger}(\omega)&=&\int_{-\infty}^\infty d\tau \,e^{-i\omega \tau} \langle\hat{o}(\tau)\hat{o}^\dagger\rangle_{ss}\nonumber\\ &=&2\text{Re}\bigg\{\int_{0}^\infty d\tau \,e^{-i\omega \tau} \langle\hat{o}(\tau)\hat{o}^\dagger\rangle_{ss} \bigg\},\end{aligned}$$ where $\hat{o}$ stands for either $\hat{\sigma}_j^-$ or $\hat{L}$. The second equality in (\[out\_spec\]) follows from the fact that in the stationary state $\langle\hat{o}(-\tau)\hat{o}^\dagger\rangle_{ss}= \langle\hat{o}\hat{o}^\dagger(\tau)\rangle_{ss}$, and moreover for these correlations $\langle\hat{o}\hat{o}^\dagger(\tau)\rangle_{ss}=\langle\hat{o}(\tau)\hat{o}^\dagger\rangle_{ss}^*$. Finally, notice that these Fourier-transformed correlations can be written in terms of the solutions in the Laplace domain as combinations of the terms $2\text{Re}[\rho_{eggg}(s=i\omega)]$ and $2\text{Re}[\rho_{gegg}(s=i\omega)]$. [99]{} H.-P. Breuer and F. Petruccione, *The Theory of Open Quantum Systems* (Oxford University Press, New York, 2002). F. Galve, G. L. Giorgi, and R. Zambrini, in [*Lectures on General Quantum Correlations and their Applications*]{} (Eds.: F. Fanchini, D. Soares Pinto, G. Adesso), Springer, Cham, CH 2017, pp. 393-420. G. L. Giorgi, F. Galve, G. Manzano, P. Colet and R. Zambrini, Phys. Rev. A [**85**]{}, 052101 (2012). G. Manzano, F. Galve, G. L. Giorgi, E. Hernandez-Garcia, and R. Zambrini, Sci. Rep. **3**, 1439 (2013). A. Cabot, F. Galve, V. M. Eguíluz, K. Klemm, S. Maniscalco, and R. Zambrini, npj Quantum Inf. **4**, 57 (2018). G. L. Giorgi, F. Plastina, G. Francica, and R. Zambrini, Phys. Rev. A **88**, 042115 (2013). S. Siwiak-Jaszek and A. 
Olaya-Castro, Faraday Discuss., 216, 38 (2019) S. Sonar, M. Hajdušek, M. Mukherjee, R. Fazio, V. Vedral, S. Vinjanampathy, and L. Kwek, Phys. Rev. Lett. **120**, 163601 (2018). A. Mari, A. Farace, N. Didier, V. Giovannetti, and R. Fazio, Phys. Rev. Lett. **111**, 103605 (2013). M. Ludwig and F. Marquardt, Phys. Rev. Lett. [**111**]{}, 073603 (2013). A. Cabot, F. Galve, and R. Zambrini, New J. Phys. [**19**]{}, 113007 (2017). C. D. Tilley, C. K. Teoh, and A. D. Armour, New J. Phys. **20**, 113002 (2018). S. Walter, A. Nunnenkamp, and C. Bruder, Ann. Phys. (Berlin) **527**, 131 (2015). T. E. Lee and H. R. Sadeghpour, Phys. Rev. Lett. **111**, 234101 (2013). C. D.-Tilley and A. D. Armour, Phys. Rev. A, **94**, 063819 (2016). G. L. Giorgi, F. Galve, and R. Zambrini, Phys. Rev. A **94**, 052121 (2016). B. Bellomo, G. L. Giorgi, G. M. Palma and R. Zambrini, Phys. Rev. A [**95**]{}, 043807 (2017). A. Cabot, G. L. Giorgi, F. Galve and R. Zambrini, Phys. Rev. Lett. [**123**]{}, 023604 (2019). M. Xu, D. A. Tieri, E. C. Fine, J. K. Thompson and M. J. Holland, Phys. Rev. Lett. [**113**]{}, 154101 (2014). B. Zhu, J. Schachenmayer, M. Xu, F. Herrera, J. G. Restrepo, M. J. Holland, and A. M. Rey, New J. Phys. **17**, 083063 (2015). G. L. Giorgi, A. Cabot and R. Zambrini, in [*Advances in Open Systems and Fundamental Tests of Quantum Mechanics*]{} (Eds.: B. Vacchini, H.-P. Breuer, A. Bassi), Springer, Cham, CH 2019, pp. 73-89. B. Buča, J. Tindall, and D. Jaksch, Nat. Comm. **10**, 1730 (2018). J. Tindall, C. S. Munoz, B. Buča, and D. Jaksch, arXiv:1907.12837 W. D. Heiss, J. Phys. A: Math. Theor. [**45**]{}, 444016 (2012). C. M. Bender and S. Boettcher, Phys. Rev. Lett. [**80**]{}, 5243 (1998). R. El-Ganainy, K. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, Nat. Phys **14**, 11 (2018). L. Feng, R. El-Ganainy, and L. Ge, Nat. Photon. **11**, 752 (2017). S. Longhi, EPL **120**, 64001 (2017). M. A. Miri and A. Alú, Science **363**, 7709 (2019). Ş. K. Özdemir, S. Rotter, F. Nori, and L. Yang, Nat. Mater. **18**, 783 (2019). A. Cabot, G. L. Giorgi, S. Longhi and R. Zambrini, EPL [**127**]{}, 20001 (2019). S. Longhi, Phys. Rev. A [**98**]{}, 022134 (2018). A. Ghatak and T. Das J. Phys.: Condens. Matter **31**, 263001 (2019) W. Chen, Ş. K. Özdemir, G. Zhao, J. Wiersig, and L. Yang, Nature **548**, 192 (2017). S. Longhi, Opt. Lett. **43**, 2929 (2018). B. Peng, S. K. Ozdemir, M. Liertzer, W. Chen, J. Kramer, H. Yilmaz, J. Wiersig, S. Rotter, and L. Yang, Proc. Natl. Acad. Sci. USA **113**, 6845 (2016). P. Miao, Z. Zhang, J. Sun, W. Walasik, S. Longhi, N. M. Litchinitser, and L. Feng, Science **353**, 464 (2016). S. Longhi and L. Feng, Photon. Res. **5**, B1 (2017). H. Hodaei, M.-A. Miri, A. U. Hassan, W. E. Hayenga, M. Heinrich, D. N. Christodoulides, M. Khajavikhan, Laser Photon. Rev. **10**, 494 (2016). \] R. Fleury, D. Sounas, and A. Alú, Nat. Commun. **6**, 5905 (2015). K. Ding, G. Ma, M. Xiao, Z. Q. Zhang,and C. T. Chan, Phys. Rev. X **6**, 021007 (2016). C. Shi, M. Dubois, Y. Chen, L. Cheng, H. Ramezani, Y. Wang, and X. Zhang, Nat. Commun. **7**, 11110 (2016). X.-Y. Lu, H. Jing, J.-Y. Ma, and Y. Wu, Phys. Rev. Lett. **114**, 253601 (2015). H. Xu, D. Mason, L. Jiang, and J. Harris, Nature **537**, 80 (2016). E. Verhagen and A. Alú, Nat. Phys. **13**, 922 (2017). A. Goban, C. L. Hung, J. D. Hood, S.-P. Yu, J. A. Muniz, O. Painter, and H. J. Kimble, Phys. Rev. Lett. [**115**]{}, 063601 (2015). P. Samutpraphoot, T. orđević, P. L. Ocola, H. Bernien, C. Senko, V. Vuletić, M. D. 
Lukin arXiv:1909.09108 B. Casabone, K. Friebe, B. Brandstätter, K. Schüppert, R. Blatt, and T. E. Northup, Phys. Rev. Lett. [**114**]{}, 023602 (2015). R. E. Evans, M. K. Bhaskar, D. D. Sukachev, C. T. Nguyen, A. Sipahigil, M. J. Burek, B. Machielse, G. H. Zhang, A. S. Zibrov, E. Bielejec, H. Park, M. Lončar, and M. D. Lukin, Science [**362**]{}, 662 (2018). A. F. van Loo, A. Fedorov, K. Lalumière, B. C. Sanders, A. Blais, and A. Wallraff, Science [**342**]{}, 1494 (2013). J. A. Mlynek, A. A. Abdumalikov, C. Eichler, and A. Wallraff, Nat. Commun. [**5**]{}, 5186 (2014). F. Galve, A. Mandarino, M. G. A. Paris, C. Benedetti, and R. Zambrini, Sci. Rep. [**7**]{}, 42050 (2017); F. Galve and R. Zambrini, Phys. Rev. A [**97**]{}, 033846 (2018). A. González-Tudela and J. I. Cirac, Phys. Rev. A [**96**]{}, 043811 (2017); A. González-Tudela and J. I. Cirac, Phys. Rev. Lett. [**119**]{}, 143602 (2017). F. Le Kien, S. Dutta Gupta, K. P. Nayak, and K. Hakuta, Phys. Rev. A [**72**]{}, 063815 (2005). A. Asenjo-Garcia, J. D. Hood, D. E. Chang, and H. J. Kimble, Phys. Rev. A [**95**]{}, 033818 (2017). K. Lalumière, B. C. Sanders, A. F. van Loo, A. Fedorov, A. Wallraff, and A. Blais, Phys. Rev. A [**88**]{}, 043806 (2013). I. Marzoli, J. I. Cirac, R. Blatt, and P. Zoller, Phys. Rev. A [**49**]{}, 2771 (1994). M. Cattaneo, G. L. Giorgi, S. Maniscalco and R. Zambrini, arXiv preprint arXiv:1906.08893 M. Cattaneo, G. L. Giorgi, S. Maniscalco and R. Zambrini, arXiv preprint arXiv:1911.01836 G. Morigi, J. Eschner, and C. H. Keitel, Phys. Rev. Lett. [**85**]{}, 4458 (2000); G. Morigi, Phys. Rev. A [**67**]{}, 033402 (2003). H. J. Carmichael, [*Statistical Methods in Quantum Op- tics 1: Master Equations and Fokker-Planck Equations*]{}, Vol. 1 (Springer, Berlin 1998) pp. 19–28. [^1]: This kind of vectorization mapping, $\text{vec}(\cdot)$, transforms the density matrix $\hat{\rho}$ to a column vector $|\rho\rrangle=\text{vec}(\hat{\rho})$ by arranging consecutively its rows, while a product of operators transforms as $\text{vec}(\hat{o}_1\hat{\rho}\hat{o}_2)=(\hat{o}_1\otimes \hat{o}_2^\top)\text{vec}(\hat{\rho})$. [^2]: The identity in the Hilbert-Schmidt space can be written as $\mathcal{I}=\bigoplus_\mu\sum_k\frac{|\tau_k^\mu\rrangle\llangle \bar{\tau}_k^\mu|}{\llangle \bar{\tau}^\mu_k|\tau^\mu_k\rrangle}$ when $\mathcal{L}$ is diagonalizable.
[**The strong coupling from the revised ALEPH data for hadronic $\tau$ decays** ]{} \ [ABSTRACT]{}\ > We apply an analysis method previously developed for the extraction of the strong coupling from the OPAL data to the recently revised ALEPH data for non-strange hadronic $\tau$ decays. Our analysis yields the values $\a_s(m_\tau^2)=0.296\pm 0.010$ using fixed-order perturbation theory, and $\a_s(m_\tau^2)=0.310\pm 0.014$ using contour-improved perturbation theory. Averaging these values with our previously obtained values from the OPAL data, we find $\a_s(m_\tau^2)=0.303\pm 0.009$ and $\a_s(m_\tau^2)=0.319\pm 0.012$, respectively. We present a critique of the analysis method employed previously, for example in analyses by the ALEPH and OPAL collaborations, and compare it with our own approach. Our conclusion is that non-perturbative effects limit the accuracy with which the strong coupling, an inherently perturbative quantity, can be extracted at energies as low as the $\tau$ mass. Our results further indicate that systematic errors on the determination of the strong coupling from analyses of hadronic $\tau$-decay data have been underestimated in much of the existing literature. \[intro\] Introduction ====================== Recently, Ref. [@ALEPH13], for the ALEPH collaboration, updated and revised previous ALEPH results for the non-strange vector ($V$) and axial vector ($A$) spectral distributions obtained from measurements of hadronic $\tau$ decays. In particular, Ref. [@ALEPH13] corrects a problem, uncovered in Ref. [@Tau10], in the publicly posted 2005 and 2008 versions of the correlations between different energy bins.[^1] The corrected data supersede those originally published by the ALEPH collaboration [@ALEPH; @ALEPH08]. One of the hadronic quantities of interest that can be extracted from these data is the strong coupling $\a_s(m_\tau^2)$ at the $\tau$ mass, through the use of Finite-Energy Sum Rules (FESRs) [@shankar], as advocated long ago [@Braaten88; @BNP]. Both the ALEPH and OPAL [@OPAL] collaborations have done so by applying an analysis strategy, developed in Refs. [@BNP; @DP1992], in which small, but non-negligible, non-perturbative effects were estimated using a truncated form of the operator product expansion (OPE). A feature of the particular truncation scheme employed is the assumption that not only contributions which violate quark-hadron duality, but also OPE contributions of dimension $D>8$ unsuppressed by non-leading powers of $\alpha_s$, can be safely neglected. Given the goal of extracting $\a_s(m_\tau^2)$ with the best possible accuracy, these features of what we will refer to as the “standard analysis” have been questioned, starting with the work of Refs. [@MY08; @CGP]. In these works, it was argued that both the OPE truncation to terms with $D\le 8$ and the neglect of violations of quark-hadron duality lead to additional, numerically non-negligible, systematic uncertainties not included in the errors on $\a_s(m_\tau^2)$ and the OPE condensates obtained from the standard-analysis approach. In order to remedy this situation, in Refs. [@alphas1; @alphas2], we developed a new analysis strategy designed to take both OPE and duality-violating (DV) non-perturbative effects consistently into account. This strategy was then successfully applied to the OPAL data [@alphas1; @alphas2]. In the present article, we apply this analysis strategy to the corrected ALEPH data, and compare our results to those obtained from the OPAL data in Ref. 
[@alphas2] as well as to those of the recent re-analysis presented in Ref. [@ALEPH13]. The calculation of the order-$\a_s^4$ term [@PT] in the perturbative expansion of the Adler function in 2008 led to a renewed interest in the determination of the strong coupling from hadronic $\tau$ decays, with many attempts to use this new information on the theory side of the relevant FESRs in order to sharpen the extraction of $\a_s(m_\tau^2)$ from the data [@ALEPH13; @ALEPH08; @MY08; @alphas1; @alphas2; @PT; @BJ; @Menke; @CF; @DM; @Cetal]. Since the perturbative series converges rather slowly, different partial resummation schemes have been considered, leading to variations in the obtained results. The majority of these post-2007 updates (Refs. [@ALEPH13; @ALEPH08; @PT; @BJ; @Menke; @CF; @DM; @Cetal]), however, were carried out assuming that the standard-analysis treatment of non-perturbative effects was essentially correct, with none of the references in this subset, with the exception of Refs. [@ALEPH13; @ALEPH08], redoing the analysis starting from the underlying experimental data (the emphasis, instead, being on the merits of different resummation schemes for the perturbative expansion). Reference [@MY08], which did revisit the determination of the higher-$D$ OPE contributions and performed a more careful treatment of them, did not, however, include DV contributions in its analysis framework. While its results were tested for self-consistency, the absence of a representation of DV effects meant no estimate of the residual systematic error associated with their neglect was possible. The only articles to incorporate both the improved treatment of higher-$D$ OPE contributions and an implementation of a physically motivated representation of DV effects were those of Refs. [@alphas1; @alphas2], which, due to the problem with the then-existing ALEPH covariance matrices, were restricted to analyzing OPAL data. Our goal in this article is to reconsider the treatment of non-perturbative effects employing the newly released ALEPH data, which have significantly smaller errors than the OPAL data. We will present results for the two most popular resummation schemes for the perturbative (i.e., $D=0$ OPE) series: fixed-order perturbation theory (FOPT) and contour-improved perturbation theory (CIPT) [@CIPT], without trying to resolve the discrepancies that arise between them (for an overview of the two methods, see Ref. [@MJ]). This article is organized as follows. In Sec. \[theory\] we give a brief overview of the necessary theory, referring to Ref. [@alphas1] for more details. In Sec. \[data\] we discuss the new ALEPH data set, and check explicitly that the current publicly posted version of the correlation matrices passes the test that led to the identification of the problem with the previous version [@Tau10]. We also show the comparison of the experimental ALEPH and OPAL non-strange spectral functions. Section \[strategy\] summarizes our fitting strategy, developed in Refs. [@alphas1; @alphas2]. Sections \[fits\] and \[results\] present the details of the fits, and the results we obtain from them for $\a_s(m_\tau^2)$ and dimension 6 and 8 OPE coefficients in the $V$ and $A$ channels. We explore the $\chi^2$ landscape using the Markov-chain Monte Carlo code [hrothgar]{} [@hrothgar], which in the case of the OPAL data proved useful in uncovering potential ambiguities. 
Also included is an estimate for the total non-perturbative contribution to the ratio of non-strange hadronic and electronic $\tau$ branching fractions. In Sec. \[results\] we check how well the two Weinberg sum rules [@SW] and the sum rule for the electro-magnetic pion mass difference [@EMpion] are satisfied by our results. Finally, in Sec. \[ALEPH\], we present a critical discussion of the standard analysis employed in Refs. [@ALEPH13; @ALEPH; @ALEPH08; @OPAL], focusing on the most recent of these, described in Ref. [@ALEPH13]. We demonstrate explicitly the inconsistency of this analysis with regard to the treatment of non-perturbative effects, and conclude that, while the standard analysis approach was a reasonable one to attempt in the past, it must be abandoned in current or future determinations of $\a_s(m_\tau^2)$ from hadronic $\tau$ decay data. In our concluding section, Sec. \[conclusion\], we compare our approach with the standard-analysis method, highlighting and juxtaposing the assumptions underlying each, and summarize our results. \[theory\] Theory overview ========================== The sum-rule analysis starts from the correlation functions $$\begin{aligned} \label{correl} \P_{\m\n}(q)&=&i\int d^4x\,e^{iqx}\langle 0|T\left\{J_\m(x)J^\dagger_\n(0)\right\}|0\rangle\\ &=&\left(q_\m q_\n-q^2 g_{\m\n}\right)\P^{(1)}(q^2)+q_\m q_\n\P^{(0)}(q^2)\nonumber\\ &=&\left(q_\m q_\n-q^2 g_{\m\n}\right)\P^{(1+0)}(q^2)+q^2 g_{\m\n}\P^{(0)}(q^2)\ ,\nonumber\end{aligned}$$ where $J_\m$ stands for the non-strange $V$ or $A$ current, $\overline{u}\g_\m d$ or $\overline{u}\g_\m\g_5 d$, while the superscripts $(0)$ and $(1)$ label spin. The decomposition in the third line employs the combinations $\P^{(1+0)}(q^2)$ and $q^2\P^{(0)}(q^2)$, which are free of kinematic singularities. Defining $s=q^2=\, -Q^2$ and the spectral function $$\label{spectral} \r^{(1+0)}(s)=\frac{1}{\p}\;\mbox{Im}\,\P^{(1+0)}(s)\ ,$$ Cauchy’s theorem and the analytical properties of $\P^{(1+0)}(s)$, applied to the contour in Fig. \[cauchy-fig\], imply the FESR $$\begin{aligned} \label{cauchy} I^{(w)}_{V/A}(s_0)\equiv\frac{1}{s_0}\int_0^{s_0}ds\,w(s)\,\r^{(1+0)}_{V/A}(s) &=&-\frac{1}{2\p i\, s_0}\oint_{|s|=s_0} ds\,w(s)\,\P^{(1+0)}_{V/A}(s)\ ,\end{aligned}$$ valid for any $s_0>0$ and any weight $w(s)$ analytic inside and on the contour [@shankar]. ![image](cauchy.pdf){width="6cm"} > [[*Analytic structure of $\P^{(1+0)}(q^2)$ in the complex $s=q^2$ plane. There is a cut on the positive real axis starting at $s=q^2=4m_\p^2$ (a pole at $s=q^2=m_\p^2$ and a cut starting at $s=9m_\p^2$) for the $V$ ($A$) case. The solid curve shows the contour used in Eq. (\[cauchy\]).*]{}]{} The flavor $ud$ $V$ and $A$ spectral functions can be experimentally determined from the differential versions of the ratios, $$\label{R} R_{V/A;ud}= {\frac{\G [\tau\rightarrow ({\rm hadrons})_{V/A;ud}\n_\tau (\g ) ]} {\G [\tau\rightarrow e\bar{\n}_e \nu_\tau (\g ) ]}}\ ,$$ of the width for hadronic decays induced by the relevant current to that for the electron mode. Explicitly [@tsai71], $$\label{taukinspectral} {\frac{dR_{V/A;ud}(s)}{ds}}= 12\pi^2\vert V_{ud}\vert^2 S_{EW}\, {\frac{1}{m_\tau^2}} \left[ w_T(s;m_\tau^2) \rho_{V/A;ud}^{(1+0)}(s) - w_L(s;m_\tau^2) \rho_{V/A;ud}^{(0)}(s) \right]\ ,$$ where $S_{EW}$ is a short-distance electroweak correction and $w_T(s;s_0)=(1-s/s_0)^2(1+2s/s_0)$, $w_L(s;s_0)=2(s/s_0)(1-s/s_0)^2$. 
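Before proceeding, it may help to illustrate the FESR (\[cauchy\]) with a toy example. The Python sketch below (NumPy; not part of the analysis) uses a single-pole correlator $\P(s)=1/(\mu^2-s)$, whose spectral function is a delta function at $s=\mu^2$, together with the doubly pinched cubic weight $(1-x)^2(1+2x)$ of the form (\[weightform\]); the values of $s_0$ and $\mu^2$ are illustrative only.

```python
import numpy as np

# Toy check of Eq. (cauchy): for Pi(s) = 1/(mu2 - s), the weighted spectral
# integral is w(mu2/s0)/s0, which must equal the contour integral over |s| = s0.
s0, mu2 = 3.0, 1.2                            # GeV^2, illustrative values
w = lambda x: 1.0 - 3.0*x**2 + 2.0*x**3       # (1-x)^2 (1+2x), no linear term

N = 20000
theta = 2.0*np.pi*np.arange(N)/N              # counterclockwise contour s = s0 e^{i theta}
s = s0*np.exp(1j*theta)
integrand = w(s/s0)/(mu2 - s)*(1j*s)          # w(s) Pi(s) ds/dtheta
contour = -np.sum(integrand)*(2.0*np.pi/N)/(2.0j*np.pi*s0)

print("contour side :", contour.real)         # imaginary part vanishes up to roundoff
print("spectral side:", w(mu2/s0)/s0)
```

The two numbers agree, as they must for any weight analytic inside and on the contour.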
Apart from the pion-pole contribution, which is not chirally suppressed, $\rho_{V/A;ud}^{(0)}(s) = O[(m_d\mp m_u)^2]$, and the continuum part of $\rho_{V/A}^{(0)}(s)$ is thus numerically negligible. As a result, the spectral functions $\rho^{(1+0)}_{V/A;ud}(s)$ can be determined directly from $dR_{V/A;ud}(s)/ds$. The FESR (\[cauchy\]) can thus be studied for arbitrary $s_0$ and arbitrary analytic weight $w(s)$. From now on, we will denote the experimental version of the spectral integral on the left-hand side of Eq. (\[cauchy\]) by $I_{V/A;\rm ex}^{(w)}(s_0)$ (generically, $I_{\rm ex}^{(w)}(s_0)$) and the theoretical representation of the contour integral on the right-hand side by $I_{V/A;\rm th}^{(w)}(s_0)$ (generically, $I_{\rm th}^{(w)}(s_0)$). For large enough $|s|=s_0$, away from the positive real axis, $\P^{(1+0)}(s)$ can be approximated by the OPE $$\label{OPE} \P^{(1+0)}_{\rm OPE}(s)=\sum_{k=0}^\infty \frac{C_{2k}(s)}{(-s)^{k}}\ ,$$ with the OPE coefficients $C_{2k}$ logarithmically dependent on $s$ through perturbative corrections. The term with $k=0$ corresponds to the purely perturbative, mass-independent contributions, which have been calculated to order $\a_s^4$ in Ref. [@PT], and are the same for the $V$ and $A$ channels. The $C_{2k}$ with $k\ge 1$ are different for the $V$ and $A$ channels, and, for $k>1$, contain non-perturbative $D=2k$ condensate contributions. As in Refs. [@alphas1; @alphas2], we will neglect purely perturbative quark-mass contributions to $C_2$ and $C_4$, as they are numerically very small for the non-strange FESRs we consider in this article. For the same reason, we will neglect the $s$-dependence of the coefficients $C_{2k}$ for $k>1$. For the perturbative contribution, $C_0$, we will use the result of Ref. [@PT] and extract $\a_s(m_\tau^2)$ in the $\overline{\rm MS}$ scheme. Since the coefficient $c_{51}$ of the order-$\a_s^5$ term has not been calculated, we will use the estimate $c_{51}=283$ of Ref. [@BJ] with a 100% uncertainty. We will also employ both FOPT and CIPT resummation schemes in evaluating the truncated perturbative series. For more details on the treatment of the $D>0$ OPE contributions, we refer the reader to Ref. [@alphas1]. Perturbation theory, and in general the OPE, breaks down near the positive real $s=q^2$ axis [@PQW]. We account for this by replacing the right-hand side of Eq. (\[cauchy\]) by $$\label{split} -\frac{1}{2\p is_0}\oint_{|s|=s_0}ds\,w(s)\, \left(\P^{(1+0)}_{\rm OPE}(s)+\D(s)\right)\ ,$$ with $$\label{DVdef} \D(s)\equiv\P^{(1+0)}(s)-\P^{(1+0)}_{\rm OPE}(s)\ ,$$ where the difference $\D(s)$ accounts, by definition, for the quark-hadron duality violating contribution to $\Pi^{(1+0)}(s)$. As shown in Ref. [@CGP], Eq. (\[split\]) can be rewritten as $$\label{sumrule} I_{\rm th}^{(w)}(s_0) = -\frac{1}{2\p is_0}\oint_{|s|=s_0} ds\,w(s)\,\P^{(1+0)}_{\rm OPE}(s)-\frac{1}{s_0}\, \int_{s_0}^\infty ds\,w(s)\,\frac{1}{\p}\,\mbox{Im}\, \D(s)\ ,$$ if $\D(s)$ is assumed to decay fast enough as $s\to\infty$. The imaginary parts $\frac{1}{\p}\,\mbox{Im}\,\D_{V/A}(s)$ can be interpreted as the DV parts, $\rho_{V/A}^{\rm DV}(s)$, of the $V/A$ spectral functions. The functional form of $\D(s)$ is not known, even for large $s$, and we thus need to resort to a model in order to account for DVs. Following Refs. 
[@CGP; @CGPmodel; @CGP05],[^2] we use a model based on large-$N_c$ and Regge considerations, choosing to parametrize $\rho_{V/A}^{\rm DV}(s)$ as[^3] $$\label{ansatz} \rho_{V/A}^{\rm DV}(s)= e^{-\d_{V/A}-\g_{V/A}s}\sin{(\a_{V/A}+\b_{V/A}s)}\ .$$ This introduces, in addition to $\a_s$ and the $D\ge 4$ OPE condensates, four new parameters in each channel. As in Refs. [@alphas1; @alphas2], we will assume that Eq. (\[ansatz\]) holds for $s\ge s_{\rm min}$, with $s_{\rm min}$ to be determined from fits to the data. This, in turn, assumes that we can take $s_{\rm min}$ significantly smaller than $m_\tau^2$, i.e., that both the OPE and the [*ansatz*]{} (\[ansatz\]) can be used in some interval below $m_\tau^2$. Let us pause at this point to revisit the basic ideas underlying the DV [*ansatz*]{} (\[ansatz\]). Since there exists, as yet, no theory of DVs starting from first principles in QCD, the [*ansatz*]{} (\[ansatz\]) represents simply our best, physically motivated, guess as to an appropriate form of DV contributions to the $V$ and $A$ spectral functions. The damped oscillatory form employed is, however, far from arbitrary. First, it reflects the fact that DVs are expected to produce almost harmonic oscillations around the perturbative continuum, in line with expectations from Regge theory, in which resonances occur with equal squared-mass spacings on the relevant daughter trajectories. Second, the exponential damping factor in the [*ansatz*]{} reflects the understanding that the OPE is (at best) an asymptotic, and not a convergent, expansion. It is certainly the case that the OPE representation is more successful for Euclidean $Q^2\sim 2$ GeV$^2$ than for comparable Minkowski scales, $q^2\sim 2$ GeV$^2$, where DV contributions are clearly visible in the spectral functions. Once DVs are identified as representing the irreducible error present in this asymptotic expansion, it is natural to assume that their contribution should exhibit an exponentially suppressed dependence on $s=q^2$, as in our [*ansatz*]{} (\[ansatz\]). These qualitative expectations are also reflected in the explicit Regge- and large-$N_c$-motivated model discussed in much more detail in Refs. [@CGP; @CGPmodel; @CGP05; @russians; @MJ11]. These plausibility arguments aside, we will use the precise ALEPH data to subject the parametrization (\[ansatz\]) to non-trivial tests described in detail in Sec. \[ALEPH\]. Several considerations underlie our choice of weight functions $w(s)$. First, we will choose weight functions which are likely to be well-behaved in perturbation theory, based on the findings of Ref. [@BBJ12]. In particular, we will exclude weight functions with a term linear in $s$, and require the ones we use to include a constant term (which we will normalize to one). Second, because it is not known at which order the OPE might start to diverge (for the values of $s_0$ of interest), we wish to avoid terms in Eq. (\[OPE\]) with $D>8$, about which essentially nothing is known. That means that if we do not want to arbitrarily set the coefficients $C_D$ with $D>8$ equal to zero, our weight functions are restricted to polynomials with degree not larger than three. Combining these constraints, we are left with the form $$\label{weightform} w(s;s_0)=1+a(s/s_0)^2+b(s/s_0)^3\ .$$ This allows for at most three independent weight functions, and limits the extent to which we can use sufficiently pinched weights, i.e., weights with a (multiple) zero at $s=s_0$, which help to suppress DVs [@KM98; @DS99]. 
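To make the role of the [*ansatz*]{} concrete, the following sketch (Python) evaluates the DV term of Eq. (\[sumrule\]), $-\frac{1}{s_0}\int_{s_0}^\infty ds\,w(s)\,\rho^{\rm DV}(s)$, for a weight of the form (\[weightform\]); the DV parameter values used below are purely illustrative, not fit results.

```python
import numpy as np
from scipy.integrate import quad

def rho_DV(s, delta, gamma, alpha, beta):
    """Duality-violation ansatz of Eq. (ansatz): exp(-delta - gamma*s) * sin(alpha + beta*s)."""
    return np.exp(-delta - gamma * s) * np.sin(alpha + beta * s)

def w_poly(s, s0, a, b):
    """Weight of the form (weightform): 1 + a (s/s0)^2 + b (s/s0)^3."""
    x = s / s0
    return 1.0 + a * x**2 + b * x**3

def dv_term(s0, a, b, delta, gamma, alpha, beta):
    """DV contribution to I_th^(w)(s0), i.e. -(1/s0) * int_{s0}^inf ds w(s) rho_DV(s).
    The exponential damping makes a finite upper cutoff sufficient."""
    integrand = lambda s: w_poly(s, s0, a, b) * rho_DV(s, delta, gamma, alpha, beta)
    val, _ = quad(integrand, s0, s0 + 60.0, limit=400)
    return -val / s0

# illustrative (not fitted) DV parameters and the unpinched weight hat-w0 (a = b = 0)
print(dv_term(s0=2.5, a=0.0, b=0.0, delta=3.3, gamma=0.7, alpha=-2.4, beta=4.3))
```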
The upshot is that, if we want to exploit the $s_0$ dependence of the data (instead of fitting only at $s_0=m_\tau^2$, as was done in Refs. [@ALEPH13; @ALEPH; @ALEPH08; @OPAL]) and treat the OPE consistently, modeling DVs is unavoidable [@alphas1]. We emphasize that the $s_0$ dependence of fit results provides a crucial test of the validity of FESR fits to the data, as we will see below. As in Refs. [@alphas1; @alphas2], we choose to consider the weight functions $$\begin{aligned} \hw_0(x)&=&1\ ,\label{weights}\\ \hw_2(x)&=&1-x^2\ ,\nonumber\\ \hw_3(x)&=&(1-x)^2(1+2x)=1-3x^2+2x^3=w_T(s;s_0)\ ,\nonumber\\ x&\equiv&s/s_0\ .\nonumber\end{aligned}$$ The first choice, $\hat{w}_0$, is predicated on the fact that pinching is known to suppress DV contributions and we need at least one weight which is sufficiently sensitive to DV contributions to fix the DV parameters. The remaining two weights $\hat{w}_2$ and $\hat{w}_3$ are singly and doubly pinched, respectively. For a more detailed discussion of our choices, we refer to Ref. [@alphas1]. An important observation is that these choices for what goes into the parametrization of $I_{\rm th}^{(w)}(s_0)$ did remarkably well in the analysis of the OPAL data. It therefore makes sense to see what happens if we apply the same strategy to the ALEPH data. \[data\] The ALEPH data ======================= In this section, we discuss the revised ALEPH data, which are available from Ref. [@datahtml]. First, we perform a minor rescaling, in order to account for more precise values of some “external” quantities (, quantities not directly measured by ALEPH, but used in their analysis of the data); this is discussed in Sec. \[renormalization\], where we also specify our other inputs. Then, in Sec. \[correlations\] we apply to the corrected covariance matrices the test of Ref. [@Tau10] that led us to uncover the problem with the previously published versions, and verify that the revised covariances pass this test. Finally, we compare the $V$ and $A$ spectral functions obtained from the ALEPH data with those from the OPAL data. \[renormalization\] Data and normalization ------------------------------------------ We will use the following input values in our analysis: $$\begin{aligned} \label{input} m_\tau&=&1.77682(16)~\mbox{GeV}\ ,\\ B_e&=&0.17827(40)\ ,\nonumber\\ V_{ud}&=&0.97425(22)\ ,\nonumber\\ S_{EW}&=&1.0201(3)\ ,\nonumber\\ m_\p&=&139.57018(35)~\mbox{MeV}\ ,\nonumber\\ f_\p&=&92.21(14)~\mbox{MeV}\ .\nonumber\end{aligned}$$ Here $B_e$ is the branching fraction for the decay $\tau\to e\overline{\n}_e\n_\tau$ and we have used the result of an HFAG fit of the $\tau$ branching fractions which incorporates $\p_{\m 2}$ and $K_{\m 2}$ data and Standard Model expectations based on these data for the $\p$ and $K$ branching fractions [@hfagpimu2kmu2fit11]; $f_\pi$ is the $\pi$ decay constant. The value for $V_{ud}$ is from Ref. [@htrpp10], that for $S_{EW}$ from Ref. [@SEW], and the values for $m_\tau$, $m_\pi$ and $f_\pi$ are from the Particle Data Group [@PDG]. Only the error on $B_e$ has a significant effect in our analysis; errors on the other input quantities are too small to affect the final analysis errors in any significant way. To the best of our knowledge, Ref. [@ALEPH13] uses the values $B_e=0.17818(32)$ and $S_{EW}=1.0198$. This value for $B_e$ we infer from the ALEPH values for $R_V=1.782(9)$ [@ALEPH13] and the corresponding branching fraction $B_V=0.31747$ [@datahtml] specified in the publicly posted $V$ data file (no error quoted). 
The continuum (pion-less) axial branching fraction $B_{A,{\rm cont}}=0.19369$ with $B_e=0.17818$ translates into $R_{A,{\rm cont}}=1.08705$. From these values, and the quoted value $R_{ud}=3.475(11)$ [@ALEPH13], it follows that the ALEPH value for $R_\p$, the pion pole contribution to $R_{ud}$, is $R_\p=0.606$. However, if one employs the very precisely known value of $f_\p$ quoted above, obtained from $\p_{\m 2}$ decays, together with the quoted values for $S_{EW}$ and $V_{ud}$, one finds instead the more precisely determined expectation $R_\p=0.6101$. Using this latter value as well as the ALEPH value $R_{ud}=3.475(11)$ leads to $R_V+R_{A,{\rm cont}}=2.865$, instead of the ALEPH value $(B_V+B_{A,{\rm cont}})/B_e=(0.31747+0.19369)/0.17818=2.8688$. We employ the more precise $\p_{\m 2}$ expectation for the important $A$ channel pion-pole contribution, and take this difference into account by rescaling the $V$ and continuum $A$ non-strange spectral functions by the common factor $2.865/2.8688=0.9987$, since we have no information on whether this rescaling should affect the $V$ and $A$ channels asymmetrically. Our rescaling is thus imperfect, but it is to be noted that the effect of this rescaling lowers our value for $\a_s(m_\tau^2)$ by less than one percent, a much smaller shift than that allowed by the total error, see Sec. \[results\]. The new ALEPH data use a variable bin width, with the highest bin, number 80, centered at ${\tt sbin}(80)=3.3375$ GeV$^2$, which is above $m_\tau^2=3.1571$ GeV$^2$. The next-highest bin, number 79, is centered at ${\tt sbin}(79)=3.0875$ GeV$^2$, with a width ${\tt dsbin}(79)=0.1750$ GeV$^2$, so that also ${\tt sbin}(79)+{\tt dsbin}(79)/2>m_\tau^2$. In order to avoid using values of $s$ larger than $m_\tau^2$, we will modify these values to $$\begin{aligned} \label{binadjust} {\tt sbin}(79)&=&3.07854~\mbox{GeV}^2\ ,\\ {\tt dsbin}(79)&=&0.157089~\mbox{GeV}^2\ ,\nonumber\end{aligned}$$ so that ${\tt sbin}(79)+{\tt dsbin}(79)/2=m_\tau^2$. Finally, ALEPH provides binned spectral data for ${\tt sfm2}({\tt sbin})$, which are related to the spectral functions by $$\label{ALEPHform} {\tt sfm2}({\tt sbin})= 100\times \frac{12\p^2|V_{ud}|^2 S_{EW}B_e}{m_\tau^2}\,\D w^T({\tt sbin};m_\tau^2)\rho^{(1+0)}({\tt sbin})\ ,$$ in which $$\label{binwidthav} \D w^T({\tt sbin};m_\tau^2)=\int_{{\tt sbin}-{\tt dsbin}/2}^{{\tt sbin} +{\tt dsbin}/2}ds\, w^T(s;m_\tau^2)\ .$$ For infinitesimal ${\tt dsbin}=ds$ one has $\D w^T(s;m_\tau^2)=w^T(s;m_\tau^2)ds$, but for finite bin width we have to make a choice in how we construct moments with other weights from the spectral functions obtained from Eq. (\[ALEPHform\]). We choose to use the definition $$\label{defIex} I^{(w)}_{\rm ex}(s_0)=\sum_{{\tt sbin}\le s_0}\left(\int_{{\tt sbin}- {\tt dsbin}/2}^{{\tt sbin}+{\tt dsbin}/2}ds\, w(s;s_0)\right)\rho^{(1+0)}({\tt sbin})$$ for all moments considered in this article. \[correlations\] Correlations ----------------------------- ![image](ToyDataPlot.pdf){width="11cm"} > [[*Vector spectral function times $2\p^2$. Top panel: ALEPH data from 2008 [[@ALEPH08]]{}; bottom panel: Monte Carlo sample with 2008 covariance matrix.* ]{}]{} ![image](ALEPH14andToy2.pdf){width="11cm"} As shown in Ref. [@Tau10], there was a problem with the publicly posted 2005 and 2008 versions of the ALEPH covariance matrices. This problem, since corrected in Ref. [@ALEPH13], turns out to have resulted from an inadvertent omission of contributions to the correlations between different bins induced by the unfolding procedure. 
The problem was discovered by producing fake data sets from a multivariate gaussian distribution based on the posted ALEPH data and covariance matrices, and then comparing the resulting fake data to the actual ALEPH data. The result of this test is shown in Fig. \[test1\], which is the same as Fig. 3 of Ref. [@Tau10]. The top panel shows the experimental data taken from Ref. [@ALEPH08], the bottom panel a typical fake data set produced using the corresponding covariance matrix. The absence of the strong correlations seen in the actual data from the corresponding fake data is what signals the existence of the problem with the previous version of the ALEPH covariance matrix. Figure \[test2\] shows the result of performing the same test on the updated and corrected results reported in Ref. [@ALEPH13], the top panel again showing the actual ALEPH data and the bottom panel a typical fake data set. The fake data (red points) obviously behave much more like the corresponding real data than was the case previously.[^4] We have examined many such fake data sets with the same conclusion. A similar exercise was, of course, carried out for the $A$-channel case. \[OPAL\] Comparison with OPAL data ---------------------------------- In Fig. \[ALEPH-OPAL\] we show the vector and axial spectral functions as measured by ALEPH [@ALEPH13; @datahtml] and OPAL [@OPAL]. The normalizations of the spectral functions for both experiments have been updated to take into account modern values for relevant branching fractions; for the normalization of ALEPH data, see Sec. \[renormalization\] above, and for the normalization of OPAL data, see Sec. III of Ref. [@alphas2]. While there is in general good agreement between the ALEPH and OPAL spectral functions, a detailed inspection reveals some tension between the two, given the size of the errors, for instance in the regions below $0.5$ GeV$^2$ and around $2$ GeV$^2$ in the vector channel, with possibly anti-correlated tensions in the same regions in the axial channel. The presence of a large $D=0$, 1-loop $\alpha_s$-independent contribution in the weighted OPE integrals enhances the impact of such small discrepancies on the output $\alpha_s(m_\tau^2)$. We quantify the impact of these differences below, showing that they lead to some tension between the values for $\a_s(m_\tau^2)$ obtained from the two data sets, though the results turn out to agree within total estimated errors. ![image](ALEPH-OPAL-V.pdf){width="13cm"} ![image](ALEPH-OPAL-A.pdf){width="13cm"} > [[*Comparison of ALEPH and OPAL data for the spectral functions. Top panel: $I=1$ vector channel; bottom panel: $I=1$ continuum (pion-pole subtracted) axial channel.*]{}]{} \[strategy\] Fitting strategy ============================= As already explained in Sec. \[theory\], and in more detail in Refs. [@alphas1; @alphas2], non-pinched weights are needed in order to get a handle on the DV parameters of Eq. (\[ansatz\]). The simplest and most robust choice of weight allowing us to extract these parameters is the weight $\hw_0(x)=1$. In order to check the stability of these simple fits, we also perform simultaneous fits of the weights $\hw_0$ and $\hw_2$, and of $\hw_0$, $\hw_2$ and $\hw_3$, as in Ref. [@alphas1; @alphas2]. This gives us access to the $D=6$ and $D=8$ terms in the OPE, but also allows us to test for the consistency of the values of $\a_s(m_\tau^2)$ and the DV parameters between our different fits. 
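As an aside on the data themselves: the toy-data test of Sec. \[correlations\] above amounts, in outline, to drawing samples from a multivariate Gaussian built from the posted central values and covariance matrix and checking whether they fluctuate bin-to-bin like the real data. A minimal sketch (Python; the arrays are placeholders, not the ALEPH numbers, and this is not the actual comparison performed in Ref. [@Tau10]):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "data": mu and cov stand in for the posted spectral-function bins
# and their covariance matrix (these are NOT the real numbers).
mu  = np.array([2.0, 1.8, 1.2, 0.8, 0.6])
cov = 0.01 * (0.2 * np.ones((5, 5)) + 0.8 * np.eye(5))   # correlated toy covariance

# One "fake data" set drawn from the corresponding multivariate Gaussian
toy = rng.multivariate_normal(mu, cov)

# Strong positive bin-to-bin correlations make the toys as smooth as the data;
# their absence (as in the 2005/2008 matrices) makes the toys visibly noisier.
print(np.diff(toy), np.diff(mu))
```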
The values we obtain for $I^{(w)}_{\rm ex}(s_0)$ from the ALEPH data are highly correlated, both between different values of $s_0$, and between different weight functions. If we consider only fits using $I^{(\hw_0)}_{\rm ex}(s_0)$ for a range of $s_0$ values, it turns out that fully correlated $\chi^2$ fits are possible, but if we also include $I^{(\hw_2)}_{\rm ex}(s_0)$ and $I^{(\hw_3)}_{\rm ex}(s_0)$ in the fits, the complete correlation matrices become too singular. For fits with multiple weights, we will follow Refs. [@alphas1; @alphas2], using instead the block-diagonal “fit quality” $$\label{blockcorr} \cq^2=\sum_w\sum_{s_0^i,\, s_0^j} \left(I_{\rm ex}^{(w)}(s_0^i)-I_{\rm th}^{(w)}(s_0^i;{\vec p})\right) \left(C^{(w)}\right)^{-1}_{ij} \left(I_{\rm ex}^{(w)}(s_0^j)-I_{\rm th}^{(w)}(s_0^j;{\vec p})\right)\ ,$$ where we have made the dependence of $I_{\rm th}^{(w)}$ on the fit parameters ${\vec p}$ explicit. The matrix $C^{(w)}$ is the (block-diagonal) covariance matrix of the set of moments with fixed weight $w$ and $s_0$ running over the chosen fit window range. The sums over $s_0^i$ and $s_0^j$ are over bins $i$ and $j$, and the sum over $w$ is over $\hw_0$ and $\hw_2$, or over $\hw_0$, $\hw_2$ and $\hw_3$.[^5] The motivation for this choice is that the cross-correlations between two moments arise mainly because the weight functions used in multiple-moment fits appear to be close to being linearly dependent in practice (even though, as a set of polynomials, of course they are not). This near-linear dependence is possibly caused by the relatively large errors on the data for values of $s$ toward $m_\tau^2$, because it is primarily in this region that the weights $\hw_0$, $\hw_2$ and $\hw_3$ differ from each other. An important observation is that we can freely choose our fit quality $\cq^2$, as long as errors are propagated taking the full data correlation matrix into account. In our case, we choose to estimate fit errors for fits using Eq. (\[blockcorr\]) by propagating the data covariance matrix through a small fluctuation analysis; for details on how this is done, we refer to the appendix of Ref. [@alphas1]. We note that the fit quality $\cq^2$ does not follow a standard $\chi^2$ distribution, so that no absolute meaning can be attached to the minimum value obtained in a fit of this type. The theoretical moments $I_{\rm th}^{(w)}(s_0;{\vec p})$ are non-linear functions of (some of) the fit parameters ${\vec p}$, and it is thus not obvious what the probability distribution of the model parameters looks like. As in Ref. [@alphas2], we will therefore also explore the posterior probability distribution of the model parameters, assuming that the input data follow a multivariate gaussian distribution. In order to map out this probability distribution, we use the same Markov-chain Monte Carlo code [hrothgar]{} [@hrothgar] as was used in Ref. [@alphas2], to which we refer for more details. The distribution generated by [hrothgar]{} is proportional to $\mbox{exp}[-\cq^2({\vec p})/2]$ on the space of parameters, given the data. \[fits\] Fits ============= In this section we present our fits, leaving the discussion of $\a_s$ and other parameters obtained from these fits to Sec. \[results\]. We first present fits to moments constructed from the $V$ spectral function only, followed by fits using both the $V$ and $A$ spectral moments. 
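Before turning to the individual fits, a compact sketch (Python; generic names, not our actual fitting code) of the fit quality $\cq^2$ of Eq. (\[blockcorr\]) used throughout: each weight contributes one correlated block built from its own covariance matrix, and cross-correlations between different weights are left out by construction.

```python
import numpy as np

def Q2(moments_ex, moments_th, covs):
    """Block-diagonal fit quality of Eq. (blockcorr).

    moments_ex : list (one entry per weight w) of arrays I_ex^(w)(s0^i) over the fit window
    moments_th : list of arrays I_th^(w)(s0^i; p), evaluated at the current fit parameters
    covs       : list of covariance matrices C^(w), one block per weight
    """
    total = 0.0
    for ex, th, C in zip(moments_ex, moments_th, covs):
        r = np.asarray(ex) - np.asarray(th)
        total += r @ np.linalg.solve(C, r)      # r^T C^{-1} r for this weight's block
    return total

# toy usage with two weights and three s0 points each (all numbers fictitious)
ex  = [np.array([1.00, 0.98, 0.97]), np.array([0.80, 0.79, 0.78])]
th  = [np.array([1.01, 0.97, 0.96]), np.array([0.81, 0.78, 0.79])]
cov = [np.diag([1e-4, 1e-4, 1e-4]), np.diag([4e-4, 4e-4, 4e-4])]
print(Q2(ex, th, cov))
```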
We have considered $\chi^2$ fits to $I^{(\hw_0)}_{\rm ex}$ and combined fits using fit qualities of the form (\[blockcorr\]) to $I^{(\hw_0)}_{\rm ex}$, $I^{(\hw_2)}_{\rm ex}$, and $I^{(\hw_3)}_{\rm ex}$. Below we will show only the $\chi^2$ fits to $I^{(\hw_0)}_{\rm ex}$ and the $\cq^2$ fits to all three moments. The results from $\cq^2$ fits to the two moments $I^{(\hw_0)}_{\rm ex}$ and $I^{(\hw_2)}_{\rm ex}$ are completely consistent with these, and we therefore omit them below in the interest of brevity. As reviewed above, and discussed in much more detail in Refs. [@alphas1; @alphas2], the necessity to fit not only OPE parameters, but also DV parameters, makes it impossible to fit spectral moments for the sum of the $V$ and $A$ spectral functions. Already for $I^{(\hw_0)}_{\rm ex}$ this would entail a 9-parameter fit, and with the existing data such fits turn out to be unstable. Reference [@ALEPH13] did perform fits to moments of the $V+A$ spectral function at the price of neglecting duality violations and contributions from $D>8$ terms in the OPE; we will compare our fits with those of Ref. [@ALEPH13] in detail in Sec. \[ALEPH\] below. From Fig. \[ALEPH-OPAL\], we see that the only “feature” in the $A$ channel is the peak corresponding to the $a_1$ meson. In contrast, the $V$ channel data indicate the existence of more resonance-like features than just the $\r$ meson peak around $s=0.6$ GeV$^2$, even though the resolution is not good enough to resolve multiple resonances beyond the $\r$. If we wish to avoid making the assumption that already the lowest peak in each channel is in the asymptotic regime in which the [*ansatz*]{} (\[ansatz\]) is valid, we should limit ourselves to fits to the $V$ channel only. However, we will also present fits to the combined $V$ and $A$ channels below, and see that the results are consistent with those from fits to the $V$ channel only. In all cases, we find it necessary to include the moment $\hw_0$ in our fits in order to determine both $\a_s(m_\tau^2)$ and the DV parameters. While one might consider fits to the spectral function itself, such fits are found to be insufficiently sensitive to the parameter $\a_s(m_\tau^2)$, and hence have not been pursued.[^6] Fits involving only pinched moments such as $\hw_2$ and $\hw_3$, on the other hand, are insufficiently sensitive to the DV parameters. All our fits will thus include the spectral moments $I^{(\hw_0)}_{\rm ex}(s_0)$, either in the $V$ channel alone, or in the combined $V$ and $A$ channels. In the latter case, there is a separate set of DV parameters for each of these channels, but the fit parameter $\a_s(m_\tau^2)$ is, of course, common to both.[^7] \[V\] Fits to vector channel data --------------------------------- We begin with fits to the single moment $I^{(\hw_0)}_{\rm ex}(s_0)$, as a function of $s_{\rm min}$, with $s_{\rm min}$ defined to be the minimum value of $s_0$ included in the fit. 
$s_{\rm min}$ (GeV$^2$) $\chi^2$/dof $p$-value (%) $\alpha_s$ $\delta_V$ $\gamma_V$ $\alpha_V$ $\beta_V$ ------------------------- -------------- --------------- ------------ ------------ ------------ ------------ ----------- -- -- 1.425 33.0/21 5 0.312(11) 3.36(36) 0.66(22) -0.33(61) 3.27(33) 1.475 29.5/19 6 0.304(11) 3.32(41) 0.70(25) -1.21(73) 3.72(39) 1.500 29.5/18 4 0.304(11) 3.32(41) 0.70(25) -1.19(87) 3.71(45) 1.525 29.0/17 3 0.302(11) 3.37(43) 0.68(26) -1.49(94) 3.86(48) 1.550 24.5/16 8 0.295(10) 3.50(50) 0.62(29) -2.43(94) 4.32(48) 1.575 23.5/15 8 0.298(11) 3.50(47) 0.62(28) -2.1(1.0) 4.15(53) 1.600 23.4/14 5 0.297(12) 3.50(48) 0.62(28) -2.1(1.1) 4.16(56) 1.625 23.4/13 4 0.298(13) 3.47(50) 0.63(28) -2.0(1.2) 4.12(62) 1.675 23.1/11 2 0.301(15) 3.35(60) 0.68(31) -1.7(1.4) 3.96(70) 1.425 33.2/21 4 0.331(15) 3.20(34) 0.74(21) -0.30(61) 3.24(33) 1.475 29.5/19 6 0.320(14) 3.16(40) 0.78(24) -1.20(73) 3.70(39) 1.500 29.5/18 4 0.320(15) 3.16(40) 0.78(24) -1.19(87) 3.69(45) 1.525 28.9/17 4 0.317(14) 3.22(42) 0.75(25) -1.51(93) 3.85(48) 1.550 24.3/16 8 0.308(13) 3.36(49) 0.69(28) -2.48(93) 4.33(48) 1.575 23.3/15 8 0.311(14) 3.35(46) 0.69(27) -2.2(1.0) 4.17(52) 1.600 23.3/14 6 0.311(15) 3.36(47) 0.69(27) -2.2(1.1) 4.19(56) 1.625 23.2/13 4 0.312(16) 3.33(49) 0.70(28) -2.1(1.2) 4.15(62) 1.675 23.0/11 2 0.314(19) 3.23(58) 0.74(30) -1.8(1.5) 4.02(74) Since these are $\chi^2$ fits, one may estimate the $p$-values for these fits; they are shown in the third column of Tab. \[VVw1paper\]. We note that the $p$-values are not large, but they are not small enough to exclude the validity of our fit function based on the ALEPH data. Judged by $p$-value, the fits with $s_{\rm min}=1.55$ and $1.575$ GeV$^2$ are the best fits, and we thus take the average value of the central values for the fit parameters from these two fits as our best value, with a statistical error that is the larger of the two (noting that these are essentially equal in size). For the strong coupling, we find $$\begin{aligned} \label{ashw0} \a_s(m_\tau^2)&=&0.296(11)\ ,\qquad\mbox{(FOPT)}\ ,\\ &=&0.310(14)\ ,\qquad\mbox{(CIPT)}\ .\nonumber\end{aligned}$$ The difference between the FOPT and CIPT results reflects the well-known fact that the two prescriptions show no sign of converging to one another as the truncation order is increased [@PT; @BJ]. We observe that the $p$-value starts to decrease again from $s_{\rm min}=1.6$ GeV$^2$, indicating that the data become too sparse for an optimal fit. We investigated the sensitivity of these fits to omitting the data in up to four bins with the largest values of $s$, and found no significant difference. This is no surprise, given the errors shown in Fig. \[CIFOw0fit\]. For illustration, we show the parameter correlation matrix for the FOPT fit with $s_{\rm min}=1.55$ GeV$^2$ in Tab. \[corr\]. $\a_s$ $\d_V$ $\g_V$ $\a_V$ $\b_V$ -------- -------- -------- -------- -------- -------- $\a_s$ 1 0.600 -0.606 0.689 -0.653 $\d_V$ 0.600 1 -0.994 0.310 -0.297 $\g_V$ -0.606 -0.994 1 -0.330 0.315 $\a_V$ 0.689 0.310 -0.330 1 -0.996 $\b_V$ -0.653 -0.297 0.315 -0.996 1 ![image](CIFO_w0fit_w0_V.pdf){width="7cm"} ![image](CIFO_w0fit_spectrum_V.pdf){width="7cm"} In Fig. \[CIFOw0fit\] we show the results of CIPT and FOPT fits to $I^{(\hw_0)}_{\rm ex}(s_0)$ for $s_{\rm min}=1.55$ GeV$^2$. The left panel shows the results of the fits for the moment, the right-hand panel the OPE+DV versions of the spectral functions resulting from these fits. 
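The averaging prescription just described is trivial to restate numerically (Python; the inputs are the FOPT entries of Tab. \[VVw1paper\] at $s_{\rm min}=1.55$ and $1.575$ GeV$^2$):

```python
vals, errs = [0.295, 0.298], [0.010, 0.011]   # FOPT, s_min = 1.55 and 1.575 GeV^2
alpha_s = sum(vals) / len(vals)               # 0.2965, quoted as 0.296 in Eq. (ashw0)
stat    = max(errs)                           # 0.011, the larger of the two statistical errors
print(alpha_s, stat)
```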
![image](alphastau_chisq.pdf){width="10cm"} ![image](alphastau_delta_c.pdf){width="7cm"} ![image](gamma_delta_c.pdf){width="7cm"} As in Ref. [@alphas2], we studied the posterior probability distribution, using the same Markov-chain Monte Carlo code, [hrothgar]{} [@hrothgar]. We remind the reader that it is not obvious what this distribution looks like, even if we assume that the data errors follow a multivariate gaussian distribution. For the fits of Tab. \[VVw1paper\], this code generates points in the 5-dimensional parameter space, and computes the $\chi^2$ value associated with each of these points. These points are distributed as ${\rm exp}[-\chi^2({\vec p})/2]$, with $\vec p$ the parameter vector, and $\chi^2$ evaluated on the ALEPH data (including the full covariance matrix) and the values of the parameters at these points. In Fig. \[chi2\] we show $\chi^2$ as a function of $\a_s(m_\tau^2)$, choosing the FOPT fit with $s_{\rm min}=1.55$ GeV$^2$. Since, for each $\a_s(m_\tau^2)$, points with many different values of the other four parameters are generated stochastically, the distribution appears as the cloud shown in the figure. This distribution shows a unique minimum for the value of $\chi^2$, at approximately $\a_s(m_\tau^2)=0.295$, consistent with Tab. \[VVw1paper\]. The width of the distribution is also roughly consistent with the error of $\pm 0.010$, but we see that the distribution of points is not entirely symmetric around the minimum. There is no alternative (local) minimum, in contrast to what was found for the OPAL data [@alphas2]. We also find the parameters $\d_V$ and $\gamma_V$ to be much better constrained than was the case for the corresponding fits to the OPAL data in Ref. [@alphas2]. The distributions in the $\delta_V$–$\alpha_s(m_\tau^2)$ and $\delta_V$–$\gamma_V$ planes are shown in the left and right panels of Fig. \[contour\].[^8] Since for all other fits presented in the rest of this article the conclusions about the posterior probability distribution found with [hrothgar]{} are similar, we will refrain from showing the analogues of Figs. \[chi2\] and \[contour\] for those fits. 
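We do not reproduce [hrothgar]{} here; the sketch below (Python) is merely a generic random-walk Metropolis sampler for a distribution $\propto\exp[-\chi^2(\vec p)/2]$, with a simple quadratic stand-in for the actual moment-based $\chi^2$, meant only to illustrate the kind of exploration described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(p):
    # Stand-in for the real chi^2(alpha_s, delta_V, gamma_V, alpha_V, beta_V):
    # a quadratic form with a single minimum at p0 (illustrative numbers only).
    p0    = np.array([0.295, 3.5, 0.6, -2.4, 4.3])
    scale = np.array([0.010, 0.5, 0.3, 1.0, 0.5])
    return np.sum(((np.asarray(p) - p0) / scale) ** 2)

def metropolis(p_init, step, n_samples):
    """Sample points distributed as exp[-chi2(p)/2] via a random-walk Metropolis chain."""
    p, c = np.array(p_init, float), chi2(p_init)
    chain = []
    for _ in range(n_samples):
        prop = p + step * rng.standard_normal(p.size)
        c_prop = chi2(prop)
        # accept with probability min(1, exp[-(chi2_prop - chi2)/2])
        if c_prop <= c or rng.random() < np.exp(-(c_prop - c) / 2.0):
            p, c = prop, c_prop
        chain.append((p.copy(), c))
    return chain

chain  = metropolis([0.30, 3.0, 0.7, -2.0, 4.0],
                    step=np.array([0.005, 0.2, 0.1, 0.4, 0.2]), n_samples=5000)
alphas = np.array([p[0] for p, _ in chain])
print(alphas.mean(), alphas.std())
```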
$s_{\rm min}$ (GeV$^2$) $\cq^2$/dof $\alpha_s$ $\delta_{V}$ $\gamma_{V}$ $\alpha_{V}$ $\beta_{V}$ $10^2C_{6V}$ $10^2C_{8V}$ ------------------------- --------------- ------------ -------------- -------------- -------------- ------------- -------------- -------------- -- 1.425 106.0/71=1.49 0.305(10) 3.02(38) 0.87(24) -0.68(56) 3.43(31) -0.59(17) 0.94(29) 1.475 93.3/65=1.43 0.302(10) 3.07(44) 0.85(27) -1.41(68) 3.81(36) -0.71(16) 1.19(28) 1.500 93.2/62=1.50 0.302(10) 3.08(45) 0.85(27) -1.40(77) 3.80(40) -0.71(18) 1.19(30) 1.525 85.6/59=1.45 0.298(10) 3.21(49) 0.78(29) -1.96(78) 4.08(41) -0.79(16) 1.36(27) 1.550 76.3/56=1.36 0.295(10) 3.30(52) 0.74(30) -2.48(81) 4.33(41) -0.86(14) 1.50(24) 1.575 74.5/53=1.41 0.297(10) 3.29(51) 0.74(29) -2.25(87) 4.22(44) -0.83(16) 1.43(27) 1.600 74.2/50=1.48 0.297(11) 3.31(51) 0.73(30) -2.27(92) 4.23(47) -0.83(16) 1.44(29) 1.625 73.8/47=1.57 0.298(11) 3.28(54) 0.74(31) -2.16(99) 4.18(50) -0.81(18) 1.40(32) 1.675 72.0/41=1.76 0.299(12) 3.28(63) 0.74(34) -2.1(1.1) 4.13(57) -0.80(21) 1.37(39) 1.425 98.6/71=1.39 0.328(16) 3.17(39) 0.77(25) -0.43(61) 3.30(32) -0.60(19) 0.83(35) 1.475 89.5/65=1.38 0.319(14) 3.11(44) 0.81(27) -1.24(71) 3.72(37) -0.76(16) 1.18(31) 1.500 89.4/62=1.44 0.319(15) 3.11(44) 0.81(27) -1.20(81) 3.70(42) -0.76(18) 1.16(34) 1.525 82.1/59=1.39 0.314(14) 3.22(48) 0.77(28) -1.81(80) 4.00(42) -0.85(15) 1.37(28) 1.550 73.7/56=1.32 0.309(13) 3.28(51) 0.74(30) -2.39(82) 4.28(42) -0.93(13) 1.53(25) 1.575 71.8/53=1.35 0.311(14) 3.28(50) 0.74(29) -2.12(89) 4.15(45) -0.89(15) 1.45(28) 1.600 71.7/50=1.43 0.311(14) 3.28(51) 0.74(29) -2.16(94) 4.17(48) -0.90(15) 1.46(29) 1.625 71.5/47=1.52 0.312(15) 3.24(53) 0.75(30) -2.0(1.0) 4.11(51) -0.88(17) 1.42(34) 1.675 69.8/41=1.70 0.313(16) 3.22(63) 0.76(33) -1.9(1.2) 4.04(59) -0.86(20) 1.38(42) Next, we consider simultaneous fits to the moments $I^{(\hw_0)}_{\rm ex}(s_0)$, $I^{(\hw_2)}_{\rm ex}(s_0)$ and $I^{(\hw_0)}_{\rm ex}(s_3)$; results for the same values of $s_{\rm min}$ as before are given in Tab. \[VVwtaupaper\]. These fits are performed by minimizing $\cq^2$ as defined in Eq. (\[blockcorr\]), with correlations between different moments omitted. However, the full correlation matrix, including correlations between different moments, has been taken into account in the parameter fit error estimates shown in the table. These errors were determined by linear propagation of the full data covariance matrix; for a detailed explanation of the method, we refer to the appendix of Ref. [@alphas1]. Judging by the values of $\cq^2/$dof,[^9] again the two fits for $s_{\rm min}=1.55$ and $1.575$ GeV$^2$ are the optimal ones. Averaging parameter values between these two fits, we find $$\begin{aligned} \label{ashw023} \a_s(m_\tau^2)&=&0.296(10)\ ,\qquad\mbox{(FOPT)}\ ,\\ &=&0.310(14)\ ,\qquad\mbox{(CIPT)}\ ,\nonumber\end{aligned}$$ in excellent agreement with Eq. (\[ashw0\]). We have also considered fits involving only the two moments $I^{(\hw_0)}_{\rm ex}(s_0)$ and $I^{(\hw_2)}_{\rm ex}(s_0)$, and find results very similar those contained in Tabs. \[VVw1paper\] and \[VVwtaupaper\]. In Fig. \[CIFOw023fit\] we show the quality of the fits of Tab. \[VVwtaupaper\] for $s_{\rm min}=1.55$ GeV$^2$. ![image](CIFO_w023fit_w0_V.pdf){width="7cm"} ![image](CIFO_w023fit_w2_V.pdf){width="7cm"} ![image](CIFO_w023fit_w3_V.pdf){width="7cm"} ![image](CIFO_w023fit_spectrum_V.pdf){width="7cm"} We end this subsection with several comments. First, we see that pinching indeed serves to suppress the role of DV contributions. 
The upper right panel in Fig. \[CIFOw023fit\] shows the singly pinched $\hat{w}_2$ case and the lower left panel shows the doubly pinched $\hat{w}_3$ case. There is also a significant difference between the colored and black curves in all panels, though with the onset of this difference shifting to lower $s_0$ as the degree of pinching is increased. The existence of these differences implies that, with the errors on the ALEPH data, the presence of duality violations is evident for all three moments. This, in turn, implies that omitting duality violations from the theory side of the corresponding FESRs has the potential to produce a significant additional systematic error on $\a_s(m_\tau^2)$ (and the higher $D$ OPE coefficients) that cannot be estimated if only fits without DV parameters are attempted. We will return to this point in Sec. \[ALEPH\] below. Second, we note that the spectral function itself below $s=s_{\rm min}$ is not very well described by the curves obtained from the fits. While the form of Eq. (\[ansatz\]) constitutes a reasonable assumption for asymptotically large $s$, we do not know [*a priori*]{} what a reasonable value of $s_{\rm min}$ should be. It is clear, however, that our  works reasonably well for $s\,\gtap\, 1.5$ GeV$^2$, but that the asymptotic regime definitely does not include the region around the $\r$ peak. ------------------------- -------------- --------------- ------------ ---------- ---------- ----------- ---------- -- $s_{\rm min}$ (GeV$^2$) $\chi^2$/dof $p$-value (%) $\alpha_s$ $\d_V$ $\g_V$ $\a_V$ $\b_V$ $\d_A$ $\g_A$ $\a_A$ $\b_A$ 1.500 49.8/37 8 0.310(14) 3.45(40) 0.62(24) -1.0(1.0) 3.60(53) 1.85(38) 1.38(20) 4.5(1.2) 2.46(59) 1.525 48.6/35 6 0.309(15) 3.53(42) 0.59(25) -1.2(1.2) 3.71(60) 1.99(40) 1.31(20) 4.4(1.2) 2.49(62) 1.550 40.0/33 19 0.297(11) 3.57(48) 0.58(28) -2.33(97) 4.27(50) 1.56(49) 1.44(22) 5.43(89) 1.99(46) 1.575 38.7/31 16 0.300(12) 3.57(45) 0.58(26) -1.9(1.1) 4.08(55) 1.67(51) 1.41(23) 5.22(94) 2.10(48) 1.600 37.2/298 14 0.300(12) 3.56(46) 0.59(27) -2.0(1.2) 4.10(59) 1.41(57) 1.52(25) 5.4(1.0) 2.01(52) 1.625 35.4/27 13 0.300(13) 3.50(48) 0.62(27) -1.9(1.3) 4.07(64) 0.90(72) 1.73(29) 5.8(1.2) 1.82(60) 1.500 49.7/37 8 0.327(18) 3.29(39) 0.70(24) -1.0(1.0) 3.59(53) 1.92(39) 1.35(20) 4.5(1.1) 2.50(60) 1.525 48.5/35 6 0.326(19) 3.37(40) 0.66(24) -1.2(1.2) 3.70(60) 2.06(41) 1.28(21) 4.4(1.2) 2.54(62) 1.550 39.7/33 20 0.311(13) 3.43(47) 0.65(27) -2.38(96) 4.28(49) 1.61(49) 1.43(22) 5.36(87) 2.04(45) 1.575 38.4/31 17 0.315(15) 3.42(44) 0.65(26) -2.0(1.1) 4.10(56) 1.72(52) 1.39(24) 5.15(92) 2.14(48) 1.600 36.9/29 15 0.314(15) 3.41(45) 0.66(26) -2.1(1.2) 4.13(59) 1.46(58) 1.50(25) 5.33(98) 2.06(51) 1.625 35.1/27 14 0.314(16) 3.36(48) 0.68(27) -2.0(1.3) 4.11(64) 0.96(72) 1.71(29) 5.7(1.1) 1.87(58) ------------------------- -------------- --------------- ------------ ---------- ---------- ----------- ---------- -- : *Combined $V$ and $A$ channel fits to $I^{(\hw_0)}_{\rm ex}(s_0)$ from $s_0=s_{\rm min}$ to $s_0=m_\tau^2$. FOPT results are shown above the double line, CIPT below; no $D>0$ OPE terms are included in the fit. $\gamma_{V,A}$ and $\beta_{V,A}$ in units of [GeV]{}$^{-2}$.*[]{data-label="VAw1paper"} \[VandA\] Combined fits to vector and axial channel data -------------------------------------------------------- We now consider fits analogous to those of the preceding subsection, involving simultaneous fitting of the $V$ and $A$ spectral moments as a function of $s_{\rm min}$. 
The fit parameter $\a_s(m_\tau^2)$ is common to the two channels, while the $D>0$ OPE and DV parameters are distinct for each. Fits to $I^{(\hw_0)}_{ex,V}(s_0)$ and $I^{(\hw_0)}_{ex,A}(s_0)$ are shown in Tab. \[VAw1paper\]; we displayed fewer values of $s_{\rm min}$ for the sake of brevity. ![image](CIFO_w0VAfit_w0_V.pdf){width="7cm"} ![image](CIFO_w0VAfit_spectrum_V.pdf){width="7cm"} ![image](CIFO_w0VAfit_w0_A.pdf){width="7cm"} ![image](CIFO_w0VAfit_spectrum_A.pdf){width="7cm"} Fits with $s_{\rm min}=1.55$ and $1.575$ GeV$^2$ have the highest $p$-values, as before. Averaging the parameter values for these fits, we find $$\begin{aligned} \label{ashw0VA} \a_s(m_\tau^2)&=&0.299(12)\ ,\qquad\mbox{(FOPT)}\ ,\\ &=&0.313(15)\ ,\qquad\mbox{(CIPT)}\ ,\nonumber\end{aligned}$$ slightly higher values than those of Eqs. (\[ashw0\]) and  (\[ashw023\]), but consistent within errors. The errors are $\chi^2$ errors, since all correlations were taken into account in the fit; they are slightly larger than those found in the $V$-channel fits. $s_{\rm min}$ (GeV$^2$) $\cq^2$/dof $\alpha_s$ $\delta_{V,A}$ $\gamma_{V,A}$ $\alpha_{V,A}$ $\beta_{V,A}$ $10^2C_{6V,A}$ $10^2C_{8V,A}$ ------------------------- -------------- ------------ ---------------- ---------------- ---------------- --------------- ---------------- ---------------- 1.475 182/131=1.39 0.297(7) 2.90(42) 0.95(26) -1.61(65) 3.91(35) -0.78(13) 1.31(23) 2.26(35) 1.13(18) 4.92(58) 2.25(30) -0.08(35) 1.12(96) 1.500 160/125=1.28 0.297(8) 2.92(43) 0.94(26) -1.62(73) 3.91(39) -0.78(14) 1.31(25) 1.90(44) 1.29(21) 5.26(69) 2.08(36) -0.26(44) 1.8(1.4) 1.525 149/119=1.25 0.294(8) 3.08(48) 0.86(28) -2.16(75) 4.18(40) -0.85(13) 1.46(23) 1.86(48) 1.30(22) 5.38(72) 2.02(37) -0.38(49) 2.1(1.6) 1.550 126/113=1.11 0.292(9) 3.19(51) 0.80(30) -2.65(79) 4.42(41) -0.90(13) 1.57(22) 1.53(56) 1.42(24) 5.73(84) 1.84(43) -0.63(61) 3.0(2.2) 1.575 124/107=1.16 0.293(9) 3.18(51) 0.81(29) -2.47(84) 4.33(43) -0.88(14) 1.52(24) 1.57(61) 1.41(26) 5.67(86) 1.87(44) -0.57(61) 2.8(2.2) 1.600 116/101=1.15 0.293(9) 3.20(52) 0.80(30) -2.51(89) 4.35(46) -0.89(14) 1.53(25) 1.14(74) 1.59(29) 6.0(1.0) 1.72(53) -0.73(72) 3.6(2.7) 1.625 112/95=1.18 0.294(10) 3.20(55) 0.79(31) -2.43(95) 4.31(48) -0.87(15) 1.50(28) 0.85(92) 1.71(34) 6.2(1.2) 1.61(63) -0.80(80) 4.0(3.2) 1.475 159/131=1.21 0.338(13) 3.45(32) 0.61(20) -0.63(67) 3.42(35) -0.58(16) 0.83(31) 2.23(33) 1.25(21) 3.45(81) 3.02(42) 0.59(25) -0.64(58) 1.500 146/125=1.17 0.328(15) 3.26(39) 0.72(24) -0.92(79) 3.56(41) -0.67(18) 1.00(35) 1.96(41) 1.34(22) 4.41(89) 2.53(46) 0.25(40) 0.3(1.0) 1.525 136/119=1.14 0.320(13) 3.35(44) 0.69(26) -1.59(79) 3.90(41) -0.80(15) 1.26(29) 1.93(46) 1.32(23) 4.76(83) 2.35(43) 0.05(43) 0.78(12) 1.550 118/113=1.04 0.312(13) 3.35(49) 0.70(29) -2.28(81) 4.23(42) -0.90(13) 1.48(25) 1.59(55) 1.44(25) 5.37(89) 2.03(46) -0.33(56) 2.0(1.8) 1.575 115/107=1.07 0.315(13) 3.35(48) 0.70(28) -1.98(88) 4.09(45) -0.86(15) 1.39(29) 1.65(59) 1.42(27) 5.23(92) 2.11(47) -0.22(55) 1.6(1.7) 1.600 108/101=1.07 0.314(14) 3.33(49) 0.71(29) -2.04(93) 4.12(47) -0.87(15) 1.41(30) 1.23(70) 1.60(30) 5.6(1.1) 1.95(55) -0.37(64) 2.2(2.2) 1.625 105/95=1.10 0.315(15) 3.28(53) 0.73(30) -1.9(1.0) 4.06(51) -0.85(17) 1.37(34) 0.96(85) 1.71(35) 5.7(1.2) 1.87(63) -0.42(71) 2.4(2.5) For $s_{\rm min}=1.55$ GeV$^2$ we show the quality of the fits in the left panels of Fig. \[CIFOw0VAfit\] and the $V$ and $A$ spectral-function comparisons obtained using parameter values from the fit in the corresponding right-hand panels. 
We note that the fit curves in the axial case are essentially determined by the shoulder of the $a_1$ resonance, in contrast to what happens in the vector case, where the $\r$ peak is well away from the region relevant for the shape of the fit curves. Tab. \[VAwtaupaper\] shows the results of the combined $V$ and $A$ channel fits to the three moments $I^{(\hw_0)}_{\rm ex}(s_0)$, $I^{(\hw_2)}_{\rm ex}(s_0)$ and $I^{(\hw_0)}_{\rm ex}(s_3)$. Judging by the values of $\cq^2$/dof, the best fits are again those with $s_{\rm min}=1.55$ and $1.575$ GeV$^2$, leading to $$\begin{aligned} \label{ashw023VA} \a_s(m_\tau^2)&=&0.293(9)\ ,\qquad\mbox{(FOPT)}\ ,\\ &=&0.313(13)\ ,\qquad\mbox{(CIPT)}\ . \nonumber \end{aligned}$$ These values are in good agreement with those of the other fits reported above. As before, fits to just the pair of moments $I^{(\hw_0)}_{\rm ex}(s_0)$ and $I^{(\hw_2)}_{\rm ex}(s_0)$ do not lead to any surprises. We show the quality of the fits of Tab. \[VAwtaupaper\] for the moments $I^{(\hw_0)}_{\rm ex}(s_0)$ and the comparison of the resulting spectral functions to the experimental ones for both channels in Fig. \[CIFOw023VAfit\]. The fits for the other two moments look very similar to those in Fig. \[CIFOw023fit\] for the $V$ channel, and show a similar quality in the $A$ channel. ![image](CIFO_w023VAfit_w0_V.pdf){width="7cm"} ![image](CIFO_w023VAfit_spectrum_V.pdf){width="7cm"} ![image](CIFO_w023VAfit_w0_A.pdf){width="7cm"} ![image](CIFO_w023VAfit_spectrum_A.pdf){width="7cm"} \[results\] Tests and results ============================= There are a number of consistency checks that can be applied once values for $\a_s(m_\tau^2)$ as well as the $D>0$ OPE and DV parameters have been obtained from a fit. We will present some of these in Sec. \[tests\]. Then, in Sec. \[alphas\], we will present our final number for $\a_s(m_\tau^2)$, following this in Sec. \[nonpert\] by a determination of the non-perturbative contribution to $R_{V+A;ud}$ and a comparison of the $D=6$ OPE coefficients with the results of estimates based on the vacuum saturation approximation (VSA). In Sec. \[OPALresults\] we will compare the present results with those from our fits to the OPAL data. ![image](CIFO_w023VAfit_w3_VplusA.pdf){width="10cm"} \[tests\] Tests --------------- We consider first the comparison of the experimental value of $$\label{Rdef} I^{(\hw_3)}_{ex,V}(s_0)+I^{(\hw_3)}_{ex,A}(s_0) = {\frac{m_\tau^2}{12\pi^2 \vert V_{ud}\vert^2 S_{EW}}} \, R_{V+A;ud}(s_0)$$ with the function obtained from the fit. In Fig. \[CIFOw023VAfitw3VplusA\] we show this comparison, using the parameter values for $s_{\rm min}=1.55$ GeV$^2$ from Tab. \[VAwtaupaper\]. The fitted curves are in good agreement everywhere above $s_0\approx 1.3$ GeV$^2$ ($s_0\approx 1.5$ GeV$^2$) for the FOPT (CIPT) fits.[^10] We include this test because (in rescaled form) it was originally advocated as an important confirmation of the analysis of Ref. [@ALEPH]. One can see that our fits satisfy this test at least as well (see  Fig. 73 of Ref. [@ALEPH]). In other words, this test is not able to discriminate between the results of our analysis and those Refs. [@ALEPH13; @ALEPH; @ALEPH08]. For more discussion on the comparison between our analysis and that of Refs. [@ALEPH13; @ALEPH; @ALEPH08] we refer to Sec. \[ALEPH\]. ![image](WSR1.pdf){width="7cm"} ![image](WSR1noDV.pdf){width="7cm"} As in Ref. 
[@alphas1], we may also consider the first and second Weinberg sum rules (WSRs) [@SW], as well as the DGMLY sum rule for the pion electro-magnetic mass splitting [@EMpion]. These sum rules can be written as $$\begin{aligned} \label{WSRdefs} \int_0^\infty ds\,\left(\r^{(1+0)}_V(s)-\r^{(1+0)}_A(s)\right) &=&\int_0^\infty ds\,\left(\r^{(1)}_V(s)-\r^{(1)}_A(s)\right)-2f_\p^2=0\ ,\\ \int_0^\infty ds\,s\left(\r^{(1+0)}_V(s)-\r^{(1+0)}_A(s)\right) &=&\int_0^\infty ds\,s\left(\r^{(1)}_V(s)-\r^{(1)}_A(s)\right)-2m_\p^2 f_\p^2=0\ ,\nonumber\\ \int_0^\infty ds\,s\log{(s/\m^2)}\left(\r^{(1)}_V(s)-\r^{(1)}_A(s)\right) &=&\frac{8\p f_0^2}{3\a} \left(m_{\p^\pm}^2-m_{\p^0}^2\right)\ ,\nonumber\end{aligned}$$ where $f_0$ is the pion decay constant in the chiral limit, and $\a$ is the fine-structure constant. For the second WSR we assume that terms of order $m_im_j$, $i,j=u,d$ can be neglected. Without this assumption, the integral is linearly divergent, forcing us to cut it off. If we cut off the integral at $s_0$, there would be an extra contribution proportional to $m_im_j\a_s^2s_0$ in this sum rule. This contribution is still very small at $s_0=m_\tau^2$ (of order a few percent of the contribution $2m_\p^2 f_\p^2$), allowing us to assume that we are effectively in the chiral limit with regard to the second WSR. Even the term $2m_\p^2 f_\p^2$, while dominating the term proportional to $m_im_j\a_s^2s_0$, vanishes in the chiral limit, and itself turns out to be numerically negligible within errors. Also the DGMLY sum rule holds only in the chiral limit, and in that limit the integral on the left-hand side is independent of $\m$ because of the second WSR. In Fig. \[WSR\] we show the first integral in Eq. (\[WSRdefs\]) as a function of the “switch” point $s_{\rm sw}$ below which we use the experimental data, and above which we use the DV  (\[ansatz\]) with parameters from the CIPT fit with $s_{\rm min}=1.55$ GeV$^2$ of Tab. \[VAwtaupaper\] in order to evaluate the integral. Using parameter values from Tab. \[VAw1paper\] or FOPT fits leads to almost identical figures.[^11] The figure on the left includes the contribution from Eq. (\[ansatz\]), while the figure on the right omits such contributions. The latter is equivalent to the upper right panel of Fig. 8 in the first paper in Ref. [@ALEPH]. Clearly, the first WSR is very well satisfied by our fits, but only if duality violations are taken into account. We do not show similar figures for the second WSR and the DGMLY sum rule, because our conclusions for these sum rules are very similar. Just as in Ref. [@alphas1; @alphas2], these sum rules are satisfied within errors, but only if duality violations are taken into account. In particular, within errors, one may assume that our representation of the spectral functions is in the chiral limit, for the purpose of these three sum rules. \[alphas\] The strong coupling ------------------------------ The presence of duality violations forces us to make several assumptions in order to extract a value for $\a_s(m_\tau^2)$. These assumptions have been checked against the data,  Figs. \[CIFOw0fit\] and \[CIFOw023fit\]–\[WSR\]. First, we need to assume that Eq. (\[ansatz\]) provides a satisfactory description of duality violations for asymptotically large $s$. Second, we need to assume that $s\,\gtap\, 1.5$ GeV$^2$ is already in the asymptotic region. And, finally, if we wish to also use the axial data, we need to assume that this is true both in the $V$ and $A$ channels. 
As already discussed above, this would amount to the assumption that the upper shoulder of the $a_1$ resonance is already more or less in the asymptotic region. Using only the $V$-channel fits, we avoid having to make this latter assumption, and doing so we find, from the results quoted in Eq. (\[ashw023\]), $$\begin{aligned} \label{alphasfinal} \a_s(m_\tau^2)&=&0.296(10)(1)(2)=0.296\pm 0.010\ ,\qquad(\overline{\mbox{MS}},\ n_f=3,\ \mbox{FOPT})\ ,\\ &=&0.310(14)(1)(1)=0.310\pm 0.014\ ,\qquad(\overline{\mbox{MS}},\ n_f=3,\ \mbox{CIPT})\ ,\nonumber\end{aligned}$$ where the first error is the statistical fit error already given in Eq. (\[ashw023\]), while the second represents half the difference between the $s_{\rm min}=1.55$ and $1.575$ GeV$^2$ results of Tab. \[VVwtaupaper\] from which the average is derived. The third error represents the change induced by varying the estimated 6-loop $D=0$ coefficient $c_{51}=283$ [@BJ] by the assumed $100\%$ uncertainty about its central value, as in Ref. [@alphas1; @alphas2]. The error from this latter uncertainty would be about $\pm 0.004$ for both FOPT and CIPT if it were estimated from fits using only the moment with weight $\hw_0$; this would raise both final errors by $0.001$. We observe that the final errors we find are of the same order of magnitude as the difference between the FOPT and CIPT values of $\a_s(m_\tau^2)$. We also note that in all tables the value of $\a_s(m_\tau^2)$ is very stable as a function of $s_{\rm min}$ for all values of $s_{\rm min}$ included in these tables, except for possibly the lowest $s_{\rm min}$ shown. Equation (\[alphasfinal\]) constitutes our final result for $\a_s(m_\tau^2)$ from the revised ALEPH data. Converting these results into values for $\a_s$ at the $Z$ mass using the standard self-consistent combination of 4-loop running with 3-loop matching at the flavor thresholds [@cks97], we find $$\begin{aligned} \label{alphasZ} \a_s(m_Z^2)&=&0.1155\pm 0.0014\ ,\qquad(\overline{\mbox{MS}}, \ n_f=5,\ \mbox{FOPT})\ ,\\ &=&0.1174\pm 0.0019\ ,\qquad(\overline{\mbox{MS}},\ n_f=5, \ \mbox{CIPT})\ .\nonumber\end{aligned}$$ \[nonpert\] Non-perturbative quantities --------------------------------------- As in Ref. [@alphas2], we would like to estimate the relative deviation of the aggregate dimension-6 condensates $C_{6,V/A}$ from the values given by the VSA. We express these condensates in terms of the VSA-violating parameters $\r_1$ and $\r_5$ by [@BNP] $$\label{C6VA} C_{6,V/A} \,=\, \frac{32}{81}\,\p \a_s(m_\tau^2)\, \langle\bar qq\rangle^2 \!\left(\!\!\begin{array}{c} 2\,\rho_1 - 9\,\rho_5 \\ 11\,\rho_1 \end{array}\!\!\right)\! \ ,$$ with VSA results for $C_{6,V/A}$ corresponding to $\r_1=\r_5=1$. Using $\langle\bar qq(m_\tau^2)\rangle=(-272\ \mbox{MeV})^3$ [@jam02], and the averages of the results for $C_{6,V}$ and $C_{6,A}$ from the $s_{\rm min}=1.55$ and $1.575$ GeV$^2$ fits of Tab. \[VAwtaupaper\], we find[^12] $$\begin{aligned} \label{rho1rho5} \rho_1 &\!\!=\!\!& \,-4 \pm 4 \,, \quad \rho_5 \,=\, 5.9 \pm 0.9 \qquad \mbox{(FOPT)} \ , \\ \rho_1 &\!\!=\!\!& \,-2 \pm 3 \,, \quad \rho_5 \,=\, 5.9 \pm 0.8 \qquad \mbox{(CIPT)} \ .\nonumber\end{aligned}$$ While no conclusion can be drawn about the accuracy of the VSA for $\rho_1$, it is clear that the VSA is a poor approximation for $\rho_5$. The value for $\r_5$ is consistent with the one we found from OPAL data in Ref. [@alphas2]. 
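The inversion of Eq. (\[C6VA\]) behind Eq. (\[rho1rho5\]) is elementary; as a check, the sketch below (Python) uses the averages of the $s_{\rm min}=1.55$ and $1.575$ GeV$^2$ FOPT entries of Tab. \[VAwtaupaper\] together with the quark condensate quoted above (errors are not propagated here).

```python
import numpy as np

alpha_s = 0.2925                    # average FOPT alpha_s(m_tau^2) from Tab. [VAwtaupaper]
qq      = (-0.272) ** 3             # <qbar q(m_tau^2)> in GeV^3
C6_V    = -0.0089                   # GeV^6, average of -0.90e-2 and -0.88e-2
C6_A    = -0.0060                   # GeV^6, average of -0.63e-2 and -0.57e-2

K = (32.0 / 81.0) * np.pi * alpha_s * qq**2   # common prefactor in Eq. (C6VA)

rho1 = C6_A / (11.0 * K)                      # from C6_A = 11 rho1 K
rho5 = (2.0 * rho1 - C6_V / K) / 9.0          # from C6_V = K (2 rho1 - 9 rho5)
print(rho1, rho5)                             # roughly -4 and 5.9, cf. Eq. (rho1rho5)
```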
It is conventional to characterize the size of non-perturbative contributions to the ratio $R_{V+A;ud}=R_{V;ud}+R_{A;ud}$ of the total non-strange hadronic decay width to the electron decay width, where $R_{V/A;ud}$ have been defined in Eq. (\[taukinspectral\]), by the parametrization $$\label{Rtau} R_{V+A;ud} =N_c S_{\rm EW}|V_{ud}|^2\left(1+\d_P+\d_{NP}\right)\ ,$$ where $\d_P$ stands for the perturbative, and $\d_{NP}$ for the non-perturbative contributions beyond the parton model. If one knows $\d_{NP}$, the quantity $\d_P$, and hence $\a_s(m_\tau^2)$ can be determined from the experimental value of $R_{V+A;ud}$. In such an approach, the error on $\a_s(m_\tau^2)$ is thus directly correlated with that on $\d_{NP}$. As in Ref. [@alphas2], our fits give access to the values of $\d_{NP}$, as well as those of $\d^{(6)}$, $\d^{(8)}$, and $\d^{\rm DV}$, the contributions to $\d_{NP}$ from the $D=6$ and $D=8$ terms in the OPE as well as the DV term. From the $s_{\rm min}=1.55$ GeV$^2$ fits of Tab. \[VAwtaupaper\], we find $$\begin{aligned} \label{deltas} \d^{(6)}&=&0.058\pm 0.026 \ ,\qquad\quad\ \d^{(8)}=-0.036\pm 0.017 \ ,\\ \d_{\rm DV}&=&-0.0016\pm 0.0011 \ \qquad\mbox{(FOPT)}\ ,\nonumber\\ \null\nonumber\\ \d^{(6)}&=&0.040\pm 0.024 \ ,\qquad\quad\ \d^{(8)}=-0.024\pm 0.015 \ ,\nonumber\\ \d_{\rm DV}&=&-0.0009\pm 0.0009 \ \qquad\mbox{(CIPT)}\ .\nonumber\end{aligned}$$ The FOPT and CIPT estimates for these quantities are consistent with each other. There is a strong correlation between $\d^{(6)}$ and $\d^{(8)}$, about $-0.97$ in the FOPT case. The values for $\d^{NP}$ derived from these results are $$\begin{aligned} \label{dNP} \d^{NP}&=&0.020\pm 0.009\qquad\mbox{(FOPT)}\ ,\\ \d^{NP}&=&0.016\pm 0.010\qquad\mbox{(CIPT)}\ ,\nonumber\end{aligned}$$ which differ by $1.6\ \s$, respectively, $1.2\ \s$ from the values found using the the OPAL data in Ref. [@alphas2]. With the value $R_{V+A;ud}=3.475(11)$ quoted in Ref. [@ALEPH13], one finds $\d_P\approx 0.18$, an order of magnitude larger than $\d_{NP}$, indicating that $R_{V+A;ud}$ is a dominantly perturbative quantity. However, as in Ref. [@alphas2], we find an error on $\d_{NP}$ much larger than that reported by standard analyses in the literature, almost an order of magnitude so, for example, when compared to Ref. [@ALEPH13]. The result is that the error on $\a_s(m_\tau^2)$ is underestimated in the standard analysis; for further discussion, we again refer to Sec. \[ALEPH\] below. \[OPALresults\]Comparison with the fits of Ref. [@alphas2] to OPAL data ----------------------------------------------------------------------- A particularly interesting check is to look for consistency of the results from our fits to the ALEPH data with those we obtained by fitting the OPAL data in Ref. [@alphas2]. For the strong coupling, our results from OPAL data were $$\begin{aligned} \label{alphasOPAL} \a_s(m_\tau^2)&=&0.325\pm 0.018\ ,\qquad(\overline{\mbox{MS}},\ n_f=3, \ \mbox{FOPT,\ OPAL,\ Ref.~\cite{alphas2}})\ ,\\ &=&0.347\pm 0.025\ ,\qquad(\overline{\mbox{MS}},\ n_f=3, \ \mbox{CIPT,\ OPAL,\ Ref.~\cite{alphas2}})\ .\nonumber\end{aligned}$$ The values (\[alphasfinal\]) we find from the ALEPH data are $1.4$, respectively, $1.3$ $\s$ lower than the OPAL values, assuming that the errors on the ALEPH and OPAL values are independent. We also note that the fits in Ref. [@alphas2] were not entirely unambiguous; a choice about the preferred range for $\d_V$ had to be made. 
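The quoted $1.4\,\sigma$ and $1.3\,\sigma$ differences follow from combining the ALEPH- and OPAL-based errors in quadrature, assuming independence; a quick check (Python):

```python
import math

def tension(a, da, b, db):
    """Difference in units of the combined error, treating the two errors as independent."""
    return abs(a - b) / math.hypot(da, db)

print(tension(0.296, 0.010, 0.325, 0.018))   # FOPT: ALEPH vs OPAL, ~1.4 sigma
print(tension(0.310, 0.014, 0.347, 0.025))   # CIPT: ~1.3 sigma
```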
The fact that the difference between our central ALEPH- and OPAL-based values, as well as that between our central CIPT- and FOPT-based results, is, in each case, comparable in size to the error obtained in any of these analyses supports the notion that any improvement in the precision with which $\a_s(m_\tau^2)$ can be determined from hadronic $\tau$ decays will require significant improvements to the data. Of course, this assumes that the fit  employed is valid in the region of $s_0$ larger than about $1.5$ GeV$^2$. We will return to this point in Sec. \[ALEPH\] below, as well as in the Conclusion. The coupling $\a_s(m_\tau^2)$ is, of course, not the only fit parameter. One may for instance compare the values of the OPE and DV parameters between Tab. \[VVwtaupaper\] above and Tab. 4 of Ref. [@alphas2] for $s_{\rm min}\approx 1.5$ GeV$^2$, and conclude that they agree between the ALEPH and OPAL fits within (sometimes fairly large) errors. However, comparing Tab. \[VAwtaupaper\] above with Tab. 5 of Ref. [@alphas2], one observes that the OPE and DV parameters for the axial channel agree less well between the ALEPH and OPAL fits. This may be an indication that it is safer to restrict our fits to the vector channel. Results for $\a_s(m_\tau^2)$ are, nevertheless, found to be consistent between pure-$V$ and combined $V$ and $A$ fits, both in this article and in Ref. [@alphas2]. \[final\] Final results for the strong coupling from ALEPH and OPAL data ------------------------------------------------------------------------ To conclude this section, we present our best values for the strong coupling at the $\tau$ mass extracted from the ALEPH and OPAL data for hadronic $\tau$ decays, and based on the assumptions that underlie our analysis. The FOPT and CIPT averages, weighted according to the errors in Eqs. (\[alphasfinal\]) and (\[alphasOPAL\]), are $$\begin{aligned} \label{alphasALEPHOPAL} \a_s(m_\tau^2)&=&0.303\pm 0.009\ ,\qquad(\overline{\mbox{MS}},\ n_f=3, \ \mbox{FOPT,\ ALEPH\ \&\ OPAL})\ ,\\ &=&0.319\pm 0.012\ ,\qquad(\overline{\mbox{MS}},\ n_f=3, \ \mbox{CIPT,\ ALEPH\ \&\ OPAL})\ .\nonumber\end{aligned}$$ These convert to the values $$\begin{aligned} \label{alphasALEPHOPALZ} \a_s(m_Z^2)&=&0.1165\pm 0.0012\ ,\qquad(\overline{\mbox{MS}},\ n_f=5, \ \mbox{FOPT,\ ALEPH\ \&\ OPAL})\ ,\\ &=&0.1185\pm 0.0015\ ,\qquad(\overline{\mbox{MS}},\ n_f=5, \ \mbox{CIPT,\ ALEPH\ \&\ OPAL})\ .\nonumber\end{aligned}$$ \[ALEPH\] The analysis of Ref. [@ALEPH13] ========================================= We now turn to a discussion of what we have referred to as the standard analysis, which was used in Refs. [@ALEPH13; @ALEPH; @ALEPH08; @OPAL], and is based on Ref. [@DP1992]. We begin with a brief overview of what is done in this approach. One considers spectral moments with the weights $$\begin{aligned} \label{stweights} w_{k\ell}(x)&=&(1-x)^2(1+2x)(1-x)^k x^\ell\ ,\\ x&=&s/s_0\ ,\nonumber\end{aligned}$$ choosing $(k,\ell)\in\{(0,0),\,(1,0),\,(1,1),\,(1,2),\,(1,3)\}$, and evaluating these moments at $s_0=m_\tau^2$ only. Ignoring logarithms,[^13] terms in the OPE contribute to these weights up to $D=16$. The five $s_0=m_\tau^2$ moment values are, of course, insufficient to determine the eight OPE parameters $\alpha_s(m_\tau^2)$, $\langle {\frac{\a_s}{\p}} GG\rangle$, $C_6$, $C_8$, $C_{10}$, $C_{12}$, $C_{14}$ and $C_{16}$, so some truncation is necessary. 
The standard analysis approach to this problem is to assume the OPE coefficients $C_{D=2k}$ for $D>8$ are small enough that they may all be safely neglected in all of the FESRs under consideration, despite numerical enhancements of their contributions via larger coefficients in some of the higher degree weights. Duality violations are, similarly, assumed to be small enough that $\D(s)$ in Eq. (\[DVdef\]) can be ignored as well, at least for $s_0$ close to $m_\tau^2$. With these assumptions, the remaining OPE parameters $\a_s(m_\tau^2)$, $\langle {\frac{\a_s}{\p}} GG\rangle$, $C_6$ and $C_8$ are fitted using the $s_0=m_\tau^2$ values of the five $w_{k\ell}$ spectral moments noted above, for each of the channels $V$, $A$, and $V+A$. The central values and errors for $\a_s(m_\tau^2)$ are taken from the fits (FOPT and CIPT) to the $V+A$ channel, based on the VSA-motivated expectation of significant $D=6$ cancellation and the hope of similar strong DV cancellations in the $V+A$ sum. However, as we have seen in Eq. (\[rho1rho5\]), VSA is a rather poor approximation. Furthermore, the fact that the spectral function for the $V+A$ combination is flatter in the region between $2$ and $3$ GeV$^2$ than is the case for the $V$ or $A$ channels separately may mislead one into believing that DVs are already negligible at these scales for the $V+A$ combination. In actual fact, however, though somewhat reduced in the $V+A$ sum, DV oscillations are still evident in the ALEPH $V+A$ distribution. In addition, since we have a good representation of the individual $V$ and $A$ channels, we also have a good representation of their sum. The fact that our fits yield results for $\gamma_A$ significantly larger than those for $\gamma_V$ implies that the level of reduction of DV contributions in going from the separate $V$ and $A$ channels to the $V+A$ sum is accidental in the window between $2$ and $3$ GeV$^2$, and does not persist to higher $s$, where the stronger exponential damping in the $A$ channel would drive the result for the $V+A$ sum towards that for the $V$ channel alone. These assumptions should be compared with those that have to be made in order to carry out the analysis presented in this article (as well as in the OPAL-based analyses of Refs. [@alphas1; @alphas2]). DVs are unambiguously present in the spectral functions, as can be seen, for example, in the relevant panels of Figs. \[CIFOw0fit\], \[CIFOw023fit\], \[CIFOw0VAfit\] and \[CIFOw023VAfit\]. In the standard analysis, the hope is that the double or triple pinching of the weights in Eq. (\[stweights\]) is sufficient to allow DVs to be ignored altogether, and indeed, for example Fig. \[CIFOw023fit\], shows that pinching significantly reduces the role of DV contributions, especially near $s_0=m_\tau^2$. However, if, as in the standard analysis, one restricts one’s attention to $s_0=m_\tau^2$, and wishes to employ only weights which are at least doubly pinched, the number of OPE parameters to be fit will necessarily exceed the number of weights employed, making additional assumptions, such as the truncation in dimension of the OPE described above, unavoidable.[^14] With the standard-analysis choice of the set of weights of Eq. (\[stweights\]), one finds that the OPE must be truncated at dimension $D=8$ in order to leave at least one residual degree of freedom in the fits. In our analysis, in contrast, we choose not to ignore DVs [*a priori*]{}. This requires us to model their contribution to the spectral functions (as we did through Eq. 
(\[ansatz\])), and to use not just the single value $s_0=m_\tau^2$, but rather a range of $s_0$ extending down from $m_\tau^2$. The one assumption we [*do*]{} have to make is that the ansatz (\[ansatz\]) provides a sufficiently accurate description of DVs for values of $s_0$ between approximately $1.5$ GeV$^2$ and $m_\tau^2$. Clearly, whatever choice is made, it needs to be tested. For our analysis framework, we have presented detailed tests already above. In this section we consider primarily the standard analysis, most recently used in Ref. [@ALEPH13]. Our conclusion, from what follows below, is that the assumptions made in this framework do not hold up to quantitative scrutiny, and hence that the standard analysis approach should no longer be employed in future analyses.[^15]

The results presented in Tab. 4 of Ref. [@ALEPH13] already indicate that there are problems with the standard analysis. Let us consider the values obtained for the gluon condensate, $\langle{\frac{\a_s}{\p}} GG\rangle$, in the different channels, together with the $\chi^2$ value for each fit (recall that for each of these fits there is only one degree of freedom): $$\begin{aligned} \label{gluon} \langle\frac{\a_s}{\p}GG\rangle&=&(-0.5\pm 0.3)\times 10^{-2}~\mbox{GeV}^4\ ,\qquad \chi^2=0.43\qquad V\ ,\\ &&(-3.4\pm 0.4)\times 10^{-2}~\mbox{GeV}^4\ ,\qquad \chi^2=3.4~\qquad A\ ,\nonumber\\ &&(-2.0\pm 0.3)\times 10^{-2}~\mbox{GeV}^4\ ,\qquad \chi^2=1.1~\qquad V+A\ .\nonumber\end{aligned}$$ The $\chi^2$ values correspond to $p$-values of 51%, 7%, and 29%, respectively, indicating that all fits are acceptable. For these fits to be taken as meaningful, however, their results should satisfy known physical constraints. One such constraint is that there is only one effective gluon condensate, whose values should therefore come out the same in all of the $V$, $A$ and $V+A$ channels. This is rather far from the case for the results quoted in Eq. (\[gluon\]), where, for example, the $V$ and $V+A$ channel fit values differ very significantly. It is, moreover, problematic to accept the $V+A$ channel value and ignore the $V$ channel one when the $p$-value of the $V$-channel fit is, in fact, larger than that of the $V+A$ channel. There can be several reasons for the inconsistencies in the results of Ref. [@ALEPH13]. One possibility is that some of the weights (\[stweights\]) have theoretical problems already in perturbation theory, as argued in Ref. [@BBJ12]. Another possibility is that the assumptions underlying the standard analysis do not hold. Whatever the reason, the discrepant gluon condensate values point to a serious problem with the standard analysis framework.[^16]

We now turn to quantitative tests of the OPE fit results reported in Ref. [@ALEPH13]. We focus on the $V+A$ channel, where DVs and $D>4$ OPE contributions were expected to play a reduced role, and on the CIPT $D=0$ treatment, since this is the only case for which the OPE fit parameter values are quoted in Ref. [@ALEPH13]. The tests consist of comparing the weighted OPE and spectral integrals for the weights $w_{k\ell}$ employed in the analysis of Ref. [@ALEPH13], not just at $s_0=m_\tau^2$, but over an interval of $s_0$ extending below $m_\tau^2$. If the assumptions made about $D>8$ OPE and DV contributions being negligible are valid at $s_0=m_\tau^2$ they should also be valid in some interval below this point. A good match between the weighted spectral integrals and the corresponding OPE integrals, evaluated using the results for the OPE parameters quoted in Ref.
[@ALEPH13], should thus be found over an interval of $s_0$. If, on the other hand, these assumptions are not valid, then the fit parameter values will contain contaminations from DV contributions and/or contributions with higher $D$, both of which scale differently with $s_0$ than do the $D=0$, $4$, $6$ and $8$ contributions appearing in the truncated OPE form. Such contamination will show up as a disagreement between the $s_0$ dependence of the fitted OPE representations and the experimental spectral integrals. It is worth expanding somewhat on this latter point since the agreement of the OPE and spectral integrals at $s_0=m_\tau^2$ for the weights $w_{k\ell}$ employed in the standard analysis is sometimes mistakenly interpreted as suggesting the validity of the assumptions underlying the standard analysis at $s_0=m_\tau^2$. However, while the agreement is certainly a necessary condition for the validity of these assumptions, it is not in general, a sufficient one. This caution is particularly relevant since four parameters are being fit using only five data points, making it relatively easy for the effects of neglected, but in fact non-negligible, higher-$D$ and/or DV contributions to be absorbed, [*at a fixed*]{} $s_0$, into the values of the four fitted lower-$D$ parameters. That this is a realistic possibility is demonstrated by the alternate set of OPE fit parameters obtained in the analysis of Ref. [@MY08], which neglected DV contributions, but not OPE contributions with $D>8$. The results of this fit, including non-zero $C_D$ with $D>8$ and an $\alpha_s(m_\tau^2)$ significantly different from that obtained via the standard analysis of the same data [@ALEPH08], produced equally good agreement between the $s_0=m_\tau^2$ OPE and spectral integral results for all the $w_{k\ell}$ employed in the standard analysis fit of Ref. [@ALEPH08], conclusively demonstrating that this agreement does not establish the validity of the standard analysis assumptions. So long as one works at fixed $s_0=m_\tau^2$, there is no way to determine whether the results of the standard analysis are, in fact, contaminated by neglected higher-$D$ OPE and/or DV effects or not. One may, however, take advantage of the fact that different contributions to the theory sides of the various FESR scale differently with $s_0$, with integrated DV contributions oscillatory in $s_0$ and integrated $D=2k$ OPE contributions scaling as $1/s_0^k$. If the $D=0$, $4$, $6$ and $8$ parameters obtained from the fixed-$s_0=m_\tau^2$ standard analysis fit have, in fact, absorbed the effects of $D>8$ and/or DV contributions, the fact that the nominal lower-$D$ $s_0$-scaling does not properly match that of the higher-$D$ and/or DV contaminations will be exposed when one considers the same FESR, with the same standard analysis OPE fit parameter values, at lower $s_0$. A breakdown of the standard analysis assumptions will thus be demonstrated by a failure of the agreement of the OPE and spectral integrals observed at $s_0=m_\tau^2$ to persist over a range of $s_0$ below $m_\tau^2$. Such $s_0$-dependence tests represent important self-consistency checks for all FESR analyses. Before carrying out these self-consistency tests on the results of the standard analysis, it is useful to make explicit the relative roles of the various different $D$ contributions entering the $s_0=m_\tau^2$ results for the $w_{k\ell}$-weighted OPE integrals employed in the $V+A$ CIPT fit of Ref. [@ALEPH13]. 
For the $D=0$ contributions, it is important to remember that the leading one-loop contribution is independent of both $s_0$ and $\a_s$. It is thus the difference of the full $D=0$ contribution and this leading term which determines the $\alpha_s$ dependence of the $D=0$ contributions, and which is relevant to the determination of $\alpha_s(m_\tau^2)$. Tab. \[aleph13ope\] shows the $s_0=m_\tau^2$ results for (i) the $\alpha_s$-dependent $D=0$ contributions and (ii) the $D=4$, $6$, and $8$ contributions corresponding to the CIPT fit results of Tab. 4 of Ref. [@ALEPH13], for each of the $w_{k\ell}$ employed in that analysis. The sum of the $D=6$ and $8$ contributions, which is $\sim 1-2\%$ of the $\a_s$-dependent $D=0$ contribution for $w_{00}$ and $w_{10}$, is, in contrast, $\sim 10-25\%$ of the corresponding $D=0$ contributions for the $w_{11}$, $w_{12}$ and $w_{13}$ cases. Furthermore, for $w_{11}$, the $D=4$ contribution is essentially the same size as the $\alpha_s$-dependent $D=0$ one.

  $(k,\ell )$   $\alpha_s$-dependent $D=0$   $D=4$       $D=6$       $D=8$
  ------------- ---------------------------- ----------- ----------- -----------
  $(0,0)$       0.005173                     -0.000008   -0.000117   0.000033
  $(1,0)$       0.004399                     -0.000361   -0.000117   0.000082
  $(1,1)$       0.000365                     0.000350    -0.000039   -0.000049
  $(1,2)$       0.000208                     0.000002    0.000039    -0.000016
  $(1,3)$       0.000081                     0.000000    0.000000    0.000016

  : The $s_0=m_\tau^2$ values of the $\alpha_s$-dependent $D=0$ contributions and of the $D=4$, $6$ and $8$ OPE contributions to the $w_{k\ell}$-weighted OPE integrals, for the $V+A$ CIPT fit parameters of Tab. 4 of Ref. [@ALEPH13].[]{data-label="aleph13ope"}

It is clear from these observations that it is the $w_{11}$, $w_{12}$ and $w_{13}$ moments which dominate the determinations of the $D=4,\, 6$ and $8$ OPE parameters in the analysis of Ref. [@ALEPH13]. Bearing in mind the very slow variation with $s_0$ of the $D=0$ contributions to the dimensionless OPE integrals and the $1/s_0^k$ scaling of the $D=2k$ contributions, it is, moreover, clear that the relative roles of the non-perturbative contributions will grow significantly relative to the $\alpha_s$-dependent $D=0$ ones as $s_0$ is decreased. Studying the $s_0$ dependence of the match of the OPE to the corresponding spectral integrals for the $w_{11}$, $w_{12}$ and $w_{13}$ spectral weights thus provides a particularly powerful test of the reliability of the values for the $D=4,\, 6$ and $8$ parameters obtained in the fits of Ref. [@ALEPH13]. The results of these tests are shown in Fig. \[aleph13fitprobs\]. It is clear that the $s_0$-dependence of the experimental spectral integrals and fitted OPE integrals is very different, demonstrating conclusively the unreliability of the $D=4,\, 6$ and $8$ fit parameter values obtained in Ref. [@ALEPH13]. Changes in the values of the $D=6$ and $8$ parameters, which enter the $w_{00}$ FESR, would of course also force a change in the $\alpha_s(m_\tau^2)$ required to produce a match between the $s_0=m_\tau^2$ $w_{00}$-weighted OPE and spectral integrals.

![image](udvpa_wspec11_aleph2013_ope_spec_comp_may13_14.pdf){width="7cm"} ![image](udvpa_wspec12_aleph2013_ope_spec_comp_may13_14.pdf){width="7cm"} ![image](udvpa_wspec13_aleph2013_ope_spec_comp_may13_14.pdf){width="7cm"}

It is worth expanding somewhat on these observations for the $w_{13}$ case, where the source of the problem with the fit of Ref. [@ALEPH13] becomes particularly obvious. Because of the $x^3$ factor present in $w_{13}(x)$, the $D=2$, $4$ and $6$ contributions to the OPE part are completely negligible numerically, leaving the standard analysis version of the $w_{13}$-weighted OPE integral entirely determined by the parameters $\alpha_s(m_\tau^2)$ and $C_{8,V+A}$. With the results and errors for these quantities from Tabs. 4 and 5 of Ref.
[@ALEPH13], one finds that, as $s_0$ is decreased from $m_\tau^2$ to $\sim 2$ GeV$^2$, the $\alpha_s$-dependent $D=0$ contribution [*decreases*]{} by $0.000001(0)$, while the $D=8$ contribution [*increases*]{} by $0.000086(20)$. This is to be compared to the [*increase*]{} in the corresponding spectral integral, which is $0.000028(8)$. Evidently the disagreement between the $w_{13}$-weighted OPE and spectral integral results seen in Fig. \[aleph13fitprobs\] results from a problem with the fit value for $C_{8,V+A}$. Trying to fix the problem with the $w_{13}$ FESR through a change in $C_{8,V+A}$ alone turns out to exacerbate the problem with the $w_{12}$ FESR. Working backward, one finds that attempting to change $C_{4,V+A}$, $C_{6,V+A}$ and $C_{8,V+A}$ so as to improve the match between the $s_0$ dependences of the OPE and spectral integrals for the $w_{11}$, $w_{12}$ and $w_{13}$ FESRs without any change in $\alpha_s(m_\tau^2)$ produces changes in the $D\ge 4$ contributions to the $w_{10}$ and $w_{00}$ FESRs that can only be compensated for by a decrease in $\alpha_s(m_\tau^2)$. The problem of the discrepancies between the $s_0$-dependences of the OPE and spectral integrals in the $w_{11}$, $w_{12}$ and $w_{13}$ FESR parts of the standard analysis can thus not be resolved simply through shifts in $C_{4,V+A}$, $C_{6,V+A}$ and $C_{8,V+A}$ which leave the target of the analysis, namely the output $\alpha_s(m_\tau^2)$ value, unchanged.

A natural question, given the discussion above, is whether our approach produces a better match between experiment and theory for the higher spectral weights. The answer, as we will see below, is yes. Before embarking on this investigation, however, it is important to emphasize the non-optimal nature of the FESRs with weights $w_{10}$, $w_{11}$, $w_{12}$, and $w_{13}$. First, all of these weights contain a term linear in the variable $x$, a fact which, according to the arguments of Ref. [@BBJ12], should make standard methods of estimating the uncertainty associated with truncating the integrated perturbative series for these weights much less reliable than is the case for the weights employed in our analysis. Second, the values of the $C_D$ with $D>8$ obtained from the fits reported in Ref. [@MY08] were found to produce very strong cancellations amongst higher-$D$ OPE contributions when employed in the higher $(k,\ell )$ $w_{k\ell}$ FESRs, making these FESRs particularly sensitive to any shortcomings in the treatment of higher-$D$ OPE contributions, as well as a poor choice for use in attempting to fit the values of $C_D$ with $D>8$. The strong cancellation amongst higher-$D$ OPE contributions for the higher-$(k,\ell )$ $w_{k\ell}$ moments turns out also to be a feature of the results of our extended analysis below, and hence not attributable to the neglect of DV contributions in Ref. [@MY08]. Because of these strong cancellations, the use of the higher-$(k,\ell )$ $w_{k\ell}$ should be avoided in future analyses, and we consider them below only for the sake of comparison with the results of the analysis of Ref. [@ALEPH13]. In making this comparison, we will focus on the CIPT resummation of perturbation theory, with the CIPT version of the standard analysis being the only one for which quantitative fit results are reported in Ref. [@ALEPH13].
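The size of the quoted $w_{13}$ shifts can be checked directly from the $1/s_0^k$ scaling of the integrated $D=2k$ OPE contributions and the $(1,3)$ entries of Tab. \[aleph13ope\]. A minimal sketch, assuming the tabulated dimensionless $D=8$ contribution scales exactly as $1/s_0^4$ and taking $m_\tau=1.77682$ GeV:

```python
m_tau_sq = 1.77682**2      # GeV^2 (assumed PDG tau mass)
s0_low = 2.0               # GeV^2

d8_at_mtau2 = 0.000016     # D=8 entry for (k,l)=(1,3) in Tab. [aleph13ope]
# integrated D=2k OPE contributions scale as 1/s0^k; D=8 corresponds to k=4
d8_increase = d8_at_mtau2 * ((m_tau_sq / s0_low)**4 - 1.0)

print(f"D=8 increase: {d8_increase:.6f}")
# ~0.000083, consistent (within the rounding of the table entry) with the quoted
# 0.000086(20), and to be compared with the much smaller change of the
# alpha_s-dependent D=0 part and the 0.000028(8) increase of the spectral integral
```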
To evaluate the OPE contributions to the $w_{10}$, $w_{11}$, $w_{12}$ and $w_{13}$ FESRs requires knowledge of five new quantities, $C_{4,V+A}$, $C_{10,V+A}$, $C_{12,V+A}$, $C_{14,V+A}$ and $C_{16,V+A}$, in addition to the OPE and DV parameters already obtained in our analysis. We estimate these using the $w(s)=(s/s_0)^{k-1}$ versions of the FESR Eq. (\[sumrule\]), neglecting, as before, sub-leading contributions at each order $D>2$ in the OPE. This yields, for $D=2k>2$, $$\begin{aligned} \label{OPEnFESR} (-1)^{k+1} C_{2k,V+A} &=&2f_\p^2 m_\p^{2(k-1)}+\int_0^{s_0}ds\,s^{k-1}\,\r^{(1)}_{V+A}(s)\\ &&+\int_{s_0}^\infty ds\,s^{k-1}\,\r_{V+A}^{\rm DV}(s) +\frac{1}{2\p i}\oint_{|z|=s_0}dz\,z^{k-1}\,\P_{V+A}^{PT}(z)\ ,\nonumber\end{aligned}$$ where $\P^{PT}$ is the perturbative contribution to $\P(z)$, corresponding to the $D=0$ term in Eq. (\[OPE\]). The choices $k=2,\, \cdots ,\, 8$ yield $C_4,\, \cdots ,\, C_{16}$, respectively. With $\alpha_s(m_\tau^2)$ and the $V$ and $A$ channel DV parameters from the $s_{\rm min}=1.55$ GeV$^2$ combined $V$ and $A$ CIPT fit of Tab. \[VAwtaupaper\], we find, for the central values, $$\begin{aligned} \label{OPEn} C_{4,V+A}&= &0.00268\ {\rm GeV}^4\ ,\\ C_{6,V+A}&= &-0.0125\ {\rm GeV}^6\ , \nonumber\\ C_{8,V+A}&= &0.0349\ {\rm GeV}^8\ , \nonumber\\ C_{10,V+A}&= &-0.0832\ {\rm GeV}^{10}\ , \nonumber\\ C_{12,V+A}&= &0.161\ {\rm GeV}^{12}\ ,\nonumber\\ C_{14,V+A}&= &-0.191\ {\rm GeV}^{14}\ ,\nonumber\\ C_{16,V+A}&= &-0.233\ {\rm GeV}^{16}\ .\nonumber\end{aligned}$$ For $C_{6,V+A}$ and $C_{8,V+A}$ the agreement with the values in Tab. \[VAwtaupaper\] is excellent. With such values of the $C_D$, $D>8$ contributions are far from negligible compared to the $D=6$ and $8$ ones for the $w_{k\ell}$ spectral weights with degree higher than three; the maximum scale, $m_\tau^2$, accessible in hadronic $\tau$ decays is not, it turns out, high enough to ensure that the OPE series is rapidly converging in dimension. The theory parts $I_{\rm th}^{(w_{k\ell})}(s_0)$ of the $w_{10}$, $w_{11}$, $w_{12}$ and $w_{13}$ FESRs produced by the results of Eq. (\[OPEn\]) and Tab. \[VAwtaupaper\] are compared to the corresponding spectral integrals in Fig. \[w10w11theoryspecintcomp\] as a function of $s_0$. The agreement is obviously excellent, and far superior to that obtained from the standard analysis of Ref. [@ALEPH13]. This excellent agreement, over the whole range of $s_0$ shown, is completely destroyed if one removes the $D>8$ contributions from the theory sides of the $w_{10}$, $w_{11}$, $w_{12}$ and $w_{13}$ FESRs. We emphasize again that the aim here is not a reliable determination of the OPE coefficients $C_{4-16}$, but a proof of existence of a set of values which, combined with our values for $\a_s(m_\tau^2)$ and the DV parameters, give an excellent representation of the $s_0$ dependence of the moments with the weights $w_{10}$, $w_{11}$, $w_{12}$ and $w_{13}$ (in addition, of course, to the weights included in our fits, in particular $w_{00}=\hw_3$). ![image](udvpa_wspec10_our_opepdv_vs_aleph2013_spec_sep6_14.pdf){width="7cm"} ![image](udvpa_wspec11_our_opepdv_vs_aleph2013_spec_sep6_14.pdf){width="7cm"} ![image](udvpa_wspec12_our_opepdv_vs_aleph2013_spec_sep6_14.pdf){width="7cm"} ![image](udvpa_wspec13_our_opepdv_vs_aleph2013_spec_sep6_14.pdf){width="7cm"} The problems demonstrated above with the standard analysis results of Ref. 
[@ALEPH13] could be a consequence of the neglect of non-negligible DVs, the breakdown of the assumption that $D>8$ OPE contributions are negligible for all of the $w_{k\ell}$ employed, or both. In an attempt to clarify the situation, it is useful to consider a fit in which the potentially dangerous assumption about $D>8$ OPE contributions is avoided. As an example, we consider a fit to the doubly pinched $\hat{w}_3=w_{00}$ FESR in the $V+A$ channel ignoring DV contributions. Since the weight is doubly pinched, one expects DV contributions to be significantly suppressed, though the actual amount of suppression is not clear [*a priori*]{}. Since the OPE integrals still depend on three parameters, $\alpha_s(m_\tau^2)$, $C_{6,V+A}$ and $C_{8,V+A}$, it is, of course, necessary to consider the fit over a range of $s_0$. To be specific, we focus on fits employing the FOPT resummation of perturbation theory. This exercise results in apparently perfectly acceptable fits, with $p$-values $10\%$ and higher for $s_{\rm min}\ge 1.95$ GeV$^2$. The fit quality drops dramatically as $s_0$ is lowered beyond this point, with $p$-values already at the $0.2\%$ level for $s_{\rm min}=1.90$ GeV$^2$. The highest $p$-value, $57\%$, occurs for $s_{\rm min}=2.2$ GeV$^2$, and corresponds to $$\begin{aligned} \label{w00noDVsfitvals} \alpha_s(m_\tau^2)&=&0.330\pm 0.006\ ,\\ C_{6,V+A}&=&0.0070\pm 0.0022\ {\rm GeV}^6\ ,\nonumber\\ C_{8,V+A}&=&-0.0088\pm 0.0042\ {\rm GeV}^8\ .\nonumber\end{aligned}$$ ![image](what3_no_DVs_OPE_fit_what3_ope_spec_ints_aug28_14.pdf){width="7cm"} ![image](what3_no_DVs_OPE_fit_what2_ope_spec_ints_aug28_14.pdf){width="7cm"} The quality of the resulting match between the fitted OPE and spectral integrals for $s_{\rm min}=2.2$ GeV$^2$, shown in the left panel of Fig. \[w00noDVscomp\], is excellent. Despite this good quality match, the results of Eq. (\[w00noDVsfitvals\]) are incomplete, in the sense that, in addition to the fit error induced by the covariances of the $V+A$ spectral data, there is an unspecified (and hence unquantified) systematic error associated with the neglect of DV contributions in the fit. Since the DV contribution to the FESR (\[sumrule\]) involves the weighted integral of the DV component of the spectral function in the interval $s\ge s_0$, neglecting this systematic error would be reasonable if the $V+A$ spectral distribution showed no signs of DVs in the region $s>2.2$ GeV$^2$. This is, however, rather far from being the case, making the absence of an estimate for the residual systematic error associated with neglecting DV contributions problematic. One internally consistent way to test whether DV contributions are sufficiently small to be neglected for the $\hat w_3$ FESR is to demonstrate that they are already small for the singly pinched $\hat{w}_2$ FESR. Whether or not this is the case can be investigated by comparing the $\hat w_2$-weighted OPE and spectral integrals, in the same $s_0$ range, using parameters obtained from the no-DV fit to $\hat w_3$, Eq. (\[w00noDVsfitvals\]). The results of this test are shown in Fig. \[w00noDVscomp\] (right panel). The agreement between the OPE and spectral integrals is clearly not good, indicating the presence of significant DV contributions in the $\hat{w}_2$ FESR. This, together with the rapid deterioration of the $\hat w_3$ no-DV fit quality for $s_{\rm min}\le 1.95$ GeV$^2$, suggests that neglecting DV contributions to the $\hat w_3$ FESR is also dangerous. 
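The $p$-values used here, and those quoted for the fits of Eq. (\[gluon\]) above, are ordinary $\chi^2$ tail probabilities. For the Eq. (\[gluon\]) fits, which each have a single degree of freedom, the conversion can be checked in one line (the fits of this subsection have more degrees of freedom, which are not listed here):

```python
from scipy.stats import chi2

# chi^2 values of Eq. (gluon), each with one degree of freedom
for channel, chisq in [("V", 0.43), ("A", 3.4), ("V+A", 1.1)]:
    print(f"{channel}: p = {chi2.sf(chisq, df=1):.2f}")
# prints p ~ 0.51, 0.07 and 0.29, matching the 51%, 7% and 29% quoted above
```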
The hope underlying existing FESR analyses which ignore DV effects is that the double pinching of the weight $w_{00}=\hw_3$ is sufficient to make the residual DV contributions very small. While the arguments above make this possibility unlikely, it is still logically possible that, although DVs cannot be ignored in the singly-pinched $\hw_2$ FESR, they can be ignored in the doubly-pinched $\hw_3$ FESR. Let us therefore consider again the FOPT version of the $\hat{w}_3$ FESR in the $V+A$ channel, but now, rather than ignoring DVs, taking as external input the results for the DV parameters from the $s_{\rm min}=1.55$ GeV$^2$ FOPT fit of Tab. \[VAwtaupaper\] and fitting the remaining OPE parameters $\a_s(m_\tau^2)$, $C_{6,V+A}$, and $C_{8,V+A}$, to the $\hw_3$ weighted spectral integral in the $V+A$ channel in the presence of this estimate of the DV contributions. The results of this exercise, which are to be compared with Eq. (\[w00noDVsfitvals\]), are $$\begin{aligned} \label{w00externalDVinputfitvals} \alpha_s(m_\tau^2)&=&0.301\pm 0.006 \pm 0.009\ ,\\ C_{6,V+A}&=&-0.0127 \pm 0.0020 \pm 0.0066 \ {\rm GeV}^6\ ,\nonumber\\ C_{8,V+A}&=&0.0399 \pm 0.0040 \pm 0.021\ {\rm GeV}^8\ ,\nonumber\end{aligned}$$ where the first error is statistical and the second is that induced by the correlated uncertainties of the external input DV parameters. The inclusion of the DV contributions induces a significant decrease in the value of $\a_s(m_\tau^2)$ and significant changes in the results for $C_{6,V+A}$ and $C_{8,V+A}$ (including changes in sign for both) as compared to the no-DV fit results of Eq. (\[w00noDVsfitvals\]). The fit parameters are all changed in the direction of the results of the more detailed combined $V$ and $A$ fits discussed in Sec. \[fits\]. This exercise clearly demonstrates that the effects of DVs on the parameters obtained from the $V+A$ $\hat{w}_3$ FESR analysis are much larger than the nominal errors obtained on those parameters from the no-DV fit. This provides a further indication of the necessity of modeling DV effects in analyses attempting to extract $\a_s(m_\tau^2)$ from hadronic $\tau$-decay data. \[conclusion\] Conclusion ========================= In this article, we reanalyzed the recently revised ALEPH data [@ALEPH13] for non-strange hadronic $\tau$ decays, with as primary goal the extraction of the strong coupling $\a_s$ at the scale $m_\tau$. The rather low value of $m_\tau$ raises the question of to what extent the determination of a perturbative quantity like $\a_s$ in such an analysis might be “contaminated” by non-perturbative effects. Our specific aim was to take all known non-perturbative effects into account and arrive at a realistic estimate of the systematic error on the value of $\a_s$ extracted using hadronic $\tau$ data. This is important for three reasons. First, the value of $\a_s$ from $\tau$ decays, evolved to the $Z$ mass, has long been claimed to be one of the most precise values available. Second, because the $\tau$ mass is so much smaller than other scales at which the strong coupling has been determined, $\a_s(m_\tau^2)$ provides a powerful test of the QCD running of the strong coupling, with the corresponding $\b$ function known to four-loop order. Finally, there continues to be some tension between the values of the $n_f=5$ coupling $\alpha_s(M_Z^2)$ obtained from different sources. 
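The comparison with the determinations listed next is made at the $Z$ mass, which requires evolving $\a_s(m_\tau^2)$ across the charm and bottom thresholds. The sketch below is deliberately simplified: it uses only two-loop running with naive (continuous) matching at assumed threshold scales $m_c=1.27$ GeV and $m_b=4.18$ GeV, so it reproduces the four-loop values of Eq. (\[alphasALEPHOPALZ\]) only at the level of roughly $0.001$, and is meant purely as an illustration of the procedure.

```python
import math

def beta(a, nf):
    # two-loop QCD beta function: d a / d ln(mu^2) = -a^2 (b0 + b1 a)
    b0 = (33 - 2 * nf) / (12 * math.pi)
    b1 = (153 - 19 * nf) / (24 * math.pi**2)
    return -a**2 * (b0 + b1 * a)

def run(a, mu_from, mu_to, nf, steps=20000):
    # fourth-order Runge-Kutta integration in t = ln(mu^2)
    dt = (math.log(mu_to**2) - math.log(mu_from**2)) / steps
    for _ in range(steps):
        k1 = beta(a, nf)
        k2 = beta(a + 0.5 * dt * k1, nf)
        k3 = beta(a + 0.5 * dt * k2, nf)
        k4 = beta(a + dt * k3, nf)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

m_tau, m_c, m_b, M_Z = 1.77682, 1.27, 4.18, 91.1876  # GeV; threshold scales assumed
a = 0.303                        # alpha_s(m_tau^2), nf = 3 (FOPT average above)
a = run(a, m_tau, m_c, nf=3)     # run down to the charm threshold
a = run(a, m_c, m_b, nf=4)       # nf = 4 up to the bottom threshold
a = run(a, m_b, M_Z, nf=5)       # nf = 5 up to the Z mass
print(f"alpha_s(M_Z^2) ~ {a:.4f}")   # ~0.117, vs 0.1165 with four-loop running
```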
While lattice determinations involving analyses of small-size Wilson loops [@hpqcdalphas; @adelaidehpqcd], $c\bar{c}$ pseudoscalar correlators [@hpqcdcharmpsalphas], the relevant combination of ghost and gluon two-point functions [@sternbeckggalphas; @frenchgp], and employing the Schrödinger functional scheme [@pacscsalphas] yield values, $0.1183(8)$ [@hpqcdalphas], $0.1192(11)$ [@adelaidehpqcd], $0.1186(5)$ [@hpqcdcharmpsalphas], $0.1196(11)$ [@frenchgp], and $0.1205(20)$ [@pacscsalphas], compatible both amongst one another and with the central value of the global electroweak fit result, $\alpha_s(M_Z^2)=0.1196(30)$ [@globalewalphas], lower values have been obtained in a number of other analyses, e.g., $0.1174(12)$ from lattice analyses of $f_\pi /\Lambda_{QCD}$ [@fpilattalphas], $0.1166(12)$ from an analysis of the static quark energy [@staticValphas], $0.1118(17)$ from the recently revised JLQCD lattice determination from current-current two-point functions [@jlqcdalphas], and values in the range $0.1130-0.1160$ from analyses of DIS data and shape observables in $e^+e^-$ [@disjetsthrustetc].

We have employed our analysis method previously [@alphas1; @alphas2], using the OPAL data [@OPAL], but the revised ALEPH data have significantly smaller errors, and thus provide a more stringent test of our approach. The fact that at such low scales non-perturbative effects are not negligible has of course been long known, and has been taken into account in the analysis of hadronic $\tau$ decays through the inclusion of higher-dimension condensate terms in the OPE. However, the experimental data are provided in the form of spectral functions, i.e., as functions of $s=q^2$ with $q$ denoting momentum in Minkowski space. Such values of $q^2$, viewed as a complex variable, are outside the domain of validity of the OPE. While this is well known, it can also easily be inferred from the form of the vector spectral function in Fig. \[ALEPH-OPAL\], which clearly shows oscillations that cannot be represented by the OPE. These oscillations lead unavoidably to the conclusion that violations of quark-hadron duality are, in general, significant at the scales accessible through experimental hadronic $\tau$ decay data. It follows that in order to investigate the effect of duality violations on the extraction of $\a_s$ from $\tau$ decay data, they need to be taken into account. Unfortunately, a model is needed in order to parametrize the oscillations in the spectral functions, and this modeling necessitates making some assumptions on which to base the analysis. This is, however, true for any such analysis: the assumption that duality violations can be ignored in a given analysis amounts to assuming a model as well; in terms of the ansatz (\[ansatz\]) it corresponds to taking the parameters $\d_{V,A}$ to $\infty$. We have, instead, assumed that this ansatz (with finite $\d$) provides a reasonable model of the resonance features present in the spectral functions for values of $s$ in some region below $m_\tau^2$ in which perturbation theory is still meaningful.[^17]

As much as our aim is to find the most accurate value of $\a_s(m_\tau^2)$ possible given the data, an equally important goal was to test the validity of our approach, with the increased precision of the ALEPH data as compared to the OPAL data being particularly useful in this regard.
This increased precision is, moreover, found to produce unique fit minima in the [hrothgar]{} studies of the multi-dimensional fit parameter space, improving the situation found for the corresponding fits to the OPAL data, and confirming that the precision of the ALEPH data is more than good enough to support fits incorporating an explicit representation of DV contributions.

Despite the recent resurgence of interest in this problem, triggered by the completion of the five-loop calculation of the Adler function in Ref. [@PT], very few investigations have carried out a complete analysis starting from the data. In essence, only two methods have been proposed through which to investigate non-perturbative effects, with the first being the method based on Refs. [@BNP; @DP1992], which was employed by Refs. [@ALEPH13; @ALEPH; @ALEPH08; @OPAL], and the second being the method we employed in this article, applying and extending ideas proposed in earlier work [@MY08; @alphas1; @alphas2; @BBJ12]. In the absence of a detailed theoretical understanding of duality violations, it is important to test for the self-consistency of either analysis method using the data employed in the analysis. In Sec. \[ALEPH\] we demonstrated that the first method, used in Ref. [@ALEPH13], does not pass such tests. Indications supporting this conclusion have been published in earlier work, but now that the revised data are available, and in view of our critique in Sec. \[ALEPH\], we conclude that this method suffers from numerically significant systematic uncertainties not quantifiable within the analysis framework employed in Ref. [@ALEPH13], and hence must be discarded.

The second method, employed in this article, does a much better job in fully describing the data, as we have shown in great detail in Secs. \[fits\], \[results\] and \[ALEPH\] above. However, there are some signs that the limits of this method may also be in view. Fit qualities are typically larger than in the case of our analysis of the OPAL data [@alphas2], and a comparison of results based on ALEPH and OPAL data also shows some tension, even though errors are too large to say anything more conclusive. While these tensions may be caused by imperfections in the data (for instance slight discrepancies in the spectral function data visible in Fig. \[ALEPH-OPAL\]), it is by no means excluded that they point to shortcomings of the theory description as well.

We briefly reviewed, in Sec. \[theory\], why we consider the DV parametrization in Eq. (\[ansatz\]) a physically sensible one. However, it remains relevant to test this form more quantitatively using experimental data. In this regard, we would like to stress that the exercise involving the $x^N$ FESRs leading to the results of Eq. (\[OPEn\]) represents a highly non-trivial test of this type. This follows from the fact that DV contributions to the $x^N$ FESRs are generally not small, and oscillate with $s_0$. The $D=0$ OPE and DV contributions to the theory side of the $x^N$ FESR for each $N$ are, in this exercise, fixed by the results of the earlier fits involving the ansatz (\[ansatz\]), leaving only a $D=2N+2$ OPE contribution controlled by $C_{2N+2}$ to complete the theory side of the FESR. The different $x^N$ considered provide very different weightings on the interval from $s_0$ to $\infty$, and the different $s_0$ considered represent integration over different portions of the oscillations in the experimentally accessible region.
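To make the last point concrete, the sketch below evaluates weighted DV tails of the type entering the $x^N$ FESRs, using an exponentially damped oscillation of the form employed for the ansatz (\[ansatz\]) in Refs. [@alphas1; @alphas2], $\r^{\rm DV}(s)\propto e^{-\delta-\gamma s}\sin(\alpha+\beta s)$ (the form is quoted from those references, not from the present section). The parameter values below are purely illustrative, not fit results, and overall signs and normalizations are omitted; the point is only that the tails oscillate in $s_0$ and are weighted very differently for different $N$.

```python
import numpy as np
from scipy.integrate import quad

def rho_dv(s, delta, gamma, alpha, beta):
    # exponentially damped oscillation used as the DV ansatz; illustrative only
    return np.exp(-delta - gamma * s) * np.sin(alpha + beta * s)

def dv_tail(s0, N, pars, s_max=100.0):
    # weighted DV tail of the x^N FESR (normalization conventions omitted);
    # the integrand is exponentially damped, so s_max = 100 GeV^2 is effectively infinity
    val, _ = quad(lambda s: (s / s0)**N * rho_dv(s, *pars), s0, s_max, limit=400)
    return val

pars = (0.5, 0.6, -2.0, 4.0)   # (delta, gamma, alpha, beta): illustrative values
for s0 in (1.6, 2.0, 2.4, 2.8, 3.16):
    print(s0, [f"{dv_tail(s0, N, pars):+.3f}" for N in (1, 3, 5)])
```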
Therefore, a problem with the DV ansatz would be expected to show up as an inability to successfully fit, with the single parameter $C_{2N+2}$, the $s_0$-dependent difference between the experimental spectral integrals and the sum of the previously fixed $D=0$ OPE and DV theory integral contributions. In fact, as we have seen, a set of $C_{2N+2}$ exists which produces excellent matches to the experimental spectral integrals over a sizeable range of $s_0$ for all $N$ ($N=1, \cdots , 7$) required to generate the results, shown in Fig. \[w10w11theoryspecintcomp\], for the weights $w_{k\ell}$ employed in Ref. [@ALEPH13]. The fact that the form (\[ansatz\]) conforms to the qualitative features expected of the contribution representing the residual error of an asymptotic series, and the success of the detailed self-consistency tests just described, confirm that the ansatz (\[ansatz\]) provides a good representation of DV effects in the channels of interest. Possible residual inaccuracies in this representation should, in any case, not be turned into an argument for not including DVs at all, since that strategy would lead to the presence of unquantifiable systematic errors which use of our ansatz strongly indicates are unlikely to be small.

It is interesting to compare the values of $\a_s(m_\tau^2)$ from the various analyses. First, the half differences between our ALEPH- and OPAL-based values are 0.015 (FOPT) and 0.019 (CIPT), while the average (between FOPT and CIPT) fit error is about 0.012 for fits to ALEPH data (cf. Eq. (\[alphasfinal\])), and about double that for fits to OPAL data. Finally, the difference between the FOPT and CIPT values is 0.014 for the ALEPH-based values, and 0.022 for the OPAL-based values. These differences and errors are all comparable in size, and it appears reasonable to conclude that they reflect both the data and the theory limitations on the accuracy with which $\a_s(m_\tau^2)$ can be obtained from analyses of hadronic $\tau$ decay, at least at present. We do not believe that it is meaningful to condense these results in the form of one central value and one aggregate error for $\a_s(m_\tau^2)$. Clearly, our ALEPH-based values are not in agreement with the value obtained in Ref. [@ALEPH13], despite using the same data. Averaging the values of Eq. (\[alphasfinal\]) and adding half the difference between the two values as an error estimate for the CIPT/FOPT perturbative uncertainty, we would find a value $\a_s(m_\tau^2)=0.303\pm 0.014$, to be compared with the value $0.332\pm 0.012$ quoted in Ref. [@ALEPH13]. It should be emphasized again that the error in the latter value does not include a component accounting for the systematic problems identified in Sec. \[ALEPH\].

One may ask whether one can do better. First, it would be interesting to apply our analysis method to data with better statistics, and such data are in principle available from the BaBar and Belle experiments. Such data would allow us to scrutinize our theoretical understanding in more detail and would, as can be seen from Fig. \[ALEPH-OPAL\], be especially useful in the upper part of the spectrum. However, to date the analyses required to produce inclusive hadronic spectral functions from these data are not complete, and thus such an investigation must be postponed until they become available. Second, it would be nice to develop a deeper insight into the theory itself, or, lacking that, to develop new tools for testing any given model for duality violations.
A recent idea in this direction based on functional analysis can be found in Ref. [@CGP14]. Finally, we note that the difference between the results for $\alpha_s(m_\tau^2)$ obtained using the FOPT and CIPT resummation schemes represents, at present, an important limitation on the accuracy with which $\alpha_s$ can be obtained at a scale as low as $m_\tau^2$; further progress will require an improved understanding of this issue. [**Acknowledgments**]{} We would like to thank Matthias Jamin for useful discussions, and Andy Mahdavi for generous help with [hrothgar]{}. MG thanks IFAE and the Department of Physics at the UAB, and KM and SP thank the Department of Physics and Astronomy at SFSU for hospitality. The work of DB was supported by the Gottfried Wilhelm Leibniz programme of the Deutsche Forschungsgemeinschaft (DFG) and the Alexander von Humbodlt Foundation. MG is supported in part by the US Department of Energy under contract DE-FG02-92ER40711, and JO is supported by the US Department of Energy under contract DE-FG02-95ER40896. SP is supported by CICYTFEDER-FPA2011-25948, 2014 SGR 1450, and the Spanish Consolider-Ingenio 2010 Program CPAN (CSD2007-00042). KM is supported by a grant from the Natural Sciences and Engineering Research Council of Canada. [99]{} M. Davier, A. Hoecker, B. Malaescu, C. Z.  Yuan and Z. Zhang, Eur. Phys. J. C [**74**]{}, 2803 (2014) \[arXiv:1312.1501 \[hep-ex\]\]. D. R. Boito, O. Catà, M. Golterman, M. Jamin, K. Maltman, J. Osborne and S. Peris, Nucl. Phys. Proc. Suppl.  [**218**]{}, 104 (2011) \[arXiv:1011.4426 \[hep-ph\]\]. R. Barate [*et al.*]{} \[ALEPH Collaboration\], Eur. Phys. J.  C [**4**]{}, 409 (1998); S. Schael [*et al.*]{} \[ALEPH Collaboration\], Phys. Rept.  [**421**]{}, 191 (2005) \[arXiv:hep-ex/0506072\]; , Eur. Phys. J. C [**56**]{}, 305 (2008) \[arXiv:0803.0979 \[hep-ph\]\]. R. Shankar, Phys. Rev. D [**15**]{}, 755 (1977); R. G. Moorhouse, M. R. Pennington and G. G. Ross, Nucl. Phys. B [**124**]{}, 285 (1977); K. G. Chetyrkin and N. V. Krasnikov, Nucl. Phys. B [**119**]{}, 174 (1977); K. G. Chetyrkin, N. V. Krasnikov and A. N. Tavkhelidze, Phys. Lett. B [**76**]{}, 83 (1978); N. V. Krasnikov, A. A. Pivovarov and N. N. Tavkhelidze, Z. Phys. C [**19**]{}, 301 (1983); E. G. Floratos, S. Narison and E. de Rafael, Nucl. Phys. B [**155**]{}, 115 (1979); R. A. Bertlmann, G. Launer and E. de Rafael, Nucl. Phys. B [**250**]{}, 61 (1985). E. Braaten, Phys. Rev. Lett.  [**60**]{}, 1606 (1988). , [S. Narison]{}, and [A. Pich]{}, Nucl. Phys. B [**373**]{} (1992) 581. K. Ackerstaff [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J.  C [**7**]{} (1999) 571 \[arXiv:hep-ex/9808019\]. F. Le Diberder, A. Pich, Phys. Lett.  B [**289**]{}, 165 (1992). K. Maltman, T. Yavin, Phys. Rev.  C [**78**]{}, 094020 (2008) \[arXiv:0807.0650 \[hep-ph\]\]. O. Catà, M. Golterman, S. Peris, Phys. Rev.  D [**79**]{}, 053002 (2009) \[arXiv:0812.2285 \[hep-ph\]\]. D. Boito, O. Catà, M. Golterman, M. Jamin, K. Maltman, J. Osborne and S. Peris, Phys. Rev. D [**84**]{}, 113006 (2011) \[arXiv:1110.1127 \[hep-ph\]\]. D. Boito, M. Golterman, M. Jamin, A. Mahdavi, K. Maltman, J. Osborne and S. Peris, Phys. Rev. D [**85**]{}, 093015 (2012) \[arXiv:1203.3146 \[hep-ph\]\]. P. A. Baikov, K. G. Chetyrkin and J. H. Kühn, Phys. Rev. Lett.  [**101**]{} (2008) 012002 \[arXiv:0801.1821 \[hep-ph\]\]. M. Beneke and M. Jamin, JHEP [**0809**]{}, 044 (2008) \[arXiv:0806.3156 \[hep-ph\]\]. S. Menke, arXiv:0904.1796 \[hep-ph\]. I. Caprini and J. Fischer, Eur. Phys. J.  
C [**64**]{}, 35 (2009) \[arXiv:0906.5211 \[hep-ph\]\]. S. Descotes-Genon, B. Malaescu, arXiv:1002.2968 \[hep-ph\]. G. Abbas, B. Ananthanarayan, I. Caprini and J. Fischer, Phys. Rev. D [**87**]{}, 014008 (2013) \[arXiv:1211.4316 \[hep-ph\]\]; I. Caprini, Mod. Phys. Lett. A [**28**]{}, 1360003 (2013) \[arXiv:1306.0985 \[hep-ph\]\]; G. Abbas, B. Ananthanarayan and I. Caprini, Mod. Phys. Lett. A [**28**]{}, 1360004 (2013) \[arXiv:1306.1095 \[hep-ph\]\]. A. A. Pivovarov, Z. Phys.  C [**53**]{}, 461 (1992) \[Sov. J. Nucl. Phys.  [**54**]{}, 676 (1991)\] \[Yad. Fiz.  [**54**]{} (1991) 1114\] \[arXiv:hep-ph/0302003\]; F. Le Diberder and A. Pich, Phys. Lett.  B [**286**]{}, 147 (1992). M. Jamin, JHEP [**0509**]{}, 058 (2005) \[hep-ph/0509001\]. A. Mahdavi, H. Hoekstra, A. Babul, J. Sievers, S. T. Myers and J. P. Henry, Astrophys. J.  [**664**]{}, 162 (2007) \[astro-ph/0703372\]. S. Weinberg, Phys. Rev. Lett.  [**18**]{}, 507 (1967). T. Das, G. S. Guralnik, V. S. Mathur, F. E. Low, J. E. Young, Phys. Rev. Lett.  [**18**]{}, 759 (1967). Y.-S. Tsai, Phys. Rev. D [**4**]{}, 2821 (1971). E. C. Poggio, H. R. Quinn, S. Weinberg, Phys. Rev.  D [**13**]{}, 1958 (1976). O. Catà, M. Golterman, S. Peris, Phys. Rev.  D [**77**]{}, 093006 (2008) \[arXiv:0803.0246 \[hep-ph\]\]. O. Catà, M. Golterman, S. Peris, JHEP [**0508**]{}, 076 (2005) \[hep-ph/0506004\]. B. Blok, M. A. Shifman and D. X. Zhang, Phys. Rev.  D [**57**]{}, 2691 (1998) \[Erratum-ibid.  D [**59**]{}, 019901 (1999)\] \[arXiv:hep-ph/9709333\]; I. I. Y. Bigi, M. A. Shifman, N. Uraltsev, A. I. Vainshtein, Phys. Rev.  D [**59**]{}, 054011 (1999) \[hep-ph/9805241\]; M. A. Shifman, \[hep-ph/0009131\]; M. Golterman, S. Peris, B. Phily, E. de Rafael, JHEP [**0201**]{}, 024 (2002) \[hep-ph/0112042\]. M. Jamin, JHEP [**1109**]{}, 141 (2011) \[arXiv:1103.2718 \[hep-ph\]\]. M. Beneke, D. Boito and M. Jamin, JHEP [**1301**]{}, 125 (2013) \[arXiv:1210.8038 \[hep-ph\]\]. K. Maltman, Phys. Lett.  B [**440**]{}, 367 (1998) \[hep-ph/9901239\]. C. A. Dominguez and K. Schilcher, Phys. Lett.  B [**448**]{}, 93 (1999) \[hep-ph/9811261\]. . For the results of this fit, see\ [http://www.slac.stanford.edu/xorg/hfag/tau/hfag-data/tau/2009/\ TauFit\_Mar2011/BB\_PiKUniv/BB\_PiKUniv\_summary0.pdf]{} . I. S. Towner and J. C. Hardy, Rep. Prog. Phys. [**73**]{}, 046301 (2010). J. Erler, Rev. Mex. Fis. [**50**]{}, 200 (2004). J. Beringer [*et al*]{}. (Particle Data Group), Phys. Rev. D86, 010001 (2012) and 2013 partial update for the 2014 edition; [http://pdg.lbl.gov]{} . K. G. Chetyrkin, B. A. Kniehl and M. Steinhauser, Phys.Rev. Lett. [**79**]{}, 2184 (1997) \[hep-ph/9706430\]. M. Jamin, Phys. Lett. B [**538**]{}, 71 (2002) \[hep-ph/0201174\]. C. T. H. Davies [*et al.*]{}, Phys. Rev. D [**78**]{}, 114507 (2008) \[arXiv:0807.1687 \[hep-lat\]\]. K. Maltman, D. Leinweber, P. Moran and A. Sternbeck, Phys. Rev. D [**78**]{}, 114504 (2008) \[arXiv:0807.2020 \[hep-lat\]\]. B. Chakraborty [*et al.*]{}, arXiv:1408.4169 \[hep-lat\]; C. McNeile [*et al.*]{}, Phys. Rev. D [**82**]{}, 034512 \[arXiv:1004.4285 \[hep-lat\]\]. A. Sternbeck [*et al.*]{}, PoS [**LATTICE2007**]{}, 256 (2007) \[arXiv:0710.2965 \[hep-lat\]\]; PoS [**LATTICE2009**]{}, 227 (2010) \[arXiv:1003.1585 \[hep-lat\]; A. Sternbeck, K. Maltman, M. Müller-Preussker and L. von Smekal, PoS [**LATTICE2012**]{}, 243 (2012) \[arXiv:1212.2039 \[hep-lat\]\]. B. Blossier [*et al.*]{}, Phys. Rev. D  [**89**]{}, 014507 (2014) \[arXiv:1310.3763 \[hep-ph\]\]; Phys. Rev. Lett. [**108**]{}, 262002 (2012) \[arXiv:1201.5770 \[hep-ph\]\]. S. 
Aoki [*et al.*]{}, JHEP [**0910**]{}, 053 (2009) \[arXiv:0906.3906 \[hep-lat\]\]. M. Baak [*et al.*]{} \[Gfitter Group Collaboration\], Eur. Phys. J. C [**74**]{}, 3046 (2014) \[arXiv:1407.3792 \[hep-ph\]\]. J.-L. Kneur and A. Neveu, Phys. Rev. D [**88**]{}, 074025 (2013) \[arXiv:1305.6910 \[hep-ph\]\]. A. Bazavov, N. Brambilla, X. Garcia i Tormo, P. Petreczky, J. Soto and A. Vairo, arXiv:1407.8437 \[hep-ph\]. E. Shintani [*et al.*]{}, Phys. Rev. D [**82**]{}, 074505 (2010) \[arXiv:1002.0371 \[hep-lat\]\]; (Erratum, Phys. Rev. [bf D89]{}, 099903 (2014)). For an extensive list and discussion of recent determinations from DIS, jets and shape observables, see Section 1.2 of S. Moch, S. Weinzierl, S. Alekhin, J. Blumlein, L. de la Cruz, S. Dittmaier, M. Dowling and J. Erler [*et al.*]{}, arXiv:1405.4781 \[hep-ph\]. I. Caprini, M. Golterman and S. Peris, Phys. Rev. D [**90**]{}, 033008 (2014) \[arXiv:1407.2577 \[hep-ph\]\]. [^1]: The updated and corrected data can be found at http://aleph.web.lal.in2p3.fr/tau/specfun13.html. [^2]: See also Refs. [@russians; @MJ11]. [^3]: In Ref. [@alphas1] we used $\k_{V/A}\equiv e^{-\d_{V/A}}$; in Ref. [@alphas2] we switched to $\d_{V/A}$. [^4]: Even though the new wider binning near the kinematic endpoint makes it somewhat harder to see such differences in this region. [^5]: If only one weight is included in the sum, $\cq^2$ reverts to the standard $\chi^2$. [^6]: It is important to distinguish fits to the spectral function itself from fits to [*the moments of*]{} the spectral function; they are quite different. Even in the case of the $\hat{w}_0$ moment, the integral $I^{(\hw_0)}_{V;\rm ex}(s_0)$ contains all the data from threshold to $s_0$ and always includes, in particular, the $\rho$ peak. On the other hand, a fit of the DV  (\[ansatz\]) to the vector spectrum would probably only include data for $s_0$ between $s_{\rm min}$ and $m_{\tau}^2$, and, since one needs to choose $s_{\rm min}\gg m_\r^2$, the $\rho$ peak is clearly excluded. The change in $I^{(\hw_0)}_{V/A;\rm ex}(s_0)$ as $s_0$ is increased from the upper edge of bin $k$ to the upper edge of bin $k+1$ is, of course, equal to the average value of the relevant spectral function, $\rho_{V/A}$, in bin $k+1$. As such, in fits which employ all possible $s_0\ge s_{\rm min}$, the fact that the $s_0$-dependence of $I^{(\hw_0)}_{V/A;\rm ex}(s_0)$ is one of the key elements entering the fit means that spectral function values in the interval $s_{\rm min}\le s\le m_\tau^2$ are part of the input, but clearly not the [*only*]{} input.\ Let us be even more specific. First, as already noted, even for single-weight $\hat{w}_0$ fits, the integral of the experimental spectral function over the region from threshold to $s_{\rm min}$ enters the $\hat{w}_0$ moment for all $s_0$. While this is a region in which the OPE and the DV  are not valid, this additional input turns out to be crucial; fits for both $\alpha_s(m_\tau^2)$ and the DV parameters are not possible without including it. Second, as seen in our previous analysis employing the OPAL data, fit results are not changed if, rather than using integrated data for all available $s_0>s_{\rm min}$, one instead employs a winnowed set thereof in the analysis. For such a winnowed set, it is only the sums of the experimental spectral function values over the bins lying between adjacent winnowed $s_0$, and not the full set of spectral function values in all bins in those intervals, that determine the $s_0$ variation entering the fit. 
Finally, all of the multi-weight fits we employ involve weights, $w(x=s/s_0)$, which are themselves $s_0$ dependent. This means that the $s_0$ dependence of the DV part of the corresponding theory moments results not just from the values of $\rho^{\rm DV}(s)$ in the interval $s_0\le s\le m_\tau^2$ (where experimental constraints exist), but also involves $s_0$- and $w(x)$-dependent weighted integrals of the DV ansatz form in the interval from $m_\tau^2$ to $\infty$. It would thus be incorrect to characterize the moment-based fit analysis we employ as in any way representing simply a fit to the experimental spectral functions.

[^7]: The $D>2$ OPE coefficients are also generally different between the $V$ and $A$ channels [@BNP]. In the case of $C_4$ (which, due to the absence of terms linear in $x$, does not enter for the weights we employ, in the approximation of dropping contributions higher-than-leading order in $\alpha_s$) the full gluon condensate and leading-order quark condensate contributions are the same for the $V$ and $A$ channels. For polynomial weights with a term linear in $x$, $D=4$ contributions would be present, and one could impose the resulting near-equality of $C_4$ in the $V$ and $A$ channels. This was done in the version of the analysis performed by OPAL but not in the analyses of the ALEPH collaboration, including Ref. [@ALEPH13]. The fact that the fitted values of the gluon condensate obtained from independent $V$ and $A$ channel fits in Ref. [@ALEPH13] are not close to agreeing within errors is, in fact, a clear sign of the unphysical nature of these fits, see Sec. \[ALEPH\] below.

[^8]: Note that the vertical axis covers the interval $\d_V\in[2,5]$, to be compared with the significantly larger interval $\d_V\in[-2,5]$ in Fig. 2 of Ref. [@alphas2].

[^9]: Which, given the fact that $\cq^2$ is not equal to $\chi^2$ for these fits, cannot easily be translated into $p$-values.

[^10]: We recall that even though correlations between different spectral moments are not included in the fit quality $\cq^2$, those between bins within one spectral moment are included, making these fits strongly correlated.

[^11]: The contribution from OPE terms to the spectral functions $\r_{V,A}$ is suppressed by an extra power of $\a_s$, and small enough to be negligible [@CGP; @alphas1].

[^12]: We neglected the smaller errors on $\a_s$ and $\langle\bar qq\rangle$.

[^13]: Which appear in subleading terms in $\a_s$ at each order in the OPE.

[^14]: For a detailed discussion of this point, see Ref. [@alphas1].

[^15]: We point out that the inadequacy of the standard analysis framework was already demonstrated in Refs. [@MY08; @CGP; @alphas1; @alphas2], but it appears important to re-emphasize this point in view of the continued use of this framework in the literature, in particular in the updated analysis of Ref. [@ALEPH13].

[^16]: This problem already existed in earlier ALEPH analyses [@ALEPH; @ALEPH08], but in principle it might have been due to the problem with the data itself. Note that OPAL enforced equality of the gluon condensate between various channels, and were able to obtain reasonable fits as judged by the $\chi^2$, possibly because of the larger data errors.

[^17]: Up to the order considered [@PT].
--- author: - 'G. Cassam-Chena[ï]{}$^{\&}$' - 'A. Decourchelle' - 'J. Ballet' - 'D. C. Ellison' bibliography: - 'biblio\_article3.bib' title: Morphology of synchrotron emission in young supernova remnants --- Introduction\[sect-intro\] ========================== Shocks in supernova remnants (SNRs) are believed to produce the majority of the Galactic cosmic-rays (CRs) at least up to the “knee” ($\sim 3 \times 10^{15}$ eV). The particle acceleration mechanism most likely responsible for this is known as diffusive shock acceleration (DSA) [e.g., @dr83; @ble87]. This mechanism may transfer a large fraction of the ram kinetic energy (up to $50\%$) into relativistic particles and remove it from the thermal plasma [see, for example, @joe91]. Convincing observational support for the acceleration of particles in shell-type SNRs comes from their nonthermal radio and X-ray emissions due to synchrotron radiation from relativistic GeV and at least TeV electrons, respectively. In radio and X-rays, synchrotron-dominated SNRs display various morphologies: for instance, the synchrotron emission dominates in two bright limbs in SN 1006 [e.g., @rob04] whereas it is distorted and complex in RX J1713.7–3946 [e.g., @cad04b]. The detection and imaging with the *HESS* telescopes of TeV $\gamma$-rays in RX J1713.7–3946 provides unambiguous evidence for particle acceleration to very high energies. The $\gamma$-ray morphology in this remnant is similar to that seen in X-rays [@aha04]. Recent works based on *Chandra* [@vil03a for Cas A] and *XMM-Newton* [@cad04a for Kepler’s SNR] observations have demonstrated that X-ray synchrotron emission is also present in ejecta-dominated SNRs and largely contributes to the continuum emission at the forward shock. This X-ray emission arises from sharp filaments encircling the SNR’s outer boundary. The observed width of these filaments is a few arcseconds, and has been used to constrain the magnetic field intensity just behind the shock[^1] [@vil03a; @bek03; @bev04a; @vob05; @ba05]. A number of recent hydrodynamical models, including particle acceleration and photon emission, have been presented to explain various features of these observations. @re98 has described the morphology and spectrum of the synchrotron X-ray emission from SNRs in the Sedov evolutionary phase. Similar work based on numerical simulations was done by @vaa04 who take into account the diffusion of particles. CRs are treated as test-particles in these studies. Here, we expand on the work of @re98 by considering young (ejecta-dominated) SNRs. We investigate the synchrotron emission morphology, both in radio and X-rays, as well as how it can be modified by efficient particle acceleration. Our results show that the radio and X-ray profiles are very different due to the effects of the magnetic field evolution and synchrotron losses in the interaction region between the contact discontinuity and the forward shock. For typical parameters, the radio emission peaks at the contact discontinuity while the X-ray emission forms sheet-like structures at the forward shock. 
Hydrodynamics and particle acceleration\[sect-hydro\]
=====================================================

The hydrodynamic evolution of young supernova remnants, including the backreaction from accelerated particles, can be described by self-similar solutions if the initial density profiles in the ejected material (ejecta) and in the ambient medium have power-law distributions [@ch82; @ch83], and if the acceleration efficiency (*i.e.* the fraction of total ram kinetic energy going into suprathermal particles) is independent of time. Here, we use the self-similar model of @ch83 which considers a thermal gas ($\gamma=5/3$) and the cosmic-ray fluid ($\gamma=4/3$), with the boundary conditions calculated from the non-linear diffusive shock acceleration (DSA) model of @bee99 and @elb00 as described in @dee00. This acceleration model is an approximate, semi-analytical model that determines the shock modification and particle spectrum from thermal to relativistic energies in the plane-wave, steady state approximation as a function of an arbitrary injection parameter, $\eta_{\mathrm{inj}}$ (*i.e.* the fraction of total particles which end up with suprathermal energies). The validity of the self-similar solutions has been discussed by @dee00 and direct comparisons between this self-similar model and the more general CR-hydro model of @eld04 showed good correspondence for a range of input conditions.

The hydrodynamic evolution provides the shock characteristics necessary to calculate the particle spectrum at the forward shock[^2], at any time. Once a particle spectrum has been produced at the shock, it will evolve downstream because of radiative and adiabatic expansion losses. We assume that the accelerated particles remain confined to the fluid element in which they were produced, so adiabatic losses are determined directly from the fluid element expansion. The basic power law spectrum produced by DSA, before losses are taken into account, is modified at the highest energies with an exponential cutoff, $\exp(-p/p_{\mathrm{max}})$, where $p_{\mathrm{max}}$ is determined by matching either the acceleration time to the shock age or the upstream diffusive length to some fraction of the shock radius. In our simulation, the electron-to-proton density ratio at relativistic energies, $(e/p)_{\mathrm{rel}}$, is set equal to $0.01$ [see @elb00].

Unless explicitly stated, our numerical examples are given for the following supernova parameters: $M_{\mathrm{ej}} = 5 \; \mathrm{M}_{\odot}$ for the ejected mass, $E_{51} = 1$ where $E_{51}$ is the kinetic energy of the ejecta in units of $10^{51}$ ergs and $n=9$, where $n$ is the index of the initial power-law density profile in the ejecta ($\rho \propto r^{-n}$). In our simulations, the SNR age is $t_{f} = 400$ years and the shock velocity at the forward shock is $v_{s} \simeq 5 \times 10^{3} \: \mathrm{km/s}$. For the ambient medium parameters, we take a magnetic field $B_0 = 10 \: \mu\mathrm{G}$, a density $n_0 = 0.1 \; \mathrm{cm}^{-3}$, an ambient gas pressure $p_{\mathrm{g},0}/k = 2300 \; \mathrm{K} \: \mathrm{cm}^{-3}$ and $s=0$, where $s$ is the index of its initial power-law density profile ($\rho \propto r^{-s}$). The case $s=0$ corresponds to a uniform interstellar medium ($s=2$ describes a stellar wind). In the next section, we discuss the importance of the magnetic field for the synchrotron emission and particle acceleration. We do not, however, explicitly include the dynamical influence of the magnetic field on the hydrodynamics.
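For orientation, the ambient-medium parameters above fix the upstream characteristic speeds, and hence the sonic and Alfvénic Mach numbers that enter the DSA calculation, while the self-similar scaling $R_s \propto t^{(n-3)/(n-s)}$ of @ch82 relates the quoted age and shock speed to a forward-shock radius. The short sketch below assumes a mean molecular weight $\mu \simeq 1.4$ for the ambient gas, which is not specified in the text, and is meant only as a rough consistency check of the adopted numbers.

```python
import math

# model parameters quoted above
v_s = 5.0e8               # forward-shock speed [cm/s]
t_f = 400 * 3.156e7       # SNR age [s]
B0 = 10e-6                # ambient magnetic field [G]
n0 = 0.1                  # ambient number density [cm^-3]
p_over_k = 2300.0         # ambient gas pressure / k_B [K cm^-3]
n, s = 9, 0               # ejecta and ambient density indices

# assumed (not specified in the text): mean molecular weight of the ambient gas
mu, m_H, k_B = 1.4, 1.6726e-24, 1.3807e-16
rho0 = mu * m_H * n0

c_s = math.sqrt(5.0 / 3.0 * p_over_k * k_B / rho0)   # upstream sound speed
v_A = B0 / math.sqrt(4.0 * math.pi * rho0)           # upstream Alfven speed
print(f"M_sonic ~ {v_s / c_s:.0f},  M_Alfven ~ {v_s / v_A:.0f}")   # ~330 and ~85

# forward-shock radius from the self-similar expansion parameter m = (n-3)/(n-s)
m = (n - 3) / (n - s)
R_s = v_s * t_f / m
print(f"R_s ~ {R_s / 3.086e18:.1f} pc")   # ~3 pc
```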
Results\[sect-results\] ======================= Magnetic field\[subsect-MF\] ---------------------------- To track the synchrotron losses, we are interested in the temporal evolution of the magnetic field behind the shock. We assume the magnetic field to be simply compressed at the shock and passively carried by the flow, frozen in the plasma, so that it evolves conserving flux. In this simple 1-D approach, we do not consider any production of the SNR magnetic field, for instance, by hydrodynamical instabilities which is an additional effect. As for the magnetic field ahead of the forward shock, it is assumed to be isotropic and fully turbulent. Appendix \[app-MFevol\] (see the on-line version) shows how to compute the magnetic field profile for self-similar solutions in both test-particle and nonlinear particle acceleration cases. ### Test-Particle limit\[Test-particle\] We first discuss the behavior of the normal and tangential components of the magnetic field in the test-particle case where the backreaction of the accelerated particles is neglected. When the SNR evolves in an ambient medium which is uniform in density and magnetic field, the expansion and flux freezing generally cause the tangential component of the magnetic field to increase at the contact discontinuity whereas the normal component falls to zero (Fig. \[fig-B-profile-TP-n9s0q0\]). As a result, the magnetic field profile is dominated by the tangential component. ![Radial profile of the normal ($B_r$) and tangential ($B_t$) components of the magnetic field in a test-particle self-similar model. Each component is normalized to the forward shock.[]{data-label="fig-B-profile-TP-n9s0q0"}](2853_f1.eps){width="9cm"} One has often invoked hydrodynamic instabilities to explain the magnetic field increase at the interface between the shocked ejecta and the shocked ambient medium [@jun95]. The numerical simulations of @jun96 have shown that the magnetic field could be amplified by a factor 60 by Rayleigh-Taylor and Kelvin-Helmholtz instabilities. Here, we note that simple advection of the magnetic field already predicts amplification by a factor 5 (Table \[Tab-sigmaB\] top, $n=9$). We note that, if the SNR evolves in a wind with a decreasing initial density profile, advection goes the other way (diluting the magnetic field instead of amplifying it). But when both the ambient density and magnetic field decrease with radius, as would be the case for a pre-supernova stellar wind, the magnetic field is larger close to the contact discontinuity than at the forward shock (by a factor of $\sim 1000$ in some cases). This is because the dilution of the advected magnetic field is negligible compared to the fact that the ambient magnetic field was much larger at early times. ### Nonlinear Particle Acceleration ![Magnetic field radial profile for different values of the injection efficiency, $\eta_{\mathrm{inj}}$, when the hydrodynamics is coupled with the non-linear DSA model. The width of the shocked region is smaller and smaller as the feedback of the accelerated particles on the SNR dynamics increases. []{data-label="fig-B-profile-NL"}](2853_f2.eps){width="9cm"} We now consider the behavior of the normal and tangential components of the magnetic field in the nonlinear case where the backreaction of the accelerated particles on the shock is taken into account. 
In the ideal non-linear case, where the acceleration is instantaneous, the magnetic field diverges at the contact discontinuity because of its tangential component, whatever the injection efficiency is, as in the test-particle case. However, the contrast between the magnetic field in a given fluid element and the one just behind the shock will always be smaller than in the test-particle case (see Table \[Tab-sigmaB\]). Figure \[fig-B-profile-NL\] shows the profile of the total downstream magnetic field for different values of the injection efficiency. Table \[Tab-rtot\_etainj\] shows the associated compression ratio and immediate post-shock magnetic field.

  $\eta_{\mathrm{inj}}$         $10^{-3}$   $2 \times 10^{-4}$   $10^{-4}$   $10^{-5}$
  ----------------------------- ----------- -------------------- ----------- -----------
  $r_{\mathrm{tot}}$            8.5         7.5                  5.9         4.1
  $B_{s}$ (${\mu}\mathrm{G}$)   69          61                   49          34

  : Compression ratio, $r_{\mathrm{tot}}$, and downstream magnetic field, $B_{s}$, at the forward shock obtained for different injection efficiencies, $\eta_{\mathrm{inj}}$. The magnetic field compression ratio is given by $r_{\mathrm{B}} \equiv B_{s}/B_{0} = \sqrt{1/3+2 \: r_{\mathrm{tot}}^{2}/3}$ and $B_{0}= 10 \: {\mu}\mathrm{G}$ here.[]{data-label="Tab-rtot_etainj"}

Synchrotron emission\[subsect-syn-emis\]
----------------------------------------

![Radio (*top panel*) and X-ray (*bottom panel*) synchrotron volume emissivity, $\epsilon_{\nu}$, radial profile for different injection efficiencies. []{data-label="fig-synch-etainj"}](2853_f4.eps){width="9cm"}

Once the magnetic field structure and the particle spectrum (attached to a fluid element) modified by the radiative and adiabatic expansion losses as computed in @re98 are known, we compute the synchrotron emission [@ryl79], averaged over the pitch-angle, in any energy band[^3]. Figure \[fig-synch-etainj\] shows the radial profiles of the synchrotron emission in the radio (top panel) and X-ray (bottom panel) domains for different injection efficiencies, $\eta_{\mathrm{inj}}$. An increase in the injection efficiency not only provides a larger number of accelerated electrons, but also a larger compression of the downstream magnetic field (see Table \[Tab-rtot\_etainj\]) and a narrower interaction region. These effects combine to produce enhanced synchrotron emission as the injection increases. The radio synchrotron emission is produced by GeV electrons which are not affected by radiative losses. Consequently, the radio synchrotron emission critically depends on the final magnetic field profile (Fig. \[fig-B-profile-NL\]) and, therefore, peaks at the contact discontinuity. In contrast, the X-ray synchrotron emission is produced by the highest momentum electrons ($\sim 10^{3-5} \: m_{\mathrm{p}} \: c$) which, depending on the downstream field strength, may suffer radiative losses. The high energy electrons that have been accelerated at the earliest time have suffered strong synchrotron losses as they were advected behind the shock. Because of this, they are not numerous enough at the end to radiate in the X-ray regime despite a strong magnetic field. As a result, the X-ray synchrotron emission rapidly decreases behind the shock. The X-ray profile becomes sharper when the injection efficiency increases because it provides larger compression of the downstream magnetic field and hence stronger synchrotron losses.
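As a quick consistency check of Table \[Tab-rtot\_etainj\], the post-shock field follows directly from the compression formula quoted in its caption; a minimal sketch (assuming only `numpy`) is:

```python
import numpy as np

B0 = 10.0                                  # upstream field, in muG
r_tot = np.array([8.5, 7.5, 5.9, 4.1])     # compression ratios from Table [Tab-rtot_etainj]

# Isotropic upstream field: r_B = B_s/B_0 = sqrt(1/3 + 2 r_tot^2 / 3)
B_s = B0 * np.sqrt(1.0 / 3.0 + 2.0 * r_tot**2 / 3.0)
print(B_s)   # ~ [69.6, 61.5, 48.5, 34.0] muG; the small offsets from the quoted 69, 61, 49, 34 muG
             # come from the rounding of r_tot above
```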
Figure \[fig-synchproj-etainj\] shows the synchrotron emission after integration along the line-of-sight. The radial profile of the radio emission (top panel) shows a peak at the contact discontinuity. The radial profile of the X-ray projected synchrotron emission (bottom panel) shows bright rims just behind the forward shock whose width decreases as the injection efficiency increases. Discussion and Conclusion\[sect-concl\] ======================================= We have computed the radio and X-ray synchrotron emission in young ejecta-dominated SNRs. This has been done using a one dimensional, self-similar hydrodynamical calculation coupled with a non-linear diffusive shock acceleration model, and taking into account the adiabatic and radiative losses of the electron spectrum during its advection in the remnant. We show that the morphology of the synchrotron emission in young ejecta-dominated SNRs is very different in radio and X-ray. This is the result of the increased magnetic field toward the contact discontinuity, to which only low energy electrons that emit radio are sensitive, while the high energy electrons emitting X-rays experience strong radiative losses and are mostly dependent on the post-shock magnetic field. Briefly, the radio synchrotron emission increases as one moves from the forward shock toward the contact discontinuity due to a compression of the magnetic field (particularly its tangential component), assuming both uniform ambient density and upstream magnetic field. Such a compression naturally results from the dynamical evolution of the SNR. In contrast, because of the radiative losses, the X-ray synchrotron emission decreases behind the forward shock and forms sheet-like structures after line-of-sight projection. Their widths decrease as the acceleration becomes more efficient. The morphology of the radio synchrotron emission obtained for the young ejecta-dominated stage of SNRs will differ from that of SNRs in the Sedov phase (but not in X-ray). Indeed, @re98 has shown that both the normal and tangential components of the magnetic field decrease behind the forward shock in the Sedov phase and, as a result, we expect the radio synchrotron emission to decrease behind the shock (however, less rapidly than the X-ray synchrotron emission since the radio electrons do not experience radiative losses). Our model qualitatively reproduces the main features of the radio and X-ray observations of emission in young ejecta-dominated SNRs (e.g., Tycho and Kepler), *i.e.* bright radio synchrotron emission at the interface between the shocked ejecta and ambient medium, and a narrow filament of X-ray emission at the forward shock. However, this model is unable to reproduce the thin radio filaments observed at the forward shock in some SNRs [for instance those seen in Tycho’s SNR, @div91]. We note that extensions of this work to cases with exponential ejecta profiles and/or SNRs evolving in a pre-supernova stellar wind with varying magnetic fields, cannot be done with self-similar solutions. These cases can be calculated in the numerical CR-modified hydrodynamical model described in @eld05 and this work is in progress [@elc05]. 
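For completeness, the line-of-sight integration behind Fig. \[fig-synchproj-etainj\] is a standard Abel-type integral over the spherically symmetric emissivity; the sketch below (with a toy emissivity standing in for the computed $\epsilon_{\nu}$, and a crude trapezoidal quadrature) illustrates how a sharp post-shock emissivity drop turns into a thin bright rim in projection:

```python
import numpy as np

def project(r, eps, b_values):
    """Projected brightness I(b) = 2 * int_b^R eps(r) r dr / sqrt(r^2 - b^2)."""
    profile = []
    for b in b_values:
        rr = r[r > b]
        integrand = eps[r > b] * rr / np.sqrt(rr**2 - b**2)
        # simple trapezoid rule; the integrable 1/sqrt singularity at r = b is handled crudely
        profile.append(2.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rr)))
    return np.array(profile)

# Toy X-ray-like emissivity: sharp exponential drop behind a forward shock at r = 1
r = np.linspace(0.90, 1.0, 4000)
eps_x = np.exp((r - 1.0) / 0.005)
b = np.linspace(0.90, 0.999, 60)
I_x = project(r, eps_x, b)      # peaks just inside b = 1: a thin projected filament
```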
Magnetic field evolution\[app-MFevol\] ====================================== The evolution of the normal (subscript $r$) and tangential (subscript $t$) components of the magnetic field at the downstream position, $B$, is given by [@rec81]: $$\begin{aligned} \label{Br(r)} B_{r}(r) & = & B_{r,j} \: \left( \frac{r}{r_j} \right)^{-2} \\ \label{Bt(r)} B_{t}(r) & = & B_{t,j} \: \frac{\rho}{\rho_j} \: \frac{r}{r_j},\end{aligned}$$ and the total magnetic field is simply [@re98]: $$B(r) = \left( B_{r}(r)^2 + B_t(r)^2 \right)^{1/2}.$$ In these equations, $r$ and $\rho$ are, respectively, the radius and density of a fluid element at the current time that was shocked at the previous time $t_j$. At time $t_j$, the fluid element was just behind the shock at the radius $r_{j}$, with a density $\rho_j$ and a magnetic field $B_{j}$. We assume that the upstream magnetic field at time $t_{j}$, $B_{0,j}$ is isotropic and fully turbulent so that the components of the immediate post-shock magnetic field $B_{j}$ in Eqs (\[Br(r)\]) and (\[Bt(r)\]) are given on average by [@bek02]: $$\begin{aligned} \label{Brj-Btj}\label{Brj} B_{r,j} &=& 1/ \sqrt{3} \: B_{0,j}\\ \label{Btj} B_{t,j} &=& \sqrt{2/3} \: r_{\mathrm{tot}} \: B_{0,j}.\end{aligned}$$ where $r_{\mathrm{tot}}$ is the shock compression ratio. In the self-similar approach, $r_{\mathrm{tot}}$ is assumed independent of time [see @dee00 for details]. We consider that the current magnetic field upstream of the forward shock, $B_{0,s}$, can behave like: $$\label{B-wind} B_{0,s} = B_{0,j} \left( \frac{r_{s}}{r_{j}} \right)^{-q}$$ where $r_{s}$ is the current shock radius. If the magnetic field is uniform, the index $q$ is equal to 0. In a stellar wind ($s=2$), the magnetic field profile may be decreasing yielding $q=1$ [@lyp04] or $q=2$ if we assume that it is frozen in the plasma. We define the magnetic field contrast factor, $\sigma_{B} \equiv B/B_{s}$, as the ratio between the current magnetic field in a fluid element, $B$, and the current one just behind the shock, $B_{s}$. We have: $$\label{sigmaB} \sigma_{B} = \left( \frac{ \sigma_{B_{r}}^{2} + 2 \: r_{\mathrm{tot}}^{2} \: \sigma_{B_{t}}^{2}} { 1 + 2 \: r_{\mathrm{tot}}^{2}} \right)^{1/2}$$ where $\sigma_{B_{r}} \equiv B_{r}/B_{r,s}$ and $\sigma_{B_{t}} \equiv B_{t}/B_{t,s}$ are the magnetic field contrast factors of the normal and tangential components of the field, respectively. The components $B_{r,s}$ and $B_{t,s}$ obey the same relation as in Eqs (\[Brj\]) and (\[Btj\]). Test-Particle limit\[Test-particle\] ------------------------------------ Assuming adiabaticity of the thermal gas, the magnetic field contrast factors of the normal and tangential components of the field are given by: $$\begin{aligned} \label{sigmaBr} \sigma_{B_{r}} &=& \left( \frac{R_s}{R} \right)^2 \: \left( \frac{v_{j}}{v_{s}} \right)^{\beta_{r}}\\\label{sigmaBt} \sigma_{B_{t}} &=& \left( \frac{P_{\mathrm{g},s}}{P_{\mathrm{g}}} \right)^{-3/5} \: \left( \frac{R_s}{R} \right)^{-(11-3s)/5} \:\left( \frac{v_{j}}{v_{s}} \right)^{\beta_{t}}\end{aligned}$$ where the indexes $\beta_{r}$ and $\beta_{t}$ are given by: $$\begin{aligned} \label{betar} \beta_{r} &=& \left( q-2 \right) \: \frac{n-3}{3-s} \\ \label{betat} \beta_{t} &=& \frac{5 n - 33 - 3 s (n-5)}{5(3-s)} + q \: \frac{n-3}{3-s}.\end{aligned}$$ In Eqs (\[sigmaBr\]) and (\[sigmaBt\]), $R_s/R$ and $P_{\mathrm{g},s}/P_{\mathrm{g}}$ are the ratio of the self-similar radii and thermal gas pressures, respectively, between the shock (subscript $s$) and a fluid element [see @ch82]. 
They depend on $n$ and $s$, but also weakly on $v_{j}/v_{s}$ where $v_{s}$ and $v_{j}$ are the current shock velocity and the shock velocity at the time $t_j$, respectively. In the framework of these self-similar solutions, the forward shock velocity tends to infinity at early times, corresponding to fluid elements close to the contact discontinuity at the current time. To limit the maximum velocity to a realistic value, we look at the value of $\sigma_{B}$ for a shock velocity ratio $v_{j}/v_{s} = 10$. For the typical forward shock velocity $v_{s}$ that we have used for the numerical application, the initial velocity corresponds to $v_{j} \simeq 5 \times 10^{4} \: \mathrm{km/s}$. This shock velocity is the criterion used to define the radial position of the oldest fluid element that is currently located close to the contact discontinuity. Here, we consider the case of both an uniform ambient medium ($s=0$) and upstream magnetic field ($q=0$). Under this assumption, $B_{s}=B_{j}$, since $r_{\mathrm{tot}}$ is constant with time. Then, the magnetic field contrast factor, $\sigma_{B}$ is equal to $B/B_{j}$ and can be viewed as a compression or a dilution factor. Table \[Tab-sigmaB\] (top) gives the contrast $\sigma_{B}$ for different values of $n$. -- ---- ------- ------- ------- ------ ------ ------ 7 1.181 4840 9 1.140 4850 12 1.121 4940 7 1.080 1.135 0.806 0.50 0.66 4370 9 1.060 1.045 0.754 2.6 3.4 4470 12 1.051 0.988 0.714 28 36 4610 -- ---- ------- ------- ------- ------ ------ ------ Nonlinear Particle Acceleration\[sect-Nonlinear-DSA\] ----------------------------------------------------- In the ideal non-linear case, where the acceleration is instantaneous and efficient, the thermal gas pressure falls to zero at the contact discontinuity while the relativistic gas pressure goes to infinity. Hence, the contrast factor of the tangential field component, $\sigma_{B_{t}}$, given by Eq. (\[sigmaBt\]), obtained in the test-particle limit, is not defined when $v_{j}/v_{s}$ tends to infinity. However, the contrast of the tangential component of the magnetic field can also be found by using the adiabaticity of the relativistic gas: $$\begin{aligned} \label{sigmaBt-cosmic} \sigma_{B_{t}} &=& \left( \frac{P_{\mathrm{c},s}}{P_{\mathrm{c}}} \right)^{-3/4} \: \left( \frac{R_s}{R} \right)^{-(10-3s)/4} \:\left( \frac{v_{j}}{v_{s}} \right)^{\beta_{t}'}\end{aligned}$$ where the index $\beta_{t}'$ is given by: $$\begin{aligned} \label{betat-cosmic} \beta_{t}' &=& \frac{4 n - 30 - 3 s (n-5)}{4(3-s)} + q \: \frac{n-3}{3-s}.\end{aligned}$$ In Eq. (\[sigmaBt-cosmic\]), $P_{\mathrm{c},s}/P_{\mathrm{c}}$ is the ratio of the self-similar relativistic gas pressures between the shock (subscript $s$) and a fluid element. This ratio depends on $n$, $s$, and $v_{j}/v_{s}$. The contrast of the normal field component, $\sigma_{B_{r}}$, is still given by Eq. (\[sigmaBr\]). The asymptotic behavior of the contrast factor, $\sigma_{B_{t}}$, can be derived from Eq. (\[sigmaBt-cosmic\]) because the relativistic gas pressure does not tend to zero at the contact discontinuity. Because the thermal gas pressure vanishes as we approach the contact discontinuity in the case of ideal particle acceleration, i.e., when the acceleration is instantaneous and efficient, the contrast of the tangential field component, $\sigma_{B_{t}}$, will always be smaller than in the test-particle case where the thermal gas pressure rapidly tends to a constant (see Eq. \[sigmaBt\]). 
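For reference, Eqs. (\[sigmaB\]), (\[betar\]), (\[betat\]) and (\[betat-cosmic\]) are straightforward to transcribe numerically; the sketch below evaluates the velocity-ratio exponents and the total contrast factor, while the self-similar ratios $R_s/R$, $P_{\mathrm{g},s}/P_{\mathrm{g}}$ and $P_{\mathrm{c},s}/P_{\mathrm{c}}$ must still be supplied by the self-similar solution and are left as inputs here:

```python
import numpy as np

def beta_indices(n, s=0.0, q=0.0):
    """Exponents of (v_j/v_s) in Eqs. (betar), (betat) and (betat-cosmic)."""
    beta_r = (q - 2.0) * (n - 3.0) / (3.0 - s)
    beta_t = (5.0 * n - 33.0 - 3.0 * s * (n - 5.0)) / (5.0 * (3.0 - s)) + q * (n - 3.0) / (3.0 - s)
    beta_t_cr = (4.0 * n - 30.0 - 3.0 * s * (n - 5.0)) / (4.0 * (3.0 - s)) + q * (n - 3.0) / (3.0 - s)
    return beta_r, beta_t, beta_t_cr

def sigma_B(sigma_Br, sigma_Bt, r_tot):
    """Total contrast, Eq. (sigmaB), for an isotropic and fully turbulent upstream field."""
    return np.sqrt((sigma_Br**2 + 2.0 * r_tot**2 * sigma_Bt**2) / (1.0 + 2.0 * r_tot**2))

print(beta_indices(n=9, s=0, q=0))   # -> (-4.0, 0.8, 0.5) for the uniform medium/field case
```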
Table \[Tab-sigmaB\] (bottom) gives the lower and upper limits on the magnetic field contrast factor, $\sigma_{B}$, in the case of ideal nonlinear particle acceleration for $\eta_{\mathrm{inj}}=10^{-3}$ and for different values of $n$ when both the ambient medium and upstream magnetic field are uniform ($s=0$ and $q=0$). The lower and upper limits on $\sigma_{B}$ are obtained by replacing in Eq. (\[sigmaBt-cosmic\]) the ratio of the self-similar relativistic gas pressures, $P_{\mathrm{c},s}/P_{\mathrm{c}}$, by the ratio of the self-similar total gas pressures, $P_{s}/P \equiv (P_{\mathrm{c},s}+P_{\mathrm{g},s})/(P_{\mathrm{c}}+P_{\mathrm{g}})$, and by the ratio between the self-similar relativistic gas pressure at the shock and the self-similar total gas pressure, $P_{c,s}/P$, respectively. However, for an injection efficiency lower than $\sim 5 \times 10^{-4}$, the acceleration is not efficient enough for the shock to be modified at the beginning of the evolution. In that case, the fluid elements that have been shocked at the earliest times are still dominated by the thermal gas so that test-particle solutions could still apply locally. [^1]: Magnetic field values are found to be at least 30 times higher than the typical Galactic field of $3 \: {\mu}$G and imply that the field has been amplified, perhaps by the particle acceleration process [@bel01]. [^2]: We do not consider CR production at the reverse shock since the magnetic field at the reverse shock may be considerably smaller than that at the forward shock due to the dilution by expansion and flux freezing of the progenitor magnetic field [see @eld05]. [^3]: We did not calculate the synchrotron emission from the precursor.
--- abstract: 'To monitor health information using wireless sensors on body is a promising new application. Human body acts as a transmission channel in wearable wireless devices, so electromagnetic propagation modeling is well thought-out for transmission channel in Wireless Body Area Sensor Network (WBASN). In this paper we have presented the wave propagation in WBASN which is modeled as point source (Antenna), close to the arm of the human body. Four possible cases are presented, where transmitter and receiver are inside or outside of the body. Dyadic Green’s function is specifically used to propose a channel model for arm motion of human body model. This function is expanded in terms of vector wave function and scattering superposition principle. This paper describes the analytical derivation of the spherical electric field distribution model and the simulation of those derivations.' author: - | Q. Ain, A. Ikram, N. Javaid, U. Qasim$^{\ddag}$, Z. A. Khan$^{\S}$\ $^{\ddag}$University of Alberta, Alberta, Canada\ Department of Electrical Engineering, COMSATS\ Institute of Information Technology, Islamabad, Pakistan.\ $^{\S}$Faculty of Engineering, Dalhousie University, Halifax, Canada. title: 'Modeling Propagation Characteristics for Arm-Motion in Wireless Body Area Sensor Networks' --- Wireless Body Area Networks, Dyadic Green’s Function Introduction ============ Hospitals throughout the world are facing a unique problem, as the aged population is increased, health-care population is decreased. Telecommunication community is not doing much work in the field of medicine however, there is a need of remote patient monitoring technology. To fulfill this task, it is required to build communication network between an external interface and portable sensor devices worn on and implemented within the body of the user which can be done by BASNs. BASNs is not only useful for remote patient monitoring, but can also establishes within the hospitals; like in operation theaters and intensive care units. It would enhance patient comfort as well as provide ease to doctors and nurses to perform their work efficiently. BAN is used for connecting body to wireless devices and finds applications in various areas such as entertainment, defense forces and sports. The basic step in building any wireless device is to study the transmission channel and to model it accurately. Channel modeling is a technique that has been initiated by a group of researchers throughout the world \[1\]. They have studied path loss and performed measurement campaigns for wireless node on the body \[2-8\]. Some researchers have taken into account, the implanted devices which are the area of BAN called as intra-body communication \[9\]. For the short range low data rate communication in BAN, measurement groups have considered Ultra-Wide Band (UWB) as the appropriate air interface. The models developed by measurement campaigns are only path loss models and do not provide any description of propagation channel. It is important to study the propagation mechanism of radio waves on and inside the body in order to develop an accurate BAN channel model. This study will show the underlying propagation characteristics. It would help in the development of BAN transceivers which are much suited to the body environment. For a given position of the transmitter on or inside the body it is required to find out the electromagnetic field on or inside the body for a BAN channel model. 
This is quite a critical problem that requires a large amount of computational power. Therefore, it is necessary to derive an analytical expression which will perform this objective. In short this determines which propagation mechanism takes place, that is reflection, diffraction and transmission \[10\]. An appropriate method of doing this task is by using Dyadic Green’s function. The solution of canonical problems, such as cylinder, multi layer and sphere have been solved in Electro Magnetic (EM) theory, using Dyadic Green’s Functions \[11-13\]. Motivation ========== Recently, WBASNs shows potential due to increasing application in medical health care. In WBASNs, each sensor in the body sends it’s data to antenna,both sensors and antenna are worn directly on the body. Examples include sensors which can measure Brain activity, blood pressure, body movement and automatic emergency calls. We require simple and generic body area propagation models to develop efficient and low power radio systems near the human body. To achieve better performance and reliability, wave propagation needs to be modeled correctly. Few studies have focused on analytic model of propagation around a cylinder (as human body resembles a cylinder) using different functions. These functions involve Mathieu function, Dyadic Green’s function, Maxwell’s equations, Finite Difference Time Domain (FDTD) and Uniform Theory of Diffraction (UTD). Some of these approaches have already proven effective for evaluating body area communication system proposals. Finite Difference Time Domain had successfully measured the communication scenarios. Complete Ultra-Wide band models have been developed using measurements and simulations, however they do not consider the physical propagation mechanism. So, the researchers have to rely on ad-hoc modeling approaches which can result in less accurate propagation trends and inappropriate modeling choices \[14, 15\]. Uniform Theory of Diffraction depends on a ray tracing mechanism allowing propagation channel to be explained in terms of ray diffraction around the body . It typically based on high-frequency approximations which is not valid for low frequencies, also not useful when antenna is very close to the body \[16\]. A generic approach is proposed to understand the body area propagation by considering the body as a lossy cylinder and antenna as a point source by using Maxwell’s equation. A solution for a line source near lossy cylinder is derived using addition theorem of Hankel functions then the line source is converted into the point source by taking inverse Fourier transform. The model accurately predicts the path loss model and can be extended to all frequencies and polarities but this is limited in scope and not always physically motivated \[17\]. Mathieu functions are also used for body area propagation model. The human body is treated as a lossy dielectric elliptic cylinder with infinite length and a small antenna is treated as three-dimensional (3-D) polarized point source. First the three-dimensional problem of cylinder is resolved into 2-D problem by using Fourier transform and then this can be expanded in terms of Eigen functions in cylindrical coordinates. By using Mathieu function exact expression of electric field distribution near the human body is deduced \[18\]. The propagation characteristics of cylindrical shaped human body have been derived using Dyadic Green’s functions. 
The model includes the cases of transmitter and receiver presents either inside or outside of the body and also provides simulation plots of Electric field with different values of angle $(\theta)$. All the above proposals describe the propagation characteristics of cylindrically shaped human model \[19\]. We have developed a simple but generic approach to body area propagation derived from Dyadic Green’s Function (DGF). This approach is for arm motion of human body. When the human arm is moved in $ r,\theta,\phi$ direction, propagation characteristics of spherical shaped have been derived using DGF. First, we use spherical vector Eigen functions for finding the scattering superposition. Four cases are considered for either transmitter or receiver is located inside or outside the body. Finally, simulated results of electric field distribution with different values of angle have shown. Mathematical Modeling for Arm Motion using Dyadic Green’s Function ================================================================== In this paper, spherical symmetry is used to represent in and around the arm of the human body. A point on body is a sensor, denoted by x which represents ($r$,$\Theta$,$\phi$) coordinates in the spherical coordinate system and $x_0$ is the location of transmitting antenna. ($r$,$\Theta$,$\phi$) are unit vectors along radial, angle of elevation from z-axis and azimuthal angle from x-axis as shown in figure 1. ![ **Human body model showing arm motion in 3D.**](arm-model.eps){height="7cm" width="8cm"} Electric Field Propagation Characteristics ------------------------------------------ Let $E(x)$ be electric field at point $x$ due to current source $J(x_0)$. The general formula for Electric field can be written as: $$\begin{aligned} E(x)=i \omega \mu_{p} \int \int \int_{V} G(x,x_{o})J(x,x_{0})dv\end{aligned}$$ $V$ is volume of source, $J(X_0)$ is the current source, $G(x,x_0)$ is the Dyadic Green’s function $'\omega'$ is the radian frequency of transmission and $'\mu_{p}'$ is magmatic permeability of the medium. A Dyadic Green’s function is a type of function used to solve inhomogeneous differential equations subject to specific initial conditions or boundary condition. Spherical Wave Vector Eigen Function ------------------------------------ As we are considering arm motion of human body, so spherical symmetry is used by taking shoulder as center. For this, spherical eigen functions are used to write the Dyadic Green’s function. Dyadic Green’s function is basically depends on the spherical vector eigen functions \[14\]. These eigen functions are $L_{nhk}(\chi)$, $M_{nhk}(\chi)$ and $N_{nhk}(\chi)$, where $k$ is the wave number of medium, $n$ is an integer, $h$ is a real number and $x$ is a point in space. These all are the solutions to the Helmholtz equation having three components in $r$, $\Theta$ and $\phi$. These vector eigen functions are given by \[19\]: $$\begin{aligned} L_{nhk} (\chi)=\nabla[\Psi_{nhk}(\chi)]\end{aligned}$$ $$\begin{aligned} M_{nhk}(\chi)=\nabla\times[\Psi_{nhk}(\chi)]\end{aligned}$$ $$\begin{aligned} % \nonumber to remove numbering (before each equation) N_{nhk}(\chi)=\frac{1}{k}\nabla\times\nabla[\Psi_{nhk}(\chi)]\end{aligned}$$ In above eigen functions, Laplacian operator in the spherical coordinate system is $\nabla$. 
It’s mathematical expression is given as: $$\begin{aligned} \nabla=\frac{\partial}{\partial r} + \frac{\partial}{r \partial \theta} + \frac{\partial}{r \sin \theta \partial \phi}\end{aligned}$$ $x$ represents the point in space having components $r$, $\Theta$ and $\phi$. Solution of Helmoltz equation is $\Psi_{nhk}(x)$ which is the scalar eigen function \[19\]. $$\begin{aligned} % \nonumber to remove numbering (before each equation) [\Psi_{nhk}(\chi)]= Z_{n}(\eta r) P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi\end{aligned}$$ $Z_{n}$ is a general spherical function of order $n$. For sphere we use Hankle function of first and second order which are defined as: $$\begin{aligned} % \nonumber to remove numbering (before each equation) [Z_{n}(\eta r)]= (-1)^{n}(\eta r)(\frac{d}{dr\eta^{2} r})^n(\frac{\sin(\eta r)}{\eta r})^{n}\end{aligned}$$ $\eta$ is the propagation constant in direction of $\phi$, whereas $k^2=\eta^2 + h^2$. The laplace operator is applied and find the eigen values $L_{nhk}$, $M_{nhk}$ and $N_{nhk}$ by using eigen function. The vector eigen function in (2), (3) and (4) becomes: $$\begin{aligned} \begin{split} % \nonumber to remove numbering (before each equation) L_{nhk}(\chi)=\frac{\partial Z_{n}(\eta r)}{\partial r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi+ \frac{z_{n}(\eta r)}{r}\\\frac{\partial}{\partial\theta}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi + \frac{h Z_{n}(\eta r)}{r\sin\theta}P^{h}_{n}(\cos\theta)_{\cos}^{\sin} h \phi \end{split}\end{aligned}$$ $$\begin{aligned} \begin{split} % \nonumber to remove numbering (before each equation) M_{nhk}(\chi)=\mp\frac{h Z_{n}(\eta r)}{\sin\theta}P^{h}_{n}(\cos\theta)_{\cos}^{\sin} h \phi- Z_{n}(\eta r) \\\frac{\partial}{\partial\theta}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi \end{split}\end{aligned}$$ $$\begin{aligned} \begin{split} % \nonumber to remove numbering (before each equation) N_{nhk}(\chi)=\frac{n Z_{n}(\eta r)}{k r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi + \frac{1}{k r} \\\frac{\partial r Z_{n}(\eta r)}{\partial r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi \mp \frac{h}{\sin\theta}P^{h}_{n}(\cos\theta)_{\cos}^{\sin} h \phi \end{split}\end{aligned}$$ These three vector eigen function are perpendicular among themselves as well as with respect to each other \[11\]. In the form of matrices, vector Eigen functions can be written in this form, $$\begin{aligned} L_{nhk}(\chi) = \begin{pmatrix} \frac{\partial Z_{n})(\eta r)}{\partial r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi \\ \frac{Z_{n}(\eta r)}{r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi \\ \frac{h Z_{n}(\eta r)}{\sin\theta}P^{h}_{n}(\cos\theta)_{\cos}^{\sin} h \phi \\ \end{pmatrix}\end{aligned}$$ $$\begin{aligned} M_{nhk}(\chi) = \begin{pmatrix} 0 \\ \mp\frac{h Z_{n}(\eta r)}{r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi \\ - Z_{n}(\eta r) \frac{\partial P^{h}_{n}(\cos\theta)_{\cos}^{\sin} h \phi} {\partial\theta} \\ \end{pmatrix}\end{aligned}$$ $$\begin{aligned} N_{nhk}(\chi) = \begin{pmatrix} \frac{h Z_{n}(\eta r)}{k r}P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi \\ \frac{\partial_{n}(\eta r)}{kr \partial r}\frac{P^{h}_{n}(\cos\theta)_{\sin}^{\cos} h \phi} {\partial\theta} \\ \mp\frac{h}{\sin\theta}P^{h}_{n}(\cos\theta)_{\cos}^{\sin} h \phi \\ \end{pmatrix}\end{aligned}$$ Scattering Superposition ------------------------ In scattering problems, it is desirable to determine an unknown scattered field that is due to a known incident field. 
Using the principle of scattering superposition we can write Dyadic Green’s equation as superposition of direct wave and scattering wave. In Figure 2, concept of scattering superposition is shown in which there is a sensor located inside the arm of body considered as sphere. The sensor transmits the wave to antenna which is divided in two parts as Direct wave and Scattered wave. The Direct wave is considered as wave directly transmits from sensor to transmitter and scattered wave is composed of reflection and transmission waves. Therefore, general equation of scattering superposition is illustrated as: $$\begin{aligned} % \nonumber to remove numbering (before each equation) G(x,x_0)= G_{d}(x,x_0) + G_{s}(x,x_0)\end{aligned}$$ ![image](spheric.eps){height="9cm" width="18cm"} Dyadic Green’s equation is divided in to two parts as direct wave $[G_d(x,x_0)]$ and scattered wave $[G_s(x,x_0)]$. The direct wave corresponds to direct from source to measuring point and scattered is the reflection and transmission waves due to presence of dielectric interface. Superposition of Direct Wave ---------------------------- The direct component of DGF is given as \[11\]: $$\begin{aligned} \begin{split} % \nonumber to remove numbering (before each equation) G_{d}(x,x_0)= \frac{rr}{k^2}(\delta(x-x_0)+\frac{\jmath}{8\pi}\int_{-\infty}^{\infty} dh\sum_{n=-\infty}^{\infty}\frac{1}{n^2} x \times\\ \begin{cases} M^{(1)}_{nhk}(X)\bigotimes M_{nhk}^\ast(X_0)+ N^{(1)}_{nhk}(X)\bigotimes N_{nhk}^\ast(X_0)\\ M_{nhk}(X)\bigotimes M^{(1\ast)}_{nhk}(X_0)+ N_{nhk}(X)\bigotimes N^{(1\ast)}_{nhk}(X_0)\\ \end{cases} \end {split}\end{aligned}$$ In the above equation of DGF, $r>r_0$ is for first case and $r<r_0$ is second case.The $\ast$ denotes the conjugation and $\bigotimes$ is for the Dyadic product. Here we introduces superscript (1) for outgoings wave and other for standing waves. If the vector eigen function has the superscript (1) then, $H^{(1)}_{n}$ is chosen for $Z_{n}$ and $J_{n}$ should be used otherwise. Superposition of Scattered Wave ------------------------------- Here we discuss four different scenarios for the scattering components of DGF along with boundary conditions $G_s {(x,x_0)}$. (i) Both receiver and transmitter are inside the body. (ii) The receiver is located outside and transmitter is located inside the body. (iii) The receiver is located inside and transmitter is outside the body. (iv) Both transmitter and receiver are located outside the body. Receiver and transmitter are in the order: $1$ denotes the medium inside human body and $2$ is for free space medium. Transmitter and Receiver Located Inside Body -------------------------------------------- In this case, Receiver and Transmitter both located inside the body so we can write Dyadic Green’s equation as, $$\begin{aligned} \begin{split} % \nonumber to remove numbering (before each equation) G^{(11)}_{s}(x,x_0)=\frac{\jmath}{8\pi}\int_{-\infty}^{\infty} dh\sum_{-\infty}^{\infty}\frac{1}{\eta^2} x\\ \times [M_{nhk1} N_{nhk1}] R_12 \times \begin{cases} N_{nhk1}(X_0)^T\\ M_{nhk1}(X_0)^T \end{cases} \end{split}\end{aligned}$$ $R_{12}$ contains reflection coefficients. 
$R_{12}$ is calculated in the literature using boundary conditions; its matrix is given by \[16\]:
$$\begin{aligned}
\begin{split}
R_{12}= [J_n(\eta_1d)H_n(\eta_2d)-H_n(\eta_2d)J_n(\eta_1d)]^{-1}\\
\times[H_n(\eta_2d)H_n(\eta_1d)-H_n(\eta_1d)J_n(\eta_2d)]^{-1}
\end{split}\end{aligned}$$
In the above equation for the reflection coefficient, $d$ represents the radius of the spherical body model, and $\eta^{2}_{1}=k^{2}_{1}-h^2,\eta^{2}_{2}=k^{2}_{2}-h^2,k_{1}^{2}=\omega^{2}\mu_{1}\epsilon_{1},k_{2}^{2}=\omega^{2}\mu_{2}\epsilon_{2}$. The $2\times2$ matrices for $J_{n}(\eta d)$ and $H_{n}(\eta d)$ are expressed as:
$$\begin{aligned}
\begin{split}
B_{n}(\eta_{p} d) = \frac{1}{\eta_{p}^{2} d}\times
\begin{pmatrix}
\jmath\omega\epsilon_{p}\eta_{p}d B_{n}(\eta_{p} d)& -nh B_{n}(\eta_{p} d) \\
-nh B_{n}(\eta_{p} d) & - \jmath\omega\mu_{p}\eta_{p}d B_{n}(\eta_{p} d) \\
\end{pmatrix}
\end{split}\end{aligned}$$
Here $B_{n}$ is either $H_{n}^{(1)}$ or $J_{n}$, $B_{n}'(\cdot)$ is the derivative of $B_{n}$ with respect to its whole argument, and $p=1,2$.

Transmitter Located Inside and Receiver Located Outside Body
------------------------------------------------------------

In this case, the DGF can be written as:
$$\begin{aligned}
\begin{split}
G_{s}^{(21)}(x,x_0)=\frac{\jmath}{8\pi}\int_{-\infty}^{\infty} dh\sum_{n=-\infty}^{\infty}\frac{1}{\eta^2}\\
\times [N_{nhk}\; M_{nhk}]\,T_{12}
\begin{pmatrix}
N^{\ast}_{nhk1}(x_0)^{T}\\
M^{\ast}_{nhk1}(x_0)^{T} \\
\end{pmatrix}
\end{split}\end{aligned}$$
In the above equation, $T_{12}$ is the transmission coefficient matrix, given as:
$$\begin{aligned}
\begin{split}
T_{12}=\frac{2\omega}{\pi\eta_{1}^{2} d} [J_n(\eta_1 d )H_n(\eta_2 d)-H_n(\eta_2 d)J_n(\eta_1 d)]^{-1}\\
\times
\begin{pmatrix}
\varepsilon_{1}& 0\\
0&-\mu_{1} \\
\end{pmatrix}
\end{split}\end{aligned}$$

Both Transmitter and Receiver Located Outside Body
--------------------------------------------------

$$\begin{aligned}
\begin{split}
G_{s}(x,x_0)=\frac{\jmath}{8\pi}\int_{-\infty}^{\infty} dh\sum_{n=-\infty}^{\infty}\frac{1}{n^2}\\
\times [M_{nhk}\; N_{nhk}]\, R_{21}
\begin{cases}
N_{nhk}(X_0)^T\\
M_{nhk}(X_0)^T
\end{cases}
\end{split}\end{aligned}$$
Similarly to $R_{12}$, $R_{21}$ is the reflection coefficient matrix and is given as:
$$\begin{aligned}
\begin{split}
R_{21}= [J_n(\eta_1d)H_n(\eta_2d)-H_n(\eta_2d)J_n(\eta_1d)]^{-1}\\
\times[J_n(\eta_2d)J_n(\eta_1d)-J_n(\eta_1d)J_n(\eta_2d)]
\end{split}\end{aligned}$$

Transmitter Located Outside and Receiver Inside Body
----------------------------------------------------

In this case, we can write the DGF as:
$$\begin{aligned}
\begin{split}
G_{s}(x,x_0)=\frac{\jmath}{8\pi}\int_{-\infty}^{\infty} dh\sum_{n=-\infty}^{\infty}\frac{1}{n^2}\\
\times [M_{nhk}\; N_{nhk}]\, T_{21}
\begin{pmatrix}
N^{\ast}_{nhk1}(x_0)^{T}\\
M^{\ast}_{nhk1}(x_0)^{T} \\
\end{pmatrix}
\end{split}\end{aligned}$$
$T_{21}$ is the transmission coefficient matrix, given as:
$$\begin{aligned}
\begin{split}
T_{21}= \frac{2\omega}{\pi \eta d}[J_n(\eta_1 d)H_n(\eta_2 d)-H_n(\eta_2 d)J_n(\eta_{1} d)]^{-1}\\
\times
\begin{pmatrix}
\varepsilon_{2}& 0\\
0&-\mu_{2}\\
\end{pmatrix}
\end{split}\end{aligned}$$

Transmitter and Receiver Located Outside of the Body
====================================================

In this section we present the equation required for the simulation. The simulation makes it straightforward to study the propagation characteristics of the arm motion, which traces out a spherical pattern.
$$\begin{aligned}
\begin{split}
G_{s}(x,x_0)=\frac{\jmath}{8\pi}\int_{-\infty}^{\infty} dh
\sum_{n=-\infty}^{\infty}\frac{1}{n^2}\, G_{nh}(x,x_0)
\end{split}\end{aligned}$$
$G_{nh}(x,x_0)$ is stated as:
$$\begin{aligned}
\begin{split}
G_{nh}(x,x_0)=
\begin{pmatrix}
N^{(1)}_{nhk}(X) & M^{(1)}_{nhk}(X)
\end{pmatrix}
\times R_{21}
\begin{pmatrix}
N_{nhk}(X_0)^T\\
M_{nhk}(X_0)^T
\end{pmatrix}
\end{split}\end{aligned}$$

Simulations
===========

As we have defined earlier, the arm motion at different angles traces out a spherical pattern. Therefore, we simulate the radio propagation environment with radius $d=15\:\mathrm{cm}$, magnetic permeability for the human body (assuming that the permeability of the human body is approximately equal to that of air) $\mu_{2}=1.256 \times 10^{-6}$, and similarly electric permittivity $\varepsilon_{2}=2.563\times10^{-10}$. The dielectric constant is the mean value over all tissues of the human body. We take the surrounding homogeneous medium to be air, with magnetic permeability $\mu_{1}=1.256 \times 10^{-6}$ and electric permittivity $\varepsilon_{1}=8.8542\times10^{-12}$. Frequencies up to a few GHz, in the ISM band, are used for BAN communication. The transmission frequency for the simulation is 1 GHz. We assume that the transmitter acts as a point source at $x_{0}=(16\:\mathrm{cm},\frac{\pi}{2},0)$. The radial distance of the receiver is $r_{0}=18\:\mathrm{cm}$ from the central spherical axis of the shoulder. For the simulation, we assume that the receiver moves along the azimuthal angle for varying values of $\phi_{0}$ and different heights from the center of the shoulder. We consider equation (25), in which $G_{nh}(x,x_0)$ is used in the matrix form of the eigen functions. This equation contains an integral that cannot be evaluated in closed form, so we approximate it by a summation. Thus, we approximate equation (25) in the following form:
$$\begin{aligned}
\begin{split}
G_{s}(x,x_0)=\frac{\jmath}{8\pi}\sum_{l=-L}^{L}
\sum_{n=-Q}^{Q}\frac{1}{n^2}\, G_{nh}(x,x_0)\,\Delta h
\end{split}\end{aligned}$$
$L$ and $Q$ are the truncation limits and $\Delta h$ is the step size of the discretized integration. The neglected terms beyond the truncation limits and the discretization error are small enough to be ignored and have no effect on the calculations. We only present the electric field due to the multi-path reflection and transmission waves of the scattering DGF. This is more significant for representing the arm motion than the direct DGF. Figures 2, 3 and 4 show the simulated scattering DGF of the electric field for different values of $\theta$.

![Magnitude of the scattered field component $E_{\phi}$ versus angle $\phi$, for different values of $d$ and $\theta=\frac{\pi}{6}$.](untitler.eps){height="9cm" width="9cm"}

Using equation (27), we have three components, in the $r$, $\theta$ and $\phi$ directions. Each component of the electric field is plotted as a function of the azimuthal angle $\phi$. The value of $\phi$ ranges from $0$ to $2\pi$, and different receiver positions along the $z$ coordinate are plotted. The total electric field, the vector sum of the three components, is also plotted. All of these parameters are shown in the simulation graphs. Taking $\theta=\frac{\pi}{6}$, figure $2$ shows that the magnitude of the electric field $(E_{\phi})$ decreases as the distance of the receiving antenna from the sensor (transmitting antenna) increases. The plot shows the electric field component at different values of $\phi$, varying from $0$ to $2\pi$.
In this case, $E_{\phi}$ is decreasing from ($4080$ to $4065$)dB by replacing the receiving antenna from $0$ cm to $10$ cm. ![Magnitude of scattered field component $E_{\phi}$ versus angle $\phi$,with different values of $d$ and the angle is $\theta=\frac{\pi}{3}$ ](theta2.eps){height="9cm" width="9cm"} In Figure $3$, when we take value of $\theta=\frac{\pi}{3}$, magnitude of electric field $(E_{\phi})$ again decreases as the antenna moves away from sensor. For the values of $\phi$ from $0$ to $2 \pi$, $E_{\phi}$ has different values from ($4060$ from $4068$)dB. By changing position of receiving antenna from $0$ cm to $10$ cm. ![Magnitude of scattered field component $E_{\phi}$ versus angle $\phi$,with different values of $d$ and the angle is $\theta=\pi $ ](theta3.eps){height="9cm" width="9cm"} The values of distance and $\phi$ are same, as described in the above graphs by only replacing the parameter $\theta=\pi$. Similarly in figure $4$ values of $E_{\phi}$ change from ($4082$ to $4074$)dB by moving the position of receiver away from transmitting antenna, which in return decreases the electric field intensity. Conclusion ========== We have proposed a generic approach to derive an analytical channel modeling and propagation characteristics of arm motion as spherical model. To predict the electric field around body, we have formulated a two step procedure based on Dyadic Green’s function. First, we derive Eigen functions of spherical model then calculated the scattering superposition to come across reflection and transmission waves of antenna. The model includes four cases where transmitter or receiver is located inside or outside of the body. This model is presented to understand complex problem of wave propagation in and around arm of human body. Simulation shows that Electric field decreases when receiver moves away from the shoulder with change of angle $\theta$. [1]{} T. Zasowski, F. Althaus, M. Stager, A. Wittneben, and G. Troster, “Uwb for noninvasive wireless body area networks: Channel measurements and results,” Proc. IEEE Conf. on Ultra Wideband Systems and Technologies, pp. 285-289, Nov 2003. A. Fort, J. Ryckaert, C. Desset, P.D. Doncker, P. Wambacq, and L.V. Biesen, “Ultra-wideband channel model for communication around the human body,” IEEE Journal on Selected Areas in Communications, vol. 24, no. 4, pp. 927-933, April 2006. H. Ghannoum, C. Roblin, and X. Begaud,“Inves- tigationoftheuwbon-bodypropagationchannel,” http://uei.ensta.fr/roblin/papers/WPMC2006HGBANmodel.pdf, 2005. D. Nierynck, C. Williams, and A. Nix, M. Beach, “Channelcharacterisationforpersonalareanetworks,” http://rose.bris.ac.uk/dspace/bitstream/1983/893/1/TD-05-115.pdf, Nov. 2007. A. Alomainy, Y. Hao, X. Hu, C.G. Parini, and P.S. Hall, “Uwb on- body radio propagation and system modelling for wireless body-centric networks,” IEE Proc. Commun., vol. 153, no. 1, pp. 107-114, 2006. Y. Zhao, Y. Hao, A. Alomainy, and C. Parini, “Uwb on-body radio channel modelling using ray theory and sub-band fdtd method,” IEEE Trans. On Microwave Theory and Techniques, Special Issue on Ultra- Wideband, vol. 54, no. 4, pp. 1827-1835, 2006. J. Ryckaert, P.D. Doncker, R. Meys, A.D.L. Hoye, and S. Donnay, “Channel model for wireless communication around human body,” Electronic Letters, vol. 40, no. 9, 2004. I.Z. Kovacs, G.F. Pedersen, P.C.F. Eggers, and K. Olesen, “Ultra wideband radio propagation in body area network scenarios,” IEEE 8th Intl. symp. on Spread Spectrum Techniques and Applications, pp. 102-106, 2004. J.A. Ruiz, J. 
Xu, and S. Shiamamoto, “Propagation characteristics of intra-body communications for body area networks,” 3rd IEEE Conf. on Consumer Communications and Networking, vol. 1, pp. 509-503, 2006. T. Zasowski, G. Meyer, F. Althaus, and A. Wittneben, “Propagation effects in uwb body area networks,” IEEE Intrenational Conference on 7UWB, pp. 16-21, 2005. Z. Xiang and Y. Lu, “Electromagnetic dyadic green’s function in cylindrically multilayered media,” IEEE Trans. on Microwave Theory and Techniques, vol. 44, no. 4, pp. 614-621, 1996. P.G. Cottis, G.E. Chatzarakis, and N.K. Uzunoglu, “Electromagnetic energy deposition inside a three-layer cylindrical human body model caused by near-?eld radiators,” IEEE Trans. on Microwave Theory and Techniques, vol. 38, no. 8, pp. 415-436, 1990. S.M.S Reyhani and R.J. Glover, “Electromagnetic modeling of spherical head using dyadic green’s function,” IEE Journal, , no. 1999/043, pp. 8/1-8/5, 1999. T.Zasowski, F. Althaus, M. Stager, A. Wittneben and G. Troster, “UWB for noninvasive wireless body area networks: channel measurement and results.”in 2003 IEEE conference on Ultra-Wide band system and technologies,2003.pp.285-289. A. Alomainy, Y. Hao, X.Hu,C.G. Parini and P.S. Hall, “UWB on-body radio propagation and system modeling for body centric networks,” in IEEE communication proceeding, vol. 153, no. 1, February 2006, pp. 107-114. D. A. Macnamara, C, Pistorius and J. Malherbe, In troduction to the uniform geometrical theory of diffraction. Artech House:Boston, 1991. C.T. Tai, Dyadic Green’s Functions in Electromagnetic Theory, IEEE, New York, 1993. Le-Wei Li, Senior Member, IEEE, Mook-Seng Leong, Senior Member, IEEE, Pang-Shyan Kooi, Member, IEEE,and Tat-Soon Yeo, Senior Member, IEEE Astha Gupta, Thushara D. Abhayapala, “ Body Area Networks: Radio Channel Modelling and Propagation Charaterstics”.
--- abstract: 'We prove that in sparse graphs of average degree $d$, the vector chromatic number (the relaxation of chromatic number coming from the [Lovàsz ]{}theta function) is typically $\tfrac{1}{2}\sqrt{d} + o_d(1)$. This fits with a long-standing conjecture that various refutation and hypothesis-testing problems concerning $k$-colorings of sparse graphs become computationally intractable below the ‘Kesten-Stigum threshold’ $d_{{\textsc{ks}},k} = (k-1)^2$. Along the way, we use the celebrated Ihara-Bass identity and a carefully constructed non-backtracking random walk to prove two deterministic results of independent interest: a lower bound on the vector chromatic number (and thus the chromatic number) using the spectrum of the non-backtracking walk matrix, and an upper bound dependent only on the girth and universal cover. Our upper bound may be equivalently viewed as a generalization of the Alon-Boppana theorem to irregular graphs.' author: - | Jess Banks [^1]\ Dept. of Mathematics\ University of California-Berkeley - | Luca Trevisan\ Dept. of Computer Science\ University of California-Berkeley bibliography: - 'ER-theta-function.bib' title: 'Vector Colorings of Random, Ramanujan, and Large-Girth Irregular Graphs' --- Introduction ============ Random graph coloring is one of the central and most studied problems in average case complexity, with over three decades of research interleaving the techniques and sensibilities of theoretical computer science, statistical physics, and combinatorics. Many of the most striking phenomena occur in the case of sparse random graphs, and we will focus here on the model ${{\mathcal{G}}({n}, {d/n})}$, where $d$ fixed and constant and each edge is included independently and with probability $d/n$. The full phenomenology of this model is far beyond the scope of this paper to survey (we refer the reader to, for instance, [@zdeborova2007phase] for a more complete account), but its key aspect is a series of *phase transitions* in the limit $n\to \infty$: for fixed $k$, there are critical thresholds in $d$ at which certain combinatorial and algorithmic attributes of the coloring problem change abruptly. The most famous of these is the *colorability transition*, the threshold $d_{{\textsc{col}},k}$ below which graphs from ${{\mathcal{G}}({n}, {d/n})}$ are with high probability $k$-colorable (that is, with probability $1 - o_n(1)$ as $n\to \infty$), and above which they are not. Sophisticated refinements of the first and second moment methods [@achlioptas-naor; @coja-oghlan-vilenchik; @coja2013upper] have shown that $$2k\log k - \log k - 1 + o_k(1) \triangleq d_{{\textsc{first}},k} \ge d_{{\textsc{col}},k} \ge d_{{\textsc{second}},k} \triangleq 2k\log k - \log k - 2\log 2 - o_k(1).$$ These results pin down to within a small additive gap the threshold at which an exponential-time exhaustive search algorithm can find a coloring. What if, on the other hand, we care only about efficient algorithms, say those running in polynomial time? 
There are a number of algorithmic tasks that one can consider—distinguishing whether a graph was drawn from ${{\mathcal{G}}({n}, {d/n})}$ or from a model with a ‘planted’ $k$-coloring, finding exact or approximate colorings in graphs drawn from the latter, etc.—but all of them seem to become efficiently soluble only when $$d > d_{{\textsc{ks}},k} \triangleq (k-1)^2;$$ see [@massoulie2014; @bordenave-lelarge-massoulie; @abbe-sandon-more-groups; @mns-colt; @non-backtracking] for some examples, many of which are phrased in the related and more general case of *community detection* which we do not treat here. It is conjectured that this point, known as the Kesten-Stigum threshold, is a universal barrier at which polynomial-time algorithms break down. The purpose of this paper is to add modest evidence to this conjecture, by studying a classic semidefinite programming algorithm for the problem of *refutation*: given a graph $G \sim {\mathcal{G}}(n,d/n)$, we are to efficiently produce a certificate that $G$ is not $k$-colorable or declare failure. As one cannot hope to refute $k$-colorability of $G$ when $d < d_{{\textsc{second}},k}$, the Kesten-Stigum threshold conjecture in our case asserts that when $d_{{\textsc{first}},k} < d < d_{{\textsc{ks}},k}$, refutation is possible but inaccessible to polynomial time algorithms, whereas it is efficiently soluble when $d_{{\textsc{ks}},k} < d$.

To introduce our refutation algorithm, let us define a *$k$-vector coloring* of an undirected graph $G=(V,E)$ as an assignment of a unit vector $v_i$ to each vertex $i\in V$, such that ${\left\langle v_i , v_j \right\rangle} \leq -(k-1)^{-1}$ for every edge $(i,j) \in E$. This notion was introduced by Karger, Motwani, and Sudan in [@karger1998approximate], and equivalent quantities date back to seminal works of [Lovàsz ]{}and Schrijver [@lovasz1979shannon; @schrijver1979comparison]. The vector chromatic number of $G$, which we will denote ${\chi_v}(G)$, is the smallest $k$ (integer or otherwise) such that a $k$-vector coloring exists. If $G$ is $k$-colorable, then it is also $k$-vector-colorable (for instance by associating to each color one of the unit vectors pointing to the corners of a simplex in ${\mathbb{R}}^{k-1}$), so the vector chromatic number is a relaxation of the chromatic number. More importantly, it is a polynomial-time computable relaxation since it can be formulated as the following semidefinite program: $$\begin{aligned} \label{chromatic-sdp} {\chi_v}(G) = \min_P \, \kappa \qquad {\text{s.t.}}\qquad P &\succeq 0 \\ P_{i,i} &= 1 & & \forall i \nonumber \\ P_{i,j} &\le -(\kappa-1)^{-1} & & \forall (i,j) \in E \nonumber\end{aligned}$$ A number of authors have studied the behavior of this and related semidefinite programs on sparse random graphs. In [@coja2005lovasz], Coja-Oghlan shows concentration of the [Lovàsz ]{}$\vartheta$ function for $G \sim {{\mathcal{G}}({n}, {d/n})}$, and an additional result that translates in our setting to ${\chi_v}(G) = \Theta(\sqrt d)$, albeit with non-optimal constants. Montanari and Sen in [@montanari16] study a semidefinite programming algorithm for the problem of distinguishing ${{\mathcal{G}}({n}, {d/n})}$ from a planted model guaranteed to have a coloring or community structure, calculating its likely value up to an additive $o_d(1)$; the SDP that they consider is similar to but incomparable with ours, as they are not concerned with refutation.
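For concreteness, the program in Eq. (\[chromatic-sdp\]) is easy to solve on small instances: with the substitution $t = (\kappa-1)^{-1}$ it becomes linear in the decision variables, so one can maximize $t$ and read off ${\chi_v}= 1 + 1/t$. A minimal sketch (assuming the `cvxpy` and `networkx` packages, with vertices labeled $0,\dots,n-1$) is:

```python
import cvxpy as cp
import networkx as nx

def vector_chromatic_number(G):
    """Solve Eq. (chromatic-sdp): maximize t s.t. P is PSD, P_ii = 1, P_ij <= -t on edges."""
    n = G.number_of_nodes()
    P = cp.Variable((n, n), symmetric=True)
    t = cp.Variable()
    constraints = [P >> 0, cp.diag(P) == 1]
    constraints += [P[i, j] <= -t for i, j in G.edges()]
    cp.Problem(cp.Maximize(t), constraints).solve()
    return 1.0 + 1.0 / t.value

print(vector_chromatic_number(nx.cycle_graph(5)))   # ~ 2.236, i.e. sqrt(5) for the 5-cycle
```

On the 5-cycle the optimal vectors can be taken in the plane at angles $4\pi i/5$, giving $t^{\ast} = \cos(\pi/5)$ and hence ${\chi_v}(C_5) = 1 + 1/\cos(\pi/5) = \sqrt 5 \approx 2.236$, which the solver recovers numerically.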
Our main theorem characterizes the vector chromatic number of sparse graphs up asymptotically inconsequential terms as the average degree tends to infinity. This strengthens [@coja2005lovasz], pinning down the constant exactly and substantially simplifying the method of proof. \[thm:main\] When $G \sim {{\mathcal{G}}({n}, {d/n})}$, with probability $1 - o_n(1)$, $$\frac{d^{3/2}}{2d - 1} + 1 - o_n(1) \le \chi_v(G) \le \max\left\{\frac{d+1}{2\sqrt d} + 2, 4\right\}.$$ In other words, we determine that the threshold in $k$ below which the vector chromatic number can prove $G \sim {{\mathcal{G}}({n}, {d/n})}$ is not $k$-colorable, and above which it cannot, is $k = \tfrac{1}{2}\sqrt d + 1 + o_d(1)$. The careful reader will note that, although this matches the scaling of the Kesten-Stigum threshold, the constant factor out front is different: we have shown that refutation with the vector chromatic number becomes impossible when the average degree $d \gtrsim 4 d_{{\textsc{ks}},k}$. This shows that the conjectured “hard regime” $d_{{\textsc{first}},k} < d < d_{{\textsc{ks}},k}$ indeed stymies our refutation algorithm. Our result complements a result of Banks, Kleinberg, and Moore [@banks2019lovasz], who have proved that in random $d$-regular graphs, $\chi_v(G)$ is similarly concentrated, and fails to refute $k$-coloring as well at four times that model’s KS threshold. Together, these two papers raise a natural question: is this $4d_{{\textsc{ks}},k}$ scaling a fundamental barrier for efficient refutation, or can more elaborate methods (perhaps constantly many rounds of the Sum-of-Squares algorithm) succeed all the way down to the Kesten-Stigum threshold itself? Roadmap and Results {#sec:roadmap_and_results} =================== Banks et al. prove a lower bound on the vector chromatic number with a spectral argument, relying on Friedman’s theorem [@friedman2003proof] to bound the smallest eigenvalue of the adjacency matrix of a random $d$-regular graph. The upper bound comes from an explicit construction of a feasible solution for the semidefinite program, using orthogonal polynomials. However, neither their upper nor lower bound extend to the ${{\mathcal{G}}({n}, {d/n})}$ model: the spectrum of the adjacency matrix is poorly behaved in random graphs, and the use of orthogonal polynomials requires the graph to be regular. Instead, we will prove Theorem \[thm:main\] by way of two deterministic results bounding the vector chromatic number of generic graphs. Both bounds are proved by way of non-backtracking walks. To state our results, let $G = (V,E)$ be an undirected graph on $|V| = n$ vertices, and denote by $A$, $D$, and $B$ its adjacency, diagonal degree, and non-backtracking matrices. We will introduce $B$ in detail below, but for now it is important only that it is a non-normal matrix with zero-one entries. Although its spectrum may be complex-valued, we verify in the sequel that the Perron-Frobenius theorem guarantees one real eigenvalue equal to the spectral radius, which we will denote $\operatorname{spr}(B) \triangleq \rho$. This quantity coincides with the growth rate of $G$’s universal covering tree, and its square root is the spectral radius of the non-backtracking operator on this infinite graph [@angel2015non; @terras2010zeta]. Our first deterministic result is that the spectrum of $B$ can certify non-colorability. 
\[thm:lower\] If $r$ is any lower bound on the smallest real eigenvalue of $B$, and $d_{\operatorname{avg}}$ is the average degree of $G$, then $$\chi_v(G) \ge \frac{|rd_{\operatorname{avg}}|}{r^2 + d_{\operatorname{avg}} - 1} + 1.$$ To prove this lower bound, we use the celebrated Ihara-Bass identity (forthcoming in Theorem \[thm:ihara\]) to relate the spectrum of $B$ to a family of symmetric matrices, $$L(z) \triangleq z^2{\mathbbm{1}}- zA + D - {\mathbbm{1}}\qquad z\in{\mathbb{C}},$$ known variously as the *deformed Laplacian* or Bethe Hessian [@saade2014spectral; @kotani2000zeta; @angel2015non; @bass1992ihara; @hashimoto1989zeta]. It is observed in [@fan2017well p.13] that spectral assumptions on $B$ imply positive semidefiniteness of $L(z)$ for certain $z$ on the real line; we use these PSD matrices in a dual argument to lower bound $\chi_v(G)$. By a corollary of Bordenave et al. [@bordenave-lelarge-massoulie], when $G\sim {{\mathcal{G}}({n}, {d/n})}$ we can with probability $1 - o_n(1)$ take $r \approx -\sqrt d$, giving the lower bound in Theorem \[thm:main\]. Second, we derive a girth-dependent upper bound on ${\chi_v}(G)$. \[thm:upper\] If $\operatorname{girth}(G) \ge 2m + 1$, then $${\chi_v}(G) \le \frac{\rho + 1}{2(1 - 1/m)\sqrt\rho} + 1.$$ The feasible vector coloring we construct in the proof of Theorem \[thm:upper\] assigns an $n$-dimensional unit vector $v_i$ to each vertex $i\in V$, whose coordinates we think of as again being indexed by $V$. In our construction, the coordinate $(v_i)_j$ is proportional to the *square root of the probability of going from $i$ to $j$ in a certain non-backtracking random walk* of length equal to the distance between $i$ and $j$. This builds on the key idea in Srivastava and Trevisan’s lower bound results for spectral sparsification [@srivastava2018alon], and in the $d$-regular case recovers the result from Banks et al. [@banks2019lovasz]. Graphs drawn from ${{\mathcal{G}}({n}, {d/n})}$ have $\rho \approx d$, and this holds even if we condition on the constant probability event that the girth is any large constant of our choosing. Thus we can, with small albeit constant probability, construct $k$-vector colorings with $k$ arbitrarily close to $k = \tfrac{d+1}{2\sqrt{d}} + 1$. Finally, we adapt a well-known martingale technique developed in [@shamir1987sharp; @luczak; @achlioptas-moore-reg; @banks2019lovasz] to guarantee, with high probability, a solution of similar cost. The above construction can be used to prove two notable corollaries. First, it is also a near-optimal solution to the Goemans-Williamson relaxation of [MaxCut]{.smallcaps} in ${{\mathcal{G}}({n}, {d/n})}$ random graphs [@goemans1995improved]. Rounding with random hyperplanes yields a cut of cost $$|E| \cdot \left( \frac 12 + \frac {2-o_d(1) }{\pi} \cdot \frac 1 {\sqrt d} \right),$$ which we believe is the strongest known algorithmically attainable lower bound on the maximum cut in ${{\mathcal{G}}({n}, {d/n})}$ random graphs (a tight bound is known, but the argument is not algorithmic [@dembo2017extremal]). In fact, this extends to any high-girth graph: \[cor:maxcut\] If $\operatorname{girth}(G) \ge 2m + 1$, $$\operatorname{\textup{\textsc{MaxCut}}}(G) \ge |E|\left(\frac{1}{2} + \frac{2(1 - 1/m)\sqrt\rho}{\pi(\rho + 1)}\right).$$ Second, the vectors from Theorem \[thm:upper\] can be used to prove a generalized Alon-Boppana-type theorem concerning the deformed Laplacian $L(z)$.
The standard Alon-Boppana theorem [@nilli1991second] states that $d$-regular graphs with high diameter have eigenvalues arbitrarily close to $2\sqrt{d-1}$; it has been refined and extended in numerous ways [@davidoff2003elementary §1.3-3][@friedman1993some §3][@nilli2004tight], and our result generalizes the fact that regular graphs of large *girth* have eigenvalues approaching $-2\sqrt{d-1}$. One can verify that these negative eigenvalues translate to eigenvalues of $L(z) = z^2{\mathbbm{1}}- zA + D - {\mathbbm{1}}$ close to $(z + \sqrt{d-1})^2$ for every $z< 0$. For regular graphs $d-1$ is, among other things, the spectral radius of $B$, and we prove a direct generalization in this sense. \[cor:alon\] If $G$ has girth at least $2m + 1$, then for every $z<0$, $$L(z) \not\succeq (z + \sqrt\rho)^2 - 2\sqrt\rho z/m.$$ We will prove Theorems \[thm:lower\] and \[thm:upper\] in §\[sub:lower-pf\]-\[sub:upper-pf\] after first developing some preliminary results on non-backtracking walks in §\[sub:preliminary\_material\]. Having done so, we prove Theorem \[thm:main\] in §\[sub:main-pf\] and wrap up in §\[sub:corollaries\] with the two corollaries above.

Optimality and Irregular Ramanujan Graphs {#sub:optimality_ramanujan_graphs_and_further_questions}
-----------------------------------------

The best possible setting of $r$ in Theorem \[thm:lower\] is $-\sqrt{d_{\operatorname{avg}} - 1}$, at which point we obtain the bound $${\chi_v}(G) \ge \frac{d_{\operatorname{avg}}}{2\sqrt{d_{\operatorname{avg}} - 1}} + 1.$$ In the case of $d$-regular Ramanujan graphs—those for which the nontrivial eigenvalues of the adjacency matrix have magnitude at most $2\sqrt{d-1}$—this matches the standard spectral bound on the chromatic number. For regular graphs, the Ramanujan property is equivalent to every nontrivial eigenvalue of $B$ having magnitude at most $\sqrt{d-1}$; since $\rho = d-1$ in the regular case, some authors define an irregular graph as Ramanujan if its nontrivial non-backtracking eigenvalues have modulus at most $\sqrt\rho$ [@bordenave-lelarge-massoulie; @lubotzky1995cayley]. If a graph is Ramanujan in this sense, we can take $r = -\sqrt\rho$, giving $${\chi_v}(G) \ge \frac{d_{\operatorname{avg}}\sqrt\rho}{\rho + d_{\operatorname{avg}} - 1} + 1;$$ this could only match our upper bound in the case $\rho = d_{\operatorname{avg}} - 1$, which is true for regular graphs, approximately true for random graphs, and fails generically. What “Ramanujan” assumption on the spectrum of $B$ implies the converse of Theorem \[thm:upper\]? Is it enjoyed, either approximately or exactly, by random graphs?

Proofs {#sec:proofs}
======

Notation and Non-backtracking Preliminaries {#sub:preliminary_material}
-------------------------------------------

We will write $\operatorname{Spec}X$ for the unordered set of eigenvalues of a matrix $X$, $\operatorname{spr}X$ for the modulus of its largest eigenvalue, and use the standard notation $X \succeq 0$ to indicate that a (Hermitian) matrix is positive semidefinite, or in other words that $\operatorname{Spec}X \subset {\mathbb{R}}_{\ge 0}$. For two matrices $X$ and $Y$, $X\odot Y$ will denote the entry-wise product and $\langle X,Y \rangle = \operatorname{tr}YX^\ast = \sum_{i,j} \overline{X_{i,j}}Y_{i,j}$ the Frobenius inner product. It is a standard lemma that $X,Y \succeq 0$ implies $X\odot Y \succeq 0$ as well, and that $\langle X,Y\rangle \ge 0$. The set of integers $\{1,...,k\}$ will be denoted by $[k]$.
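The two "standard lemma" facts just quoted—the Schur product theorem and nonnegativity of the Frobenius inner product of PSD matrices—are used repeatedly below; a quick numerical sanity check (our sketch, with randomly generated PSD matrices, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
G, H = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X, Y = G @ G.T, H @ H.T              # two random PSD matrices

hadamard = X * Y                     # entry-wise (Schur) product X ⊙ Y
frob = np.sum(X * Y)                 # ⟨X, Y⟩ for real symmetric X, Y

print(np.min(np.linalg.eigvalsh(hadamard)) >= -1e-10)   # X ⊙ Y ⪰ 0
print(frob >= 0)                                        # ⟨X, Y⟩ ≥ 0
```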
To an unweighted, undirected, and connected graph $G = (V,E)$ on $n$ vertices, we will associate an *adjacency matrix* $A$, *diagonal degree matrix* $D$, and shortest path distance metric $\operatorname{dist}: V \times V \to {\mathbb{N}}$. Although $G$ is undirected, it will be useful to think of each edge $(i,j) \in E$ as a pair of directed edges $i\to j$ and $j\to i$; we’ll call the set of these directed edges ${\accentset{\rightharpoonup}{E}}$. For each vertex $i$, write $\partial i$ for the set of neighbors of $i$. The central object in our proofs will be the *non-backtracking matrix* associated to $G$; this is a linear operator on ${\mathbb{C}}^{2m}$, which we will think of as the vector space of functions ${\accentset{\rightharpoonup}{E}}\to {\mathbb{C}}$. Indexing the standard basis of ${\mathbb{C}}^{2m}$ by the elements of ${\accentset{\rightharpoonup}{E}}$, $$B_{i\to j, k\to \ell} = 1 \qquad \text{ if $j = k$ and $i \neq \ell$},$$ and zero otherwise. True to its name, the powers of $B$ encode walks on $G$ which are forbidden from returning along the same edge that they have just traversed. The reader may verify that $B$ is a non-normal operator, and therefore its spectrum is in general a complicated subset of the complex plane. Since its entries are nonnegative, however, we can apply the Perron-Frobenius theorem after carefully analyzing the reducibility and periodicity of $B$. The following result, collating [@terras2010zeta Corollary 11.12] and [@kotani2000zeta Proposition 3.1], characterizes these attributes. \[prop:perron\] Let $G$ be connected. The spectrum of $B$ depends only the $2$-core of $G$, and once we restrict to this core, $B$ is reducible if and only if $G$ is a cycle. Finally, $B$ has even period if and only if $G$ is bipartite, and odd period $p$ if and only if $G$ is a *subdivision*, e.g. if it is obtained by replacing in a smaller graph $H$ every edge with a path of length $p$. From the perspective of coloring, bipartite graphs and subdivisions are are uninteresting, and vertices outside the $2$-core cannot impact the chromatic number, so let us assume from this point that $G$ is non-bipartite and non-subdivided, with minimum degree two. In this case, the Perron-Frobenius theorem tells us that $\operatorname{spr}B \triangleq \rho \in \operatorname{Spec}B$, and that the corresponding left and right eigenvectors have positive entries; this positivity will be important, and is the reason we stated Proposition \[prop:perron\] in such detail. An invaluable tool for further analyzing the spectral properties of $B$ is a classic result relating its characteristic polynomial to the determinant of a quadratic matrix-valued function involving $A$ and $D$ and due in various forms to Ihara, Bass, and Hashimoto; see [@kotani2000zeta; @angel2015non; @bass1992ihara; @hashimoto1989zeta], to name just a few. \[thm:ihara\] For any graph $G$, $$\det(z{\mathbbm{1}}- B) = (z^2 - 1)^{|E| - |V|}\det(z^2{\mathbbm{1}}- zA + D - {\mathbbm{1}}).$$ We will refer to the matrix-valued quadratic $$L(z) \triangleq z^2{\mathbbm{1}}- zA + D - {\mathbbm{1}}$$ as the *deformed Laplacian*; note that when evaluated at $z = \pm 1$ it returns the standard and ‘signless’ Laplacians $D \pm A$. The former is always singular, and the latter if and only if $G$ is bipartite, so given our assumptions $B$ has an eigenvalue at $+1$ with multiplicity $|E| - |V| + 1$, and one at $-1$ with multiplicity $|E| - |V|$. The remaining eigenvalues correspond to $z \in {\mathbb{C}}$ for which $L(z)$ is singular. 
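A self-contained numerical check of Theorem \[thm:ihara\] may help fix the definitions. The following sketch is ours (not from the paper): it builds $B$ for the Petersen graph and compares both sides of the identity at a few points $z$.

```python
import numpy as np

# Petersen graph: 3-regular, girth 5, n = 10, |E| = 15.
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)]
n = 10
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
D = np.diag(A.sum(axis=1))

# Directed edges ("darts") and the non-backtracking matrix B.
darts = edges + [(j, i) for i, j in edges]
B = np.array([[1.0 if (j == k and i != l) else 0.0 for (k, l) in darts]
              for (i, j) in darts])

# Ihara-Bass: det(zI - B) = (z^2 - 1)^{|E| - |V|} det(z^2 I - z A + D - I).
for z in (0.3 + 0.2j, -1.7, 2.5j):
    lhs = np.linalg.det(z * np.eye(len(darts)) - B)
    rhs = (z**2 - 1) ** (len(edges) - n) * \
          np.linalg.det(z**2 * np.eye(n) - z * A + D - np.eye(n))
    print(np.isclose(lhs, rhs))   # True at every z
```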
The key lemma for Theorem \[thm:lower\] relates the spectrum of $B$ to the semidefiniteness of $L(z)$ for negative $z$; we first encountered it in [@fan2017well p. 13]. \[lem:L-pos\] For any lower bound $r \in {\mathbb{R}}$ on the smallest real eigenvalue of $B$, $L(r) \succeq 0$. For $r \in {\mathbb{R}}$, the matrices $L(r)$ are symmetric with real spectrum. When $r \ll 0$, $L(r) \succeq 0$ by a simple diagonal dominance argument. It is a standard result that the eigenvalues of a matrix are continuous functions of its entries, so as we increase $r$, the only way $L(r)$ can fail to be PSD is for one of its eigenvalues to cross zero. However, by Theorem \[thm:ihara\] $L(r)$ cannot be singular for any real $r$ smaller than the smallest real eigenvalue of $B$.

Theorem \[thm:lower\]: The Ihara-Bass Identity and Deformed Laplacian {#sub:lower-pf}
---------------------------------------------------------------------

Let $P \succeq 0$ be any positive semidefinite matrix. Writing $r_\ast$ for the smallest real eigenvalue of $B$, Lemma \[lem:L-pos\] implies $$\label{eq:innerprod} 0 \le \langle P, L(r)\rangle = r^2\operatorname{tr}P - r\langle P, A \rangle + \langle P,D - {\mathbbm{1}}\rangle$$ for every $r \le r_\ast$. One can check that, subject to the constraint $r\le r_\ast$, the strongest conclusion is obtained by taking $r$ to be the smaller of $r_\ast$ and $-\sqrt{\langle P, D - {\mathbbm{1}}\rangle/\operatorname{tr}P}$. As an aside, we’ve shown: \[lem:real-ram\] If $G$ is non-bipartite, and $B$ has no real eigenvalues other than $\pm 1$ and $\rho$, then for any $P \succeq 0$, $$\langle A,P \rangle \ge -2\sqrt{\operatorname{tr}P\,\langle D-{\mathbbm{1}}, P \rangle}.$$ In the $d$-regular case, Theorem \[thm:ihara\] implies that a non-bipartite graph $G$ is Ramanujan if and only if $B$ has no real eigenvalues besides $\pm 1$ and $\rho = d-1$, and that this condition implies $\langle A, P \rangle \ge -2\operatorname{tr}P\sqrt{d-1}$. Thus Lemma \[lem:real-ram\] suggests that this condition on the spectrum of $B$ may be a natural notion of the Ramanujan property for irregular graphs. The proof of Theorem \[thm:lower\] will follow from a stronger result: $$\begin{aligned} \chi_v(G) \ge \max_{r < r_\ast} \max_W \frac{-r\langle W, A \rangle}{r^2 + \langle W, D - {\mathbbm{1}}\rangle} + 1 \qquad \textup{s.t.} \qquad W &\succeq 0 \\ \operatorname{tr}W &= 1 \nonumber \\ W_{i,j} &\ge 0 \text{ for all $(i,j) \in E$} \nonumber\end{aligned}$$ Let $W$ satisfy the three conditions above, and assume that $X \succeq 0$ is the Gram matrix witnessing ${\chi_v}(G) = \kappa$, so that $X$ has ones on its diagonal and $X_{i,j} \le -(\kappa - 1)^{-1}$ if $(i,j) \in E$. We can set $P = X \odot W$ in \eqref{eq:innerprod}, so that $$0 \le \langle X \odot W, L(r) \rangle \le r^2 + \frac{r}{\kappa - 1}\langle W,A \rangle + \langle W,D - {\mathbbm{1}}\rangle.$$ To prove Theorem \[thm:lower\], set $W_{i,j} = 1/n$ for every $i,j$, so that $\langle W, A\rangle = d_{\operatorname{avg}}$ and $\langle W, D - {\mathbbm{1}}\rangle = d_{\operatorname{avg}} - 1$. It is a priori possible that, by carefully tuning $W$, this result could be improved to meet the high-girth limit of the upper bounds in Theorem \[thm:upper\]. We have observed numerically, however, that this is not the case.

Theorem \[thm:upper\]: A Non-backtracking Random Walk {#sub:upper-pf}
-----------------------------------------------------

To prove Theorem \[thm:upper\], we need to produce unit vectors ${v}_i$ for every $i \in V$, so that the maximum of ${\left\langle {v}_i , {v}_j \right\rangle}$ over all $(i,j) \in E$ is as negative as possible.
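Before carrying out that construction, here is a small numerical illustration (ours, not part of the paper) of the certificate just proved: take $r$ to be a lower bound on the smallest real eigenvalue of $B$, confirm $L(r) \succeq 0$ as in Lemma \[lem:L-pos\], and evaluate the bound of Theorem \[thm:lower\]; the Petersen graph is chosen only for concreteness.

```python
import numpy as np

# Petersen graph: 3-regular on 10 vertices, chromatic number 3.
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)]
n = 10
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
D = np.diag(A.sum(axis=1))

darts = edges + [(j, i) for i, j in edges]
B = np.array([[1.0 if (j == k and i != l) else 0.0 for (k, l) in darts]
              for (i, j) in darts])

# The minimum real part over all eigenvalues is a valid (if conservative)
# lower bound on the smallest real eigenvalue of B.
r = float(np.linalg.eigvals(B).real.min())
d_avg = A.sum() / n
r = min(r, -np.sqrt(d_avg - 1))      # the "best possible setting" discussed above

L = r**2 * np.eye(n) - r * A + D - np.eye(n)
print(np.min(np.linalg.eigvalsh(L)) >= -1e-8)        # Lemma [lem:L-pos]: L(r) is PSD
print(abs(r) * d_avg / (r**2 + d_avg - 1) + 1)       # Theorem [thm:lower]'s certificate
```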
Assume that $\operatorname{girth}(G) \ge 2m + 1$, so that in particular if any two vertices are at distance at most $m$, they are connected by a unique non-backtracking (and, indeed, self-avoiding) walk of length equal to their distance. Borrowing an insight of [@srivastava2018alon], we will construct these vectors from a non-backtracking random walk on the vertices of $G$. By this we mean a random walk which, started at some vertex $i$, chooses on its first step one of the neighbors of $i$, and on subsequent steps makes only non-backtracking moves. Write $X_s$ for the random variable encoding the position of the walk at time $s$, and ${\mathbb{P}}_i$ for its distribution upon starting the walk at vertex $i$. We will remain for the moment agnostic as to the actual transition probabilities, so that it is clear which portions of the argument depend on them, and which do not. The $v_i$ will be built as follows: set each to have one coordinate for each $j \in V$, with $$({v}_i)_j = \frac{1}{\sqrt{m}}(-1)^{\operatorname{dist}(i,j)}\sqrt{{\mathbb{P}}_i[X_{\operatorname{dist}(i,j)} = j]} \qquad \text{if $1 \le \operatorname{dist}(i,j) \le m$, and zero otherwise.}$$ We’ve arranged things so that $$\|{v}_i\|^2 = \frac{1}{m}\sum_{s\in [m]} \sum_{j : \operatorname{dist}(i,j) = s} {\mathbb{P}}_i[X_s = j] = 1,$$ since after $s$ steps the walk has probability one of reaching *some* vertex at distance $s$ from its starting point. It remains to study the inner products between pairs of vectors at neighboring vertices. For any $(i,j) \in E$, the inner product depends only on vertices at distance at most $m$ from both $i$ and $j$. Because of our initial girth assumption, the depth-$m$ neighborhoods of $i$ and $j$ together form a tree in which every vertex $\ell$ satisfies $|\operatorname{dist}(i,\ell) - \operatorname{dist}(j,\ell)| = 1$, and we can divide this into a portion $L$ of vertices closer to $i$ than $j$, and its counterpart $R$ closer to $j$ than $i$. Let us further segment $L$ into layers $\{i\} = L_0,L_1,...,L_{m-1}$ according to distance from $i$, and similarly for $R$.
*(Figure: the depth-$m$ neighborhoods of the adjacent vertices $i$ and $j$ form a tree, partitioned into layers $L_0, L_1, L_2, \dots$ on the side of $i$ and $R_0, R_1, R_2, \dots$ on the side of $j$.)*

Then, directly computing, $$\begin{aligned} {\left\langle {v}_i , {v}_j \right\rangle} &= \frac{1}{m}\sum_{\ell:\operatorname{dist}(i,\ell),\operatorname{dist}(j,\ell) \in [m]} (-1)^{\operatorname{dist}(i,\ell) + \operatorname{dist}(j,\ell)} \sqrt{{\mathbb{P}}_i[X_{\operatorname{dist}(i,\ell)} = \ell]{\mathbb{P}}_j[X_{\operatorname{dist}(j,\ell)} = \ell]} \\ &= \frac{-1}{m}\sum_{s\in[m-1]}\left(\sum_{\ell \in L_s}\sqrt{{\mathbb{P}}_i[X_s = \ell]{\mathbb{P}}_j[X_{s+1} = \ell]} + \sum_{\ell \in R_s} \sqrt{{\mathbb{P}}_i[X_{s+1} = \ell]{\mathbb{P}}_j[X_s = \ell]}\right).\end{aligned}$$ The non-backtracking structure of the random walk, and the local tree-like configuration near $i$ and $j$, allow us to simplify this expression further. When $s\ge 1$ and $\ell \in L_s$, conditioning on the first step of the walk started at $j$ (which must be $i$ if the walk is to reach $\ell$) gives ${\mathbb{P}}_j[X_{s+1} = \ell] = {\mathbb{P}}_j[X_{s+1} = \ell\mid X_1 = i]\,{\mathbb{P}}_j[X_1 = i].$ Now, by non-backtracking, the probability of reaching $\ell$ in $s+1$ steps starting from $j$, conditional on reaching $i$ on the first step, is the same as the probability of reaching $\ell$ in $s$ steps starting at $i$, conditional on the first step not hitting $j$. We can write this conditional probability as ${\mathbb{P}}_i[X_s = \ell \mid X_1 \neq j] = \frac{{\mathbb{P}}_i[X_s = \ell, X_1 \neq j]}{{\mathbb{P}}_i[X_1 \neq j]}.$ Finally, since $\operatorname{dist}(j,\ell) = s+1$, a walk of length $s$ from $i$ ending at $\ell$ cannot begin with a step to $j$; the information that $X_1 \neq j$ is therefore redundant once we know that the walk starts at $i$ and reaches $\ell$ in $s$ steps, so ${\mathbb{P}}_i[X_s = \ell, X_1 \neq j] = {\mathbb{P}}_i[X_s = \ell]$. Putting together these steps gives us $${\mathbb{P}}_j[X_{s+1} = \ell] = \frac{{\mathbb{P}}_i[X_s = \ell]}{{\mathbb{P}}_i[X_1 \neq j]}\,{\mathbb{P}}_j[X_1 = i],$$ and thus $$\begin{aligned} {\left\langle {v}_i , {v}_j \right\rangle} &= -\frac{1}{m}\left(\sum_{s\in[m-1]}\left(\sum_{\ell \in L_s} \sqrt{\frac{{\mathbb{P}}_j[X_1 = i]}{{\mathbb{P}}_i[X_1\neq j]}}{\mathbb{P}}_i[X_s = \ell] + \sum_{\ell \in R_s}\sqrt{\frac{{\mathbb{P}}_i[X_1 = j]}{{\mathbb{P}}_j[X_1\neq i]}}{\mathbb{P}}_j[X_s = \ell]\right)\right) \\ &= -(1-1/m)\left(\sqrt{{\mathbb{P}}_j[X_1 = i]{\mathbb{P}}_i[X_1 \neq j]} + \sqrt{{\mathbb{P}}_i[X_1 = j]{\mathbb{P}}_j[X_1 \neq i]}\right).\end{aligned}$$ We now choose the transition probabilities for our random walk, having simplified the dependence on them of the inner products we are interested in.
Recall from the Perron-Frobenius theorem that, under our assumptions on $G$ (simple, minimum degree $2$, non-subdivided), $\rho$ is a simple eigenvalue of $B$, and that its corresponding left and right eigenvectors have strictly positive entries. Let’s denote the right eigenvector by ${\phi}$, and record explicitly that $$\label{eq:eig} \sum_{k \in \partial j \setminus i} \phi_{j\to k} = \rho \phi_{i \to j} \qquad \forall i\to j \in {\accentset{\rightharpoonup}{E}}.$$ It will be useful to overload notation and define $\phi_i \triangleq \sum_{j \in \partial i} \phi_{i \to j}$, observing that \[eq:eig\] implies $\phi_i = \phi_{i \to j} + \rho \phi_{j \to i}$ for every $j \in \partial i$. We will set the transition probabilities of our random walk proportional to the coordinates of ${\phi}$. In other words, $${\mathbb{P}}_i[X_1 = j] = \frac{\phi_{i\to j}}{\phi_i} \qquad \text{if $i\to j \in {\accentset{\rightharpoonup}{E}}$}$$ and $${\mathbb{P}}_i\left[X_s = \ell \mid X_{s-1} = k, X_{s-2} = j \right] = \frac{\phi_{k\to \ell}}{\rho\phi_{j\to k}} \qquad \text{if $s > 1$ and $j\to k \to \ell$ is non-backtracking}.$$ Normalization follows immediately from the fact that $\phi$ is a right eigenvector. Returning to the inner product between $v_i$ and $v_j$, $$\begin{aligned} {\left\langle {v}_i , {v}_j \right\rangle} &= -(1-1/m)\left(\sqrt{{\mathbb{P}}_j[X_1 = i]{\mathbb{P}}_i[X_1 \neq j]} + \sqrt{{\mathbb{P}}_i[X_1 = j]{\mathbb{P}}_j[X_1 \neq i]}\right) \\ &= -(1-1/m)\frac{\sqrt\rho(\phi_{i\to j} + \phi_{j\to i})}{\sqrt{\phi_i \phi_j}} \\ &= -(1-1/m)\frac{\sqrt\rho}{\rho + 1}\frac{\phi_i + \phi_j}{\sqrt{\phi_i\phi_j}} & &\text{from \eqref{eq:eig} and discussion}\\ &\le -(1-1/m)\frac{2\sqrt\rho}{\rho + 1},\end{aligned}$$ with the final line following (for instance) the inequality of arithmetic and geometric means. Theorem \[thm:main\] {#sub:main-pf} -------------------- We are now prepared to study the vector chromatic number of $G \sim {\mathcal{G}}(n,d/n)$. To prove Theorem \[thm:main\], we first need to supply a lower bound on ${\chi_v}(G)$—this will follow immediately from Theorem 2, and an established result on the spectrom of $B$ in the case [@bordenave-lelarge-massoulie Theorem 3]: (Bordenave, Lelarge, and Massoulie) When $G \sim {\mathcal{G}}(n,d/n)$, with probability $1 - o_n(1)$, the spectrum of $B$ consists of a Perron eigenvalue at $d \pm o_n(1)$, and remaining eigenvalues of magnitude at most $\sqrt{d} + o_n(1)$. This result in hand, we know w.h.p. the smallest real eigenvalue of $B$ is no smaller than $-\sqrt d - o_n(1)$, and so Theorem \[thm:lower\] tells us $${\chi_v}(G) \ge \frac{d^{3/2}}{2d - 1} + 1 + o_n(1)$$ w.h.p. as well. We need to show how to apply Theorem \[thm:upper\] to bound ${\chi_v}(G)$ from above. It is a standard lemma that for any constant $\gamma$, $\operatorname{girth}(G) \ge \gamma$ with constant probability. On this event, the results of Theorem 6 on the spectrum of $G$ still hold with probability $1 - o_n(1)$, and the average degree of $G$ is still $d \pm o_n(1)$, so we can apply Theorem 1 and deduce that, for any $\epsilon$ and any $d$, $${\chi_v}(G) \le \frac{d+1}{2\sqrt d} + 1 + \epsilon$$ with probability bounded away from zero. 
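As an aside before the concentration argument, the walk-based construction of Theorem \[thm:upper\] is easy to instantiate numerically. The sketch below is ours (it uses the Petersen graph, girth $5$, so $m = 2$): it extracts the Perron eigenvector $\phi$ of $B$ by power iteration, runs the $\phi$-weighted non-backtracking walk, builds the vectors $v_i$, and checks the unit norms and the edge inner products against the bound $-(1-1/m)\tfrac{2\sqrt\rho}{\rho+1}$.

```python
import numpy as np

# Petersen graph: 3-regular, girth 5, so we may take m = 2.
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)]
n, m = 10, 2
nbrs = {v: set() for v in range(n)}
for i, j in edges:
    nbrs[i].add(j); nbrs[j].add(i)

darts = edges + [(j, i) for i, j in edges]
idx = {d: a for a, d in enumerate(darts)}
B = np.array([[1.0 if (j == k and i != l) else 0.0 for (k, l) in darts]
              for (i, j) in darts])

# Perron eigenpair of B by power iteration (phi has strictly positive entries).
phi = np.ones(len(darts))
for _ in range(1000):
    phi = B @ phi
    phi /= np.linalg.norm(phi)
rho = float(np.linalg.norm(B @ phi))
phi_v = {i: sum(phi[idx[(i, j)]] for j in nbrs[i]) for i in range(n)}

def walk_dist(i, s):
    """Distribution of the phi-weighted non-backtracking walk after s steps."""
    out = {}
    def rec(prev, cur, prob, steps):
        if steps == s:
            out[cur] = out.get(cur, 0.0) + prob
            return
        for nxt in nbrs[cur]:
            if nxt != prev:
                rec(cur, nxt,
                    prob * phi[idx[(cur, nxt)]] / (rho * phi[idx[(prev, cur)]]),
                    steps + 1)
    for j in nbrs[i]:
        rec(i, j, phi[idx[(i, j)]] / phi_v[i], 1)
    return out

def dists_from(i):
    """Breadth-first-search distances from vertex i."""
    d, frontier = {i: 0}, [i]
    while frontier:
        nxt = []
        for u in frontier:
            for w in nbrs[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    nxt.append(w)
        frontier = nxt
    return d

V = np.zeros((n, n))
for i in range(n):
    dist = dists_from(i)
    for s in range(1, m + 1):
        for j, pr in walk_dist(i, s).items():
            if dist[j] == s:                      # automatic when girth >= 2m + 1
                V[i, j] = (-1) ** s * np.sqrt(pr / m)

print(np.allclose((V ** 2).sum(axis=1), 1.0))                 # unit vectors
target = -(1 - 1 / m) * 2 * np.sqrt(rho) / (rho + 1)
print(max(V[i] @ V[j] for i, j in edges) <= target + 1e-8)    # edge inner products
```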
We now employ a martingale technique and combinatorial argument due to a string of papers establishing concentration for the chromatic number of graphs [@shamir1987sharp; @luczak; @achlioptas-moore-reg], and employed in [@banks2019lovasz] for a purpose analogous to ours; the presentation is indebted as well to [@balachandran Theorem 79]. Set $\kappa > 2$ and define a random variable $\Lambda\subset V$ as the largest set of vertices inducing a subgraph of $G$ with vector chromatic number at most $\kappa$. By the constant-probability bound just established, for any $\epsilon$, if we set $\kappa = \frac{d+1}{2\sqrt d} + 1 + \epsilon$ then $|\Lambda| = n$ with probability at least $\mu$, for some $\mu \in (0,1)$. Think of the random graph $G$ as being sampled in $n$ steps, where on the $i$th one we decide which of the edges will exist between vertex $i$ and the prior $i-1$ vertices. If we call $G_i$ the induced subgraph on vertices $[i]\subset V$, then the random variables $G_1,...,G_n = G$ induce an increasing sequence of sigma algebras, and the sequence $\operatorname*{\mathbb{E}}[|\Lambda| \mid G_i]$ is a martingale. The central claim in every application of this martingale method is that, as at each step we are revealing data about the neighborhood of a single vertex, the conditional expectation of $|\Lambda|$ can change by at most one: once the edges between $i$ and the previous vertices are revealed, we can simply delete $i$ from the graph, and our data about the remaining edges is unchanged. By Azuma’s inequality, then, $${\mathbb{P}}\left[|(n-|\Lambda|) - \operatorname*{\mathbb{E}}(n-|\Lambda|)| > t\sqrt n\right] \le 2e^{-t^2/2}.$$ Choosing $t$ so that $2e^{-t^2/2}<\mu$, we immediately have $0 \in (\operatorname*{\mathbb{E}}[n-|\Lambda|] - t\sqrt n,\operatorname*{\mathbb{E}}[n - |\Lambda|] + t\sqrt n)$, and thus $n - |\Lambda| \le 2t\sqrt n$ with probability at least $1 - \mu$. Now, let $\Upsilon \triangleq V\setminus \Lambda$ be the set of vertices which we *cannot* $\kappa$-vector color. We will show that this set can be expanded to one which induces a three-colorable subgraph of $G$, and whose boundary with the remaining $\kappa$-vector colorable portion of $G$ is an independent set. If there are two vertices $i,j \notin \Upsilon$ which are (1) connected to one another by an edge and (2) both connected to vertices in $\Upsilon$, form a set $\Upsilon_1 = \Upsilon \cup \{i,j\}$, and repeat this process to produce sets $\Upsilon \subset \Upsilon_1 \subset \cdots \subset \Upsilon_M$ until there are no such vertices to add. The boundary of $\Upsilon_M$ is an independent set (or else our expansion process could have continued for another step). Initially, $\Upsilon$ induces a subgraph with at least $|\Upsilon|/2$ edges (because if there were an isolated vertex, we could easily extend the vector coloring to it), and at each step, $|\Upsilon_t| = 2t + |\Upsilon|$, and $|E(\Upsilon_t)| \ge 3t + |E(\Upsilon)| \ge 3t + |\Upsilon|/2$. If our process progressed long enough for $|\Upsilon_t| = \alpha n$ for some $\alpha$, we’d have $t = (\alpha n - |\Upsilon|)/2$ and $$|E(\Upsilon_t)| \ge 3/2 (\alpha n - |\Upsilon|) + |\Upsilon|/2 = 3/2\alpha n - |\Upsilon|.$$ Since $|\Upsilon| = o(n)$, this means the average degree of the subgraph induced by $\Upsilon_t$ would be $3(1 - o(1))$. A union bound shows, though, that w.h.p. no subgraph on at most $\alpha n$ vertices, for $\alpha$ a small enough constant, has average degree this high, so the process must terminate when $|\Upsilon_M| = o(n)$.
Applying this union bound again, every subgraph of $|\Upsilon_M|$ must have average degree smaller than three, so $\Upsilon_M$ induces a subgraph with no three-core, and can be colored with three colors. We now need to produce a valid vector coloring on the entire graph, exploiting the preceding decomposition of $G$ into a subgraph with ${\chi_v}= \kappa$, one with $\chi = 3$, and a independent set separating them. Call $\{{v}_i\}_{i \in \Lambda}$ the vector coloring on $\Lambda$, and (perhaps by increasing the ambient dimension) let ${w}_1,{w}_2,{w}_3$ be three unit vectors pointing to the corners of a unilateral triangle, and ${\zeta}$ be a vector orthogonal to ${v}_i$ and ${w}_j$. Writing $\sigma : \Upsilon_M \to [3]$ for a valid three-coloring of $\Upsilon_M$, our vector coloring will be $$\begin{aligned} {z}_i = \begin{cases} \frac{\sqrt{\kappa^2 - 1}}{\kappa}{v}_i - \frac{1}{\kappa}{\zeta} & i \in \Lambda \\ {\zeta} & i \in \delta \Upsilon_M \\ \frac{\sqrt 8}{3}{w}_{\sigma(i)} - \frac{1}{3}{\zeta} & i \in \Upsilon_M \end{cases}\end{aligned}$$ One can now directly verify that $$\begin{aligned} {\left\langle {z}_i , {z}_j \right\rangle} &\le \begin{cases} -\frac{1}{\kappa} & \text{$i$ or $j$ is in $\Lambda$} \\ -\frac{1}{4} & \text{$i$ or $j$ is in $\Upsilon_M$}. \end{cases}\end{aligned}$$ Corollaries {#sub:corollaries} ----------- Our vectors ${v}_i$ from the proof of Theorem \[thm:upper\] can be used as input to the Goemans-Williamson rounding algorithm [@goemans1995improved] for producing large cuts in $G$. Let $X$ be the Gram matrix of the ${v}_i$, sample ${g} \sim {\mathcal{N}}(0, X)$, and partition vertices according to the sign of the coordinates of ${g}$. Calculation of the expected size of such a cut is standard: our vectors $v_i$ have inner product at most $-(1-1/m)\frac{2\sqrt\rho}{\rho + 1}$, so $$\begin{aligned} \operatorname*{\mathbb{E}}|\text{cut}| &= \sum_{(i,j)\in E} {\mathbb{P}}[\text{$g_i$ and $g_j$ have different signs}] \\ &= \sum_{(i,j)\in E} \frac{1}{\pi}\arccos{\left\langle v_i , v_j \right\rangle} \\ &\ge \sum_{(i,j)\in E} \left(\frac{1}{2} - \frac{{\left\langle v_i , v_j \right\rangle}}{\pi}\right) \\ &\ge |E|\left(\frac{1}{2} + \frac{1}{\pi}(1 - 1/m)\frac{2\sqrt\rho}{\rho + 1}\right) \end{aligned}$$ In the ${\mathcal{G}}(n,d/n)$ case, our martingale calculation guarantees with high probability a vector coloring whose inner products satisfy $${\left\langle v_i , v_j \right\rangle} \le - \frac{2\sqrt d}{(\sqrt d + 1)^2},$$ giving us a cut involving at least $$|E|\left(\frac{1}{2} + \frac{2}{\pi}\frac{\sqrt d}{(\sqrt d + 1)^2}\right) \approx |E|\left(\frac{1}{2} +0.63662\frac{\sqrt d}{(\sqrt d + 1)^2}\right)$$ edges. One can compare this to a non-algorithmic result of Dembo, Montanari, and Sen [@dembo2017extremal] that the actual maximum cut severs $$\approx|E|\left(\frac{1}{2} + 0.7632 \frac{1}{\sqrt{d}} + o_d(\sqrt d)\right)$$ edges with high probability. To prove Theorem 3, it suffices to produce a matrix $X \succeq 0$ with unit trace, and for which $\langle L(z), X \rangle$ is small. Returning to the vectors ${v}_i$ from the proof of Theorem 1, $$X_{i,j} = \sqrt{\phi_i\phi_j}{\left\langle {v}_i , {v}_j \right\rangle},$$ so that $X_{i,i} = \phi_i$ and $X_{i,j} = -(1 - 1/m)\sqrt\rho (\rho + 1)^{-1}(\phi_i + \phi_j)$ for $(i,j) \in E$. Let us scale $\phi$ so that $\operatorname{tr}X = \sum_i \phi_i = 1$. We will need one additional fact. 
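Before stating that additional fact, a brief aside on the rounding step invoked above: the identity ${\mathbb{P}}[\text{$g_i$ and $g_j$ have different signs}] = \tfrac{1}{\pi}\arccos{\left\langle v_i , v_j \right\rangle}$, and the estimate $\arccos(c)/\pi \ge 1/2 - c/\pi$ for $c \le 0$, can be checked by simulation; a tiny sketch of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hyperplane rounding separates two unit vectors with inner product c with
# probability arccos(c)/pi; for c <= 0 this is at least 1/2 - c/pi.
for c in (-0.8, -0.47, -0.1):
    g = rng.multivariate_normal([0.0, 0.0], [[1.0, c], [c, 1.0]], size=200_000)
    empirical = np.mean(np.sign(g[:, 0]) != np.sign(g[:, 1]))
    print(round(float(empirical), 3),
          round(float(np.arccos(c) / np.pi), 3),
          round(0.5 - c / np.pi, 3))
```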
Writing $d_i$ for the degree of vertex $i$, then from \eqref{eq:eig} and the surrounding discussion, $$\begin{aligned} \sum_i \phi_i d_i = \sum_i \sum_{j\in\partial i}(\phi_{i\to j} + \rho\phi_{j\to i}) = (\rho + 1)\sum_i \phi_i = \rho + 1. \end{aligned}$$ Using this and our calculations from the proof of Theorem \[thm:upper\], $$\begin{aligned} \langle X,L(z) \rangle &= z^2 + 2z(1-1/m)\frac{\sqrt\rho}{\rho + 1}\sum_{(i,j) \in E} (\phi_i + \phi_j) + \sum_i \phi_i(d_i - 1) \\ &= z^2 + 2(1-1/m)\sqrt\rho \, z + \rho. \end{aligned}$$ Writing out $L(z) = z^2{\mathbbm{1}}- zA + D - {\mathbbm{1}}$ and rearranging finishes the proof, since $z^2 + 2(1-1/m)\sqrt\rho\, z + \rho = (z + \sqrt\rho)^2 - 2\sqrt\rho\, z/m$. Notice also that we’ve shown Lemma \[lem:real-ram\] is asymptotically tight on high-girth graphs: $$\langle A, X \rangle = -2(1-1/m)\sqrt\rho = -2(1-1/m)\sqrt{\langle D - {\mathbbm{1}}, X\rangle}.$$

Acknowledgements {#sec:acknowledgements .unnumbered}
================

We are grateful to Nikhil Srivastava, Archit Kulkarni, and Satyaki Mukherjee for illuminating conversations. J.B. is supported by the NSF Graduate Research Fellowship Program under Grant DGE-1752814; L.T. is supported by NSF Grant CCF-1815434. [^1]: Corresponding Author
--- abstract: | In this paper we show that if $n\geq 5$ and $G$ is any of the groups $SU_n(q)$ with $n\neq 6,$ $Sp_{2n}(q)$ with $q$ odd, $\Omega_{2n+1}(q),$ $\Omega_{2n}^{\pm}(q),$ then $G$ and the simple group $\overline G=G/Z(G)$ are not 2-coverable. Moreover the only 2-covering of $Sp_{2n}(q),$ with $q$ even, has components $ O^-_{2n}(q)$ and $O^{+}_{2n}(q) .$ address: - | D. Bubboloni\ Dipartimento di Matematica per le Decisioni, Università degli Studi di Firenze\ Via C. Lombroso 6/17\ I-50134 Firenze, Italy - | M.S.Lucido\ Dipartimento di Matematica ed Informatica\ Università degli Studi di Udine\ Via delle Scienze 206\ I-33100 Udine, Italy - | Th.Weigel\ Dipartimento di Matematica ed Applicazioni\ Università degli Studi Milano-Bicocca\ U5-3067, Via R.Cozzi, 53\ I-20125 Milano, Italy author: - 'D. Bubboloni' - 'M. S. Lucido' - 'Th. Weigel' title: '$2$-coverings of classical groups' --- Introduction {#s:intro} ============ It is well known that a finite group $G$ is never the set-theoretical union of the $G$-conjugates of a proper subgroup. However there are examples of groups which are the set-theoretical union of the $G$-conjugates of two proper subgroups.\ Let $G$ be a group and let $H,\ K$ be proper subgroups of $G.$ If every element of $G$ is $G$-conjugate to an element of $H$, or to an element of $K$ i.e. $$G=\bigcup_{g\in G} H^g \cup \bigcup_{g\in G} K^g,$$ then $\delta=\{H,\ K\}$ is called a [*$2$-covering*]{} of $G$, with [*components*]{} $H,\ K$ and $G$ is said [*$2$-coverable*]{}. For any $H<G,$ we use the notation $[H]$ for $\bigcup_{g\in G} H^g.$ Observe that we can assume $H,\ K$ maximal subgroups of $G.$ In what follows the components of a $2$-covering will be tacitly assumed as maximal subgroups of $G.$ If $H$ is a maximal subgroup of $G,$ we write $H< \cdot\, G.$\ In [@bl], it has been proved that the linear groups $GL_{n}(q),$ $SL_{n}(q)$ and the projective groups $PGL_{n}(q)$, $PSL_{n}(q)$ are $2$-coverable if and only if $2 \le n \le 4$. Here we consider the other classical groups. Our Main Theorem is a collection of the statements in the sections 3-7 (see Propositions \[simplettici\], \[unitari\], \[ortogonali dispari\], \[ortogonali +\], \[ortogonali-\]) and of Remark \[centre\].\ [ **Main Theorem**]{} [*Let $n\geq 5$ and $G$ be any of the groups $SU_n(q)$ with $n\neq 6,$ $Sp_{2n}(q)$ with $q$ odd, $\Omega_{2n}^{+}(q)$, $\Omega_{2n+1}(q),$ $\Omega_{2n}^{-}(q).$ Then $G$ and the simple group $\overline G=G/Z(G)$ are not 2-coverable. If $q$ is even the group $Sp_{2n}(q)$ has only a 2-covering which has components isomorphic to $ O^-_{2n}(q)$ and $O^{+}_{2n}(q) .$*]{}\ R.H. Dye in [@Dye] proved that in fact $\,\{O^-_{2n}(q),\ O^{+}_{2n}(q)\}\,$ is a $2$-covering for $Sp_{2n}(q),$ with $q$ even. Preliminary facts ================= Let $q=p^f$ be a prime power and $n \ge 3$. 
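Before turning to the classical groups themselves, the defining condition $G=\bigcup_{g} H^g \cup \bigcup_{g} K^g$ can be tested by brute force for small permutation groups. The following Python sketch is ours and purely illustrative (it is not part of the paper): it checks whether the point stabiliser $S_3$ and a Sylow $2$-subgroup $D_8$ form a $2$-covering of $S_4$.

```python
from itertools import permutations

def compose(p, q):          # (p*q)(x) = p(q(x)); permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    """Subgroup generated by gens (naive closure under multiplication)."""
    H = {tuple(range(len(gens[0])))} | set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return H
        H |= new

def conj_union(G, H):       # [H] = union of the G-conjugates of H
    return {compose(compose(g, h), inverse(g)) for g in G for h in H}

G = set(permutations(range(4)))                  # S_4
H = {p for p in G if p[3] == 3}                  # point stabiliser, isomorphic to S_3
K = closure([(1, 2, 3, 0), (1, 0, 3, 2)])        # a Sylow 2-subgroup, dihedral of order 8
print(len(K))                                    # 8
print(conj_union(G, H) | conj_union(G, K) == G)  # is {H, K} a 2-covering of S_4?
```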
We consider the following [*classical groups*]{} groups $G$: $$SU_n(q) \ \hbox{ with }\ q=q_0^2,\ \quad Sp_{2n}(q),\quad \Omega_{2n+1}(q)\ \quad \Omega_{2n}^{\pm}(q).$$ The corresponding [*general classical groups*]{} $\tilde G$ are: $$GU_n(q),\ \quad Sp_{2n}(q),\quad O_{2n+1}(q),\ \quad O_{2n}^{\pm}(q).$$ Observe that $\tilde {G}'=G.$\ Let $V$ be the natural ${\mbox{${\mathbb F}_{q^{}}$}}\tilde{ G}$-module endowed with the suitable non degenerate form and put $d=dim_{_{\,{\mbox{${\mathbb F}_{q^{}}$}}}}V.$ Sometimes, when we need to put in evidence the dimension $d$ and the field ${\mbox{${\mathbb F}_{q^{}}$}},$ we will use the notation $G_d(q)$ instead of $G.$ If the action of $\,\langle g \rangle \leq \tilde {G}\,$ decomposes $V$ into the direct sum of irreducible submodules $V_i$ of dimensions $d_i,\ i=1,\dots,k\,$ we shortly say that [*the action of $g\in \tilde {G}$ is of type $d_1\oplus\cdots\oplus d_k.$*]{} Note that $g$ operates irreducibly if and only if its characteristic polynomial is irreducible.\ Since we want to decide when there exist maximal subgroups $H,\ K$ of $G$ such that any element of $G$ belongs to a conjugate of $H$ or $K$ in $G,$ it is natural to adopt the systematic description of the maximal subgroups of the classical groups given by M. Aschbacher in [@a]. There, several families $\mathcal{C}_i,\ i=1,\dots,8$ of subgroups were defined in terms of the geometric properties of their action on the underlying vector space $V$ and the main result states that any maximal subgroup belongs to $\bigcup_{i=1}^{8}\mathcal{C}_i$ or to an additional family $\mathcal{S}.$ For the notation, the structure theorems on these maximal subgroups and other details of our investigation we refer to the book of P. B. Kleidman and M. W. Liebeck ([@kl]): in particular we use the definitions given there of the families $\mathcal{C}_i.$ We consider some special elements to identify the components of a $2$-covering of a classical group. First of all we recall the elements in $\tilde {G}$ or $G$ of maximal order with irreducible action on $V,$ the so called [*Singer cycles*]{}. In Table \[1\] we collect the general classical groups $\tilde{G}$ for which the Singer cycles exist, the order of the Singer cycle in $\tilde G$ and in the classical group $G=\tilde{G}'$ ([@hu]).   $\widetilde{G}$    order order in $ G=\widetilde{G}'$  --------------------- ----------- ------------------------------- $GL_n(q)$ $q^n-1$ $(q^n-1)/(q-1)$ $Sp_{2n}(q)$ $q^n+1$ $q^n+1$ $GU_{n}(q_0^2),$ $q_0^n+1$ $(q_0^n+1)/(q_0+1)$ $O_{2n}^-(q)$ $q^n+1$ $(q^n+1)/(2,q-1)$ : Orders of the Singer cycles[]{data-label="1"} Recall that the Singer cycles are always, up to conjugacy, linear maps $\pi_a:V\rightarrow V$ of $V={\mbox{${\mathbb F}_{q^{d}}$}}$ given by multiplication $\pi_a(v)=av$ in the field by a suitable $a\in {\mbox{${\mathbb F}_{q^{d}}$}}^*.$ To manage expressions of the type $q^a\pm 1,$ it is useful to state also an easy, technical lemma. 
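As a quick arithmetic sanity check on Table \[1\], the following script (ours; it uses the standard order formulas for the general classical groups, which are not restated in the paper) verifies that each listed Singer-cycle order divides the order of the corresponding group.

```python
from math import prod

def gl(n, q):  return q**(n*(n-1)//2) * prod(q**i - 1 for i in range(1, n + 1))
def sp(n, q):  return q**(n*n) * prod(q**(2*i) - 1 for i in range(1, n + 1))        # Sp_{2n}(q)
def gu(n, q0): return q0**(n*(n-1)//2) * prod(q0**i - (-1)**i for i in range(1, n + 1))
def o_minus(n, q):                                                                   # O^-_{2n}(q)
    return 2 * q**(n*(n-1)) * (q**n + 1) * prod(q**(2*i) - 1 for i in range(1, n))

for n, q in [(3, 3), (5, 2), (4, 5)]:
    assert gl(n, q) % (q**n - 1) == 0            # Singer cycle of GL_n(q)
    assert sp(n, q) % (q**n + 1) == 0            # Singer cycle of Sp_{2n}(q)
    assert o_minus(n, q) % (q**n + 1) == 0       # Singer cycle of O^-_{2n}(q)
for n, q0 in [(3, 2), (5, 3)]:                   # GU_n(q_0^2) has Singer cycles for n odd
    assert gu(n, q0) % (q0**n + 1) == 0
print("all Table 1 orders divide the ambient group orders")
```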
\[aritme\] Let $q$ a prime power and $a,\ b,\ n\in \mathbb{N}.$ Then we have the following: - $(q^{a}-1, q^{b}-1)=q^{(a,b)} -1$; - if $a$ is odd then $(q^{a} +1)/(q+1)$ is odd; - $\bigg (\frac{q^{a}-1}{q-1}, q-1\bigg )$ divides $(a, q-1);$ - if $a$ is odd, then $\bigg (\frac{q^{a}+1}{q+1}, q+1\bigg )$ divides $(a, q+1)$ and $(q^a-1,q+1)=(2,q-1).$ - if $a$ is odd and $(a,b)=1$, then $\bigg (\frac{q^{a}+1}{q+1}, q^{b}+1\bigg )$ divides $(a, q+1);$ - if $a$ is odd and $(a,b)=1$, then $\bigg (\frac{q^{a}+1}{q+1}, q^{b}-1\bigg )$ divides $(a, q+1)$; - $(q^n+1,q-1)=(2,q-1).$ If $n$ is even then $(q^n+1,q+1)=(2,q-1).$ The notion of irreducibility of action on the natural module $V$ is connected very strictly to that of primitive prime divisor, which we recall here, for convenience of the reader.\ Let $t\geq 2$ be a natural number. A prime $q_t$ is said to be a [*primitive prime divisor*]{} of $q^t-1$ if $q_t$ divides $q^t-1$ and $q_t$ does not divide $q^i-1$ for any $1\leq i<t.$ It was proved by Zsigmondy in [@zs] that if $t\geq 3$ and the pair $(q,t)$ is not $(2,6),$ then $q^t-1$ has a primitive prime divisor. If $t=2$ a primitive prime divisor exists if and only if $q\neq 2^i-1.$\ Clearly if $q_t$ is a primitive prime divisor of $q^t-1$, then $q$ has order $t$ modulo $q_t$ and thus $t$ divides $q_t-1.$ In particular $q_t \ge t+1.$ Let $P_t(q)$ denote the set of primitive prime divisors $q_t$. \[primitivi\] Let $P_t(q)$ be the set of primitive prime divisors of $q^t-1.$ Then: - for any $k\in {\mathbb{N}},\ P_{kt}(q)\subseteq P_t(q^k).$ In particular if $q=q_0^2,$ then $P_{2t}(q_0)\subseteq P_t(q);$ - if $t\neq s$, then $P_t(q)\cap P_s(q)=\varnothing;$ - $q_a\in P_t(q)$ divides $q^b-1$ if and only if $a$ divides $b;$ if $q_a$ divides $q^b+1,$ then a divides $2b.$ The natural embeddings of classical (or general classical) groups $G$ of dimension $d'$ into the corresponding classical groups of dimension $d>d',$ as well as the embeddings $O_{2n}^{\pm}(q)< O_{2n+1}(q)$ and $O_{2(n-1)}^{\pm}(q)<O_{2n}(q)^{\mp},$ give rise to some interesting elements $\sigma$ which we call $low$-$Singer\ cycles.$ Their action is always reducible and it decomposes $V$ into the direct sum of a trivial submodule and an irreducible submodule $T.$ Observe that the natural component for a low-Singer cycle is a suitable subgroup in $\mathcal{C}_1$ of type orthogonal sum. We call $t=dim_{_{\,{\mbox{${\mathbb F}_{q^{}}$}}}}T$ the [*rank*]{} of $\sigma.$ If $t$ is the highest dimension of an irreducible ${\mbox{${\mathbb F}_{q^{}}$}}G$-submodule of $V,$ we call $\sigma$ a [*low-Singer cycle of maximal rank*]{}. In Theorem 1.1 of [@msw], are determined the maximal subgroups containing a low-Singer cycle of rank $n-1$ in $SL_n(q)$, in $SU_n(q)$ with $n$ even and in $\Omega_{2n+1}(q).$ The well known diagonal embeddings of $GL_n(q)$ into $Sp_{2n}(q),\ GU_{2n}(q),\ O_{2n}^{+}(q)$ brings into those groups, elements of order $q^n-1$ and action $n\oplus n.$ We refer to them as *[linear Singer cycles]{} of *[dimension]{} $2n$.\ We also need to introduce the fundamental facts about the theory of the $ppd(d,q;e)$-$elements$ developed by R. Guralnick, T. Penttila, C. E. Praeger and J. Saxl in [@gpps]. 
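The definition of $P_t(q)$ and Zsigmondy's theorem are easy to experiment with; here is a short stdlib-only sketch (ours), including the two exceptional cases recalled above.

```python
def prime_factors(m):
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def ppd(q, t):
    """P_t(q): primes dividing q^t - 1 but no q^i - 1 with 1 <= i < t."""
    return {p for p in prime_factors(q**t - 1)
            if all((q**i - 1) % p for i in range(1, t))}

print(ppd(2, 6))   # empty: the Zsigmondy exception (q, t) = (2, 6)
print(ppd(7, 2))   # empty: q = 2^3 - 1 is Mersenne and t = 2
for q, t in [(2, 10), (3, 5), (5, 4)]:
    P = ppd(q, t)
    print(q, t, P, all(p % t == 1 for p in P))   # each primitive divisor is 1 mod t
```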
Throughout this paper, that theory will be the main tool in finding the maximal subgroups containing elements with order divisible by certain “large” primes.\ An element of $GL_d(q)$ is called a $ppd(d,q;e)$-$element$ if its order is divisible by some $q_e\in P_e(q)$ with $d/2 < e \le d.$ A subgroup $M$ of $GL_d(q)$ containing a $ppd(d,q;e)$-element is said to be a $ppd(d,q;e)$-$group.$ In [@gpps] the complete list of the $ppd(d,q;e)$-groups is described.\ Clearly, if $M\in \bigcup_{i=1}^{8}\mathcal{C}_i \bigcup \mathcal{S}$ is a $ppd(d,q;e)$-group, then it appears in one of the Examples of the list: in particular if $M\in \mathcal{S},$ then $M$ is described in Examples 2.6-2.9. \[main\] [@gpps Main Theorem] Let $q$ be a prime power and $d$ an integer, $d \ge 2$. Then $M\le GL_d(q)$ is a $ppd(d,q;e)$-group if and only if $M$ is one of the groups in Examples 2.1-2.9. Moreover

-   $M\not \in \mathcal{C}_4\cup \mathcal{C}_7;$
-   if $e \le d-4,$ then $M$ is not one of the groups described in the Examples 2.5, 2.6 b), 2.6 c), 2.7, 2.8, 2.9;
-   if $e=d-3$ and $M$ is one of the groups described in the Examples 2.5, 2.6 b), 2.6 c), then $d$ is odd.

We shall use the theory of the $ppd(d,q;e)$-elements for some special elements.\ Let $n\geq 5.$ If $n \ge 8,$ then by the Bertrand Postulate, there exists a prime $t$ such that $n/2< t \leq n-3.$ If $n=5,\ 6,\ 7$ we consider respectively $t=3,\ 4,\ 5,$ getting $\,n/2 < t \le n-2.$ Moreover if $n\neq 6,$ then $t$ is an odd prime with $(n,t)=1$ and if $n \ge 7,$ then $t \ge 5.$ We call $t$ a [*Bertrand number*]{} for $n$ ([*a Bertrand prime*]{} if $n\neq 6$). Note that if $t$ is a Bertrand prime for $n$, then $\bigg (q^{t}+1, q^{n-t}+(-1)^{n} \bigg )= q+1.$\ Given a Bertrand number we can define, as in Table \[2\], an element $z$ in the classical groups, called a [*Bertrand element*]{}. Then $z$ is a $ppd(d,q;e)$-element, with $d$ and $e$ as described in Table \[2\].

   $G$                          order of $z$                                               $d$       $e$
  ---------------------------- ---------------------------------------------------------- --------- ------
   $SU_n(q_0^2)$, $n \ne 6$     $\frac{(q_0^t +1)(q_0^{n-t} +(-1)^n)}{q_0+1}$              $n$       $t$
   $Sp_{2n}(q)$                 $\frac{(q^t + 1)(q^{n-t} + 1)}{(q^t + 1,\,q^{n-t} + 1)}$   $2n$      $2t$
   $\Omega_{2n+1}(q)$           $\frac{(q^t + 1)(q^{n-t} + 1)}{(q^t + 1,\,q^{n-t} + 1)}$   $2n+1$    $2t$

  : Orders of the Bertrand elements $z$[]{data-label="2"}

Let $G=Sp_{2n}(q),\ \Omega_{2n+1}(q).$ Then $q_{2t}$ divides $|z|$ for any $q_{2t}\in P_{2t}(q),$ hence if $M$ is a maximal subgroup of $G$ containing $z$, as described in Table \[2\], then $M\in \bigcup_{i=1}^{8}\mathcal{C}_i \bigcup \mathcal{S}$ has the $ppd(d,q;e)$-property and we can use the description given in Theorem \[main\] to determine the maximal subgroups containing a Bertrand element.\ Moreover, since $e \le d-4$ we reduce to the Examples 2.1, 2.2, 2.3, 2.4 and 2.6 a).\ We will often be concerned with a particular class of $ppd(d,q;e)$-elements: we say that an element of $GL_d(q)$ is a $strong$ $ppd(d,q;e)$-$element$ if its order is divisible by every $q_e\in P_e(q)$ with $d/2 < e \le d.$ Such elements cannot occur in the class $\mathcal{C}_5,$ in the sense of the following:\ \[no-c5\] Let $G_d(q)$ be a classical group and let $q=\tilde q^r,$ for some prime $r$. If $M<\cdot\ G_d(q)$ with $M\in\mathcal{C}_5$ is of type $G_d(\tilde q),$ then $M$ contains no strong $ppd(d,q;e)$-element.
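For concreteness, the following sketch (ours, not from the paper) computes a Bertrand number $t$ for a given $n$—the paper does not fix a particular choice when several primes qualify, so we take the largest admissible one—and the order of the corresponding Bertrand element in $Sp_{2n}(q)$ from Table \[2\].

```python
from math import gcd

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def bertrand_number(n):
    """t = 3, 4, 5 for n = 5, 6, 7; otherwise the largest prime with n/2 < t <= n - 3."""
    if n in (5, 6, 7):
        return {5: 3, 6: 4, 7: 5}[n]
    for t in range(n - 3, n // 2, -1):
        if is_prime(t):
            return t
    raise ValueError("no Bertrand prime found (cannot happen for n >= 8)")

def bertrand_order_sp(n, q):
    """Order of the Bertrand element z in Sp_{2n}(q), as listed in Table 2."""
    t = bertrand_number(n)
    a, b = q**t + 1, q**(n - t) + 1
    return a * b // gcd(a, b)

for n, q in [(5, 3), (8, 2), (11, 2)]:
    print(n, q, bertrand_number(n), bertrand_order_sp(n, q))
```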
Let $y$ be a strong $ppd(d,q;e)$-element and $y\in M <\cdot\ G_d(q)$ with $M\in\mathcal{C}_5$ of type $G_d(\tilde q)$. Then $d/2 < e \le d$ and $\, q_e$ divides $|y|\,$ for any $q_e \in P_e(q).$ By Remark \[primitivi\] we have $P_{re}(q)\subseteq P_{e}(\tilde q^r)=P_e(q)$ and by the structure of $M$ given in [@kl], we get $(\tilde q)_{re}$ divides $|G_d(\tilde q)|$ for any $(\tilde q)_{re}\in P_{re}(\tilde q),$ which gives $re\leq d$ against $re\geq 2e>d.$ Since we usually work with semisimple elements, we shall use some facts about the maximal tori of the classical groups, in particular their orders and their action on the natural module $V$, which can be easily found in [@GLS3]. We close this section by observing that a $2$-covering for a classical group exists if and only if it exists for the corresponding projective group. \[centre\] i)Let $G$ be a perfect group. If $M<\cdot\ G,$ then $M\geq Z(G).$\ ii) Let $G$ be a classical group. Then $G$ is 2-coverable if and only if $G/Z(G)$ is 2-coverable. i\) Assume $M<\cdot\ G,$ with $G$ perfect and $Z=Z(G)\not\leq M.$ Then we have $G=MZ$ and $M\lhd G$ with $G/M \cong Z/Z\cap M$ abelian. Hence $M\geq G'=G,$ a contradiction.\ ii) Let $G$ be a classical group and $\{H,\ K\}$ be a $2$-covering of $G$ with $H,\ K<\cdot\ G.$ Then $G$ is perfect and i) applies, yielding $H,\ K \geq Z(G).$ Thus, using the bar notation to take quotient modulo $Z(G),$ we observe that $\overline H,\ \overline K\neq \overline G$ are components of a $2$-covering of $\overline G.$ The converse is clear. Finally, when considering the simple groups $S$ in ATLAS [@atlas], we often use the fact that if $M<\cdot\, S$ and $x\in S,$ then $x\in [M],$ if and only if $\chi_M(x)\neq 0.$ Symplectic groups ================= Let $G=Sp_{2n}(q)\ $ with $n\geq 5$ and let $s\in G$ be a Singer cycle. Then $|s|=q^{n}+1 $ and the maximal subgroups of $G$ containing $s$ are known. [@msw Theorem 1.1]\[msw-sp\] Let $G=Sp_{2n}(q)$, $n \ge 5$ and $M$ a maximal subgroup of $G$ containing a Singer cycle. Then, up to conjugacy, one of the following holds: - $M= Sp_{2n/k}(q^k).k$, with $k|n$ a prime; - $q$ is even and $M= O_{2n}^-(q);$ - $nq$ is odd and $M = GU_n(q^2).2.$ In order to find the second component for a 2-covering, we consider a Bertrand element $z\in G$ of order $\frac{(q^{t} +1)(q^{n-t} +1)}{(q^{t} +1, q^{n-t} +1)}.$ Recall that when $n=6,$ by definition, $t=4.$ It is clear that, for $(n,q)\neq (5,2),\ z$ is a strong $ppd(\,2n,\,q;2t)$-element. The group $Sp_{10}(2)$ will be examined separately.\ Observe that for $(n-t,q)\neq (3,2)$ any $q_{2(n-t)}\in P_{2(n-t)}(q)$ divides $|z|;$ when $(n-t,q)=(3,2)$, we will say that $(n,t,q)$ belongs to the $critical\ case.$ \[bertrand-sp\] Let $G=Sp_{2n}(q),$ with $n \ge 5$ and $(n,q)\neq (5,2).$ If $M$ is a maximal subgroup of $G$ containing a Bertrand element then, up to conjugacy, one of the following holds: - $M = Sp_{2t}(q) \bot Sp_{2n-2t}(q)$; - $n$ is even, $q$ is odd and $M = GU_{n}(q^{2}).2;$ - $q$ is even and $M \cong O^{+}_{2n}(q).$ Let $M<\cdot\ G=Sp_{2n}(q),$ containing a Bertrand element $z.$ Since $2t\leq 2n-4,$ by Theorem \[main\] and Remark \[no-c5\], $M$ belongs to one of the classes $\mathcal C_i$, $i=1,2,3,8$ or to $\mathcal S$ as described in Example 2.6 a) of [@gpps]. $\mathcal{C}_1$.    Suppose first that $(n,t,q)$ does not belong to the critical cases. Then $q_{2t} \cdot q_{2(n-t)}$ divides the order of an element of $M$. Suppose that $M$ is of type $P_m$ with $1 \le m \le n$. 
Then, by Proposition 4.1.19 of [@kl], we have $P_m \cong q^a: GL_{m}(q) \times Sp_{2n -2m}(q)$ for some $a \in {\mathbb{N}}$. This implies $n-m \ge t$ and $m \ge 2n -2t$, which gives $n \le t$ against the definition of $t.$ Now suppose that $(n,t,q)$ belongs to the critical cases. Then $n\neq 6$ and we have again $n-m \ge t$, which gives $m \le 3$. But, since $3\mid (2^t+1),$ there is no element of order $|z|=3(2^t+1)$ in $P_{1}$, $P_{2}$ or $P_{3}$. Suppose now that $M$ is of type $Sp_{2m}(q) \bot Sp_{2n-2m}(q)$, with $ 1 \le m < n$ and $(n-t,q) \ne (3,2)$. Then $n-m \ge t$ and $n -t\le m$, which gives $m=n-t$. If $t=n-3$ and $q=2$, we also have $m \le 3.$ On the other hand $|z|=3(2^t+1)$ cannot be the order of an element in $Sp_2(2) \bot S_{2n-2}( 2)$ or in $ Sp_4(2) \bot Sp_{2n-4}( 2)$ and we are left only with $Sp_6(2) \bot Sp_{2t}( 2)$. $\mathcal{C}_2$.    By definition of the $\mathcal{C}_2$ class of the symplectic group, $M\cong Sp_m(q) \wr Sym(2n/m)$, preserves a direct sum decomposition $V=V_1\oplus\cdots\oplus V_k$ where each subspace $V_i$ has even dimension $m.$ On the other hand, $M$ is described in Example 2.3 of [@gpps]: in particular, by Lemma 4.1 in [@gpps], $m=1.$ Hence no case arises. $\mathcal{C}_3$.    If $M \cong Sp_{2n/r}(q^{r}).r$, where $r\mid n$ is a prime, then $q_{2t}$ cannot divide $|M|.$ Namely $$\pi(|M|)=\pi\bigg (p\, r \, \prod_{i=0}^{n/r}(q^{2ri} - 1) \cdot \bigg )$$ and if $q_{2t}$ would divide $q^{ri} \pm 1$, we should have $t \le i\le n/r$ against $t >n/2;$ moreover $q_{2t} > n \ge r$ implies $q_{2t} \ne r$. If $M \cong GU_{n}(q^{2}).2$ , then $z$ belongs to $M$ if and only if $n$ is even. $\mathcal{C}_8$.    Here $M= O_{2n}^{\pm}(q)$, with $q$ even and only $O_{2n}^{+}(q)$ contains an element of order divisible by $q_{2t}\,q_{2(n-t)}.$ $ \mathcal{S}.$    In the Example 2.6 a) of [@gpps] we found $M \le Sym(m) \times Z(G)$ with $m\in\{ 2n+1,\ 2n+2\}$ and $q_{2t}=2t+1.$ Assume $n \ge 7,$ then $t \ge 5$ and first suppose that $(n-t,q)=(3,2)$. Then there exists in $M$ a cyclic subgroup of order $2_{2t}\,9 $, against the fact that $2t +1 +9= 2n +4 > m$. Next let $(n-t, q) \ne (3,2)$: then there exists a primitive prime divisor $q_{2(n -t)}$ of $q^{2(n-t)} -1$ and $q_{2(n -t)} \ge 2n -2t +1.$ We observe that if $(t,q) \ne (5,2)$, then $(q^t +1)/(q+1) > 2t +1=q_{2t}.$ Since, by Lemma \[aritme\], $(q^t +1)/(q+1)$ is odd, there exists an odd prime $r$ such that $r \cdot q_{2t}$ divides $(q^t +1)/(q+1).$ Moreover, by Lemma \[aritme\], ${(q^{n-t} +1, \frac{q^t +1}{q+1})}$ divides $(t, q+1)$ which together with $n-t\geq 2,$ gives $r\neq q_{2(n-t)}.$ Thus if $r \ne q_{2t}$, we have that $r \cdot q_{2t} \cdot q_{2(n -t)}$ divides $|z|$ which requires an element of order the product of these three primes in $Sym(m),$ and therefore $m\geq r + q_{2t} + q_{2(n -t)} \ge 3 + 2n -2t +1 + 2t+1 = 2n+5,$ against $m\leq 2n+2.$ If $r=q_{2t}$ we have an element of order $q_{2(n-t)}\,q_{2t}^2$ in $Sym(m),$ which implies the impossible relation $m \geq q_{2(n-t)}+q_{2t}^2> 2n+2.$ In the case $(t,q)=(5,2),$ we have $n\in \{7,\ 8,\ 9\}$, but it is easily checked that the corresponding $Sym(m)$ do not contain elements of order $|z|$. Now assume $n=5$ and $q\ne 2$. Then $t=3,$ $q_{2t}=7$, and $m=11$ or $12$. 
Let $\sigma$ be an element of $M$ of order 7: then $(q^2 +1)/(2,q-1)$ divides $|C_M(\sigma)|,$ hence $q_4$ divides the order of $C_{Sym(m)\times Z(G)}(\sigma)\cong Sym(m-7) \times C_7 \times Z(G).$ Since $7\neq q_4\geq 5 ,$ we get $m=12,\ q_4=5.$ Thus we would have an element $z$ of order divisible by $35$ in $Sym(12),$ which implies the impossible relation $|z|=\frac{(q^3+1)(q^2+1)} {(2,q-1)}=35.$\ Finally observe that if $n=6,$ then $t=4$ and no case arises since $2t+1=9$ is not a prime. \[$Sp_10(2)$\] If $\{H,\ K\}$ is a 2-covering of $Sp_{10}(2),$ then $$H= O^-_{10}(2)\ \ \ and \ \ \ K= O^{+}_{10}(2).$$ Let $\{H,\ K\}$ be a 2-covering of $G=Sp_{10}(2),$ with $H$ containing a Singer cycle of order 33. By Lemma \[msw-sp\], we get $$H\in \{Sp_2(2^5).5,\ O^{-}_{10}(2)\}.$$ Let $y\in G$ of order $17\cdot 3$ and action of type $8\oplus 2.$ Then $y$ is a $ppd(\,d,q;e\,)$-element for $d=10,\ q=2,\ e=8.$ Observe that $e=d-2,\ q_8=2e+1=17.$ The maximal subgroups $M$ of $G$ containing $y$ belong to one of the classes $\mathcal C_i$, $i=1,\,2,\,3,\,6,\,8$ or to $\mathcal S$ and are described closely in the Examples of [@gpps]. Since $y$ has no eigenvalue, the only $M\in \mathcal C_1,$ is the natural $M=Sp_2(2)\bot Sp_8(2);$ no case arises in classes $\mathcal C_2,\ \mathcal C_6,$ since the corresponding Examples 2.3 and 2.5 in [@gpps] require $q_e=e+1;$ we get no case also in $\mathcal C_3$ by arithmetical reasons and finally in $\mathcal C_8,$ due to the action of $y,$ we get only $M=O^{+}_{10}(2).$ On the other hand the examination of the Examples 2.6-2.9 in [@gpps] for the class $\mathcal S,$ easily show that there is no possibility for $M$ in $\mathcal S.$ Thus we reach $$K\in\{Sp_2(2)\bot Sp_8(2),\ O^{+}_{10}(2)\}.$$ Let $z\in G$ be a Bertrand element of order 45 and observe that its action is of type $6\oplus 4.$ The only subgroup among our candidates $H$ and $K$ which contain $z$ is $O^{+}_{10}(2).$ Moreover $O^{+}_{10}(2)$ and $Sp_2(2^5).5$ cannot constitute the components of a 2-covering, since $G$ contains elements of order 35 which none of them contains. Thus we are left with the only possibility $H= O^-_{10}(2)$ and $ K= O^{+}_{10}(2). $ In [@Dye],R. H. Dye showed that in even characteristic the symplectic group admit always a 2-covering:\ [@Dye]\[Dye\] The group $Sp_{2n}(2^f)$ is 2-coverable by $$H= O^-_{2n}(q)\quad\hbox{and }\quad K= O^{+}_{2n}(q).$$ \[simplettici\] Let $G=Sp_{2n}( q))$, $n\ge 5$. - If $q$ is odd, then $G$ is not 2-coverable; - if $q$ is even, then the only 2-covering $\{H,K \}$ of $G$ is given by $$H= O^-_{2n}(q)\ \ \ and \ \ \ K= O^{+}_{2n}(q). \ \ $$ By Remark \[$Sp_10(2)$\] and Theorem \[Dye\], the result is true if $(n,q)= (5,2).$ Let $n \ge 5$ with $(n,q)\neq (5,2)$ and $\{H,\ K \}$ be a 2-covering of $G=Sp_{2n}( q)$. We can assume that $H$ contains a Singer cycle and therefore it is described in Lemma \[msw-sp\]. On the other hand, the maximal subgroups of $G$ containing a Bertrand element are described in Lemma \[bertrand-sp\] and, since there is no overlap between these two lists, we can assume $K$ as described there. Thus we have two choices for $H$ and two choices for $K$ both if $q$ is odd and if $q$ is even. Let $y\in G$ of order $\frac{(q^{n-1} +1)(q+1)}{(q^{n-1} +1,q+1)}$ and action of type $2(n-1)\oplus 2.$\ Suppose first that $q$ is odd. Then $|y|$ does not divide $|H|$. Moreover $|y|$ does not divide $|Sp_{2t}(q) \bot Sp_{2n-2t}(q)|.$ Hence if $n$ is odd we have finished. 
If $n$ is even, the only possibility to contain $y$ is given by the choice $K= GU_{n}(q^{2}).2.$ But it is easily observed that an element of order $q^{n-1} -1$ is not contained neither in $H=Sp(n/k,q^k).k$ nor in $K=GU_{n}(q^{2})$. Thus we get no 2-covering when $q$ is odd.\ Now suppose that $q$ is even. Then $K =O_{2n}^+(q),$ since no other candidate component can contain elements with the order and action of $y.$ Thus we have two possible coverings given by $$\delta_1=\{H=O_{2n}^-(q),\ \ K =O_{2n}^+(q)\}$$ and $$\delta_2=\{H=Sp_{2n/k}(q^k).k,\ \ K =O_{2n}^+(q)\},$$ where $k|n$ is a prime. Finally we see that $\delta_2$ is never a 2-covering. We first suppose that $n$ is odd. Then $(q^{2} +1,q^{n-2}-1)=1$, since $n-2$ is odd and $q$ is even. Then there exists an element $u\in G$ of order $(q^{2} +1)(q^{n-2}-1)$ and $q_{n-2}$ divides $|u|$. Since $n-2$ divides $q_{n-2}-1$ we get $q_{n-2}> n \ge k,$ hence $q_{n-2}$ does not divide $|Sp_{2n/k}(q^k).k|.$ Moreover $u$ cannot belong to $O_{2n}^+(q).$ Now suppose that $n$ is even. Then $(q^{n-1} -1,q +1)=1,$ since $q$ is even and $n-1$ is odd. Then there exists $u\in G$ of order $(q^{n-1}-1)(q +1)$ and $q_{n-1}$ divides $|u|$. But $q_{n-1}>n$ does not divide $|Sp_{2n/k}(q^k).k|$ and there is no element of order $|u|$ in $O_{2n}^+(q).$ Unitary groups ============== Let $G=SU_{n}(q)\ $ with $n\geq 5,\ n\neq 6$ and $q=q_0^2.$ Let $s\in G$ be a Singer cycle of order $(q_0^{n}+1)/(q_0+1)$ if $n$ is odd and a low-Singer cycle of maximal rank of order $q_0^{n-1}+1$ if $n$ is even. [@msw Theorem 1.1]\[malleu\] Let $G=SU_n(q)$, $n \ge 5,\ n\neq 6.$ If $M$ is a maximal subgroup of $G$ containing $s$, then one of the following holds: - $n$ is odd, with $(n,q_0)\neq (5,2)$ and $$M = N_G(GU_{n/k} (q^k))\cong SU_{n/k}(q^k).\,\frac{q_0^k+1}{q_0+1}\,.\,k,$$ with $\ k|n$ prime; - $n$ is even and $M = GU_{n-1}(q);$ - $(M,G)= (PSL_2(11),\,SU_5(4)).$ \[$SU_5(4)$\] The group $SU_5(4)$ is not $2$-coverable. We use [@atlas] for the description of the conjugacy classes of elements in $G$ and for the maximal subgroups in $G.$ Let $\{H,K\}$ be a 2-covering of $G$ and observe that $G$ contains one conjugacy class of elements of order 8 and three conjugacy class of elements of order 9. By Lemma \[malleu\], we can assume $H=PSL_2(11)$ and since this group does not contain elements of order 8 or 9, $K$ must contain, up to conjugacy, all of them. But since the character $\chi_{_{P_1}}$ vanishes on the class 9C, we have $9C\notin[P_1].$ All the other maximal subgroups in $G$ do not contain elements of order 8. From now on we will always assume that $q_0\neq 2$ when $n=5.$\ Let $t$ be a Bertrand prime for $n$ and consider a Bertrand element $z\in SU_n(q)$ as in Table \[2\]. Then $|z|=\frac{(q_0^t +1)(q_0^{n-t} +(-1)^n)}{q_0+1}$ and the action on $V$ is of type $t\oplus\frac{(n-t)}{2}\oplus \frac{(n-t)}{2}$ if $n$ is odd and of type $t\oplus (n-t)$ if $n$ is even. 
For the unitary groups we have either $e \le d-3$, or $(d,e)\in \{(5,3),\ (7,5)\}.$ Hence Theorem \[main\] applies and we search $M$ among the groups in the Examples 2.1-2.9, for $n\geq 5.$ Recall that $t\geq 5$ for $n\neq 5$ and note that, for any $(q_0)_{2t}\in P_{2t}(q_0),$ we have $(q_0)_{2t} \mid |z|.$ Moreover if $n$ is even, for any $(q_0)_{2(n-t)}\in P_{2(n-t)}(q_0)$ we have $(q_0)_{2(n-t)} \mid |z|,$ while if $n$ is odd for any $(q_0)_{(n-t)}\in P_{(n-t)}(q_0)$ we have $(q_0)_{(n-t)}\mid |z|.$\ Note also that, due to the exceptions to Zsigmondy theorem, for $n$ odd with $n-t=6,\ q_0=2,$ the set $P_{(n-t)}(q_0)$ is empty and also when $n$ is even and $n-t=3,\ q_0=2,$ the set $P_{2(n-t)}(q_0)$ is empty. We refer to these two situations saying that $(n,t,q_0)$ belongs to the [*the critical cases*]{}.\ \[uni-z\] Let $G=SU_n(q)$ $n\ge 5$, $n \ne 6$. If $M$ is a maximal subgroup of $G$ containing a Bertrand element, then one of the following holds: - $M= SU_t(q)\bot GU_{n-t}(q);$ - $n$ is odd and $M =P_{(n-t)/2};$ - $(n,q_0) = (7,2)$, and $M = GU_{n-1}(q)$. Let $M$ be a maximal subgroup of $SU_n(q)$ containing the Bertrand element $z$. Then, by Remark \[primitivi\], $M\in \bigcup_{i=1}^{7}\mathcal{C}_i \bigcup \mathcal{S}$ has the $ppd(n,q;t)$-property, with $n \ge 8,$ $n/2 < t \le n-3$, or $(n,t)\in \{(5,3),\ (7,5)\}.$ Thus Theorem \[main\] applies and we search $M$ among the groups in the Examples 2.1-2.9, for $n\geq 5.$ But obviously $q_t \ne t+1$ and $(t,n)=1.$ Moreover, looking at the Tables 2-8 of [@gpps], it is easily checked that $q_t \neq 2 t+1$ since $t\leq n-2.$ These facts rules out the groups of the examples 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, because they are all given under at least one of the conditions: $t> n-3$ and $n\neq 5,\ 7,$ or $t=n-3$ even, or $q_t = t+1,$ or $q_t=2t+1.$ Hence we reduce to $M\in \mathcal{C}_1$ or $M \in \mathcal{C}_5.$ Let $M\in \mathcal{C}_1.$ If $M \cong SU_m(q) \bot \ GU_{n-m}(q))$, with $m<n/2,$ the condition $z \in M$, implies that $(q_0)_{2t}$ divides the order of a maximal torus either of $SU_m(q)$ or of $GU_{n-m}(q),$ hence $(q_0)_{2t}$ divides $$\alpha=\prod_{i=1}^k(q_0^{s_i} -(-1)^{s_{i}}) \prod_{i=1}^l(q_0^{r_i} -(-1)^{r_{i}}),$$ with $\sum_{i=1}^{k}s_{i}=m\ $ and $\ \sum_{i=1}^{l}r_{i}=n-m.$ Since $m<n/2<t,$ we must have $n-m\geq t.$ For $n=5,\ 7$ this produces $m=1$ or $m=2.$ On the other hand $GU_4(q)$ does not contain an element of order $(q_0^3+1)(q_0-1),$ if $q_0 \neq 2$ and $U(6,q)$ contains an element of order $(q_0^5+1)(q_0-1),$ only if $q_0 =2$. Then, with the exception of $SU_7(4),$ we get again $m=2=n-t.$ For $n>7,$ we suppose that $(n,t,q_0)$ do not belong to the critical cases. Hence if $m<n-t,$ the condition $(q_0)_{(n-t)}|\alpha$ for $n$ odd or $(q_0)_{2(n-t)}|\alpha$ for $n$ even couldn’t be fulfilled. Then we are left with $m=n-t.$ We now suppose that $(n,t,q_0)$ belongs to the critical cases. Let first $n$ be odd, $n-t=6$, $q_0=2$. Then $m \le 6$ and $|z|= (2^t +1) \cdot 21$. But $7 \cdot 2_{2t}$ divide the order of an element in $SU_m(4)\bot GU_{n-m}(4)$ if and only if $m=6=n-t$. If $n$ is even, $n-t=3$, $q_0=2$, then $m \le 3$ and $|z|= (2^t +1)\cdot 3$. 
Observe that if $m=1,2$ there is no element of order $|z|$ in $GU_{n-m}(q)\bot SU_m(q).$ Thus again $m=3=n-t.$ If $$M=P_m \cong q_0^{n(2n-3m)}\ :\ \frac{1}{q_0+1}(GL_m(q)\times GU_{n-2m}(q)),$$ we have that $$|M|_{q_0'}= (q_0^2-1) \prod_{i=2}^m(q^{i} -1)\ \prod_{i=2}^{n-2m}( q_0^{i} -(-1)^i)$$ and $m\leq n/2<t.$ By Lemma \[primitivi\], $(q_0)_{2t}$ is also a primitive prime divisor $q_t$ for $q^t-1$ and divides $|z|.$ Since $q_t$ does not divide $\prod_{i=2}^m(q^{i} -1)$, we need $(q_0)_{2t}$ to divide $(q_0^i-(-1)^i)$ for some $i\leq n-2m$, which gives $m\leq (n-t)/2.$ Now suppose that $M$ is of type $P_m$ and that $(n,t,q_0)$ does not belong to the critical cases. Let $m\le (n-t)/2$ and $n$ even; then $(q_0)_{2t}\,(q_0)_{2(n-t)}$ divides $|z|$ and there exists an element with order divisible by $(q_0)_{2t}\,(q_0)_{2(n-t)}$ in $GL_m(q)\times SU_{n-2m}(q).$ The possible order of such an element is a divisor of $$\beta=\prod_{i=1}^k(q^{r_i} -1) \prod_{j=1}^l(q_0^{s_j} -(-1)^{s_{j}}) ,$$ with $\sum_{i=1}^{k}r_{i}=m\ $ and $\ \sum_{j=1}^{l}s_{j}=n-2m\ (\geq t).$ Since $m<t$ the prime $(q_0)_{2t}=q_t$ does not divide $\prod_{i=1}^k(q^{r_i} -1)$ and we must have, say, $t\mid s_1$ and $\ \sum_{j=2}^{l}s_{j}\leq n-2m-t<n-t.$ Then $(q_0)_{2(n-t)}$ does not divide $\prod_{j=1}^l(q_0^{s_j} -(-1)^{s_{j}}) $ and we need $(q_0)_{2(n-t)}=q_{n-t}$ to divide $q^{r_i}-1,$ hence the contradiction $n-t\leq r_i\leq m\leq (n-t)/2.$\ Let $m \le (n-t)/2$ and $n$ odd. If $n=5,\ 7$ we are done. If $n\geq 9,$ then there exist primitive prime divisors $(q_0)_{2t},\ (q_0)_{(n-t)}$ and we have that $(q_0)_{2t}\,(q_0)_{(n-t)}$ divides $$\beta=\prod_{i=1}^k(q^{r_i} -1) \prod_{j=1}^l(q_0^{s_j} -(-1)^{s_{j}}),$$ with $\sum_{i=1}^{k}r_{i}=m\ $ and $\ \sum_{j=1}^{l}s_{j}=n-2m\ (\geq t).$ Hence $t\mid s_1$ and $\ \sum_{j=2}^{l}s_{j}\leq n-2m-t<n-t,$ which implies that $(q_0)_{(n-t)}=q_{(n-t)/2}$ divides $\prod_{i=1}^k(q^{r_i} -1)$ and then $(n-t)/2\leq m.$ Thus $m=(n-t)/2$ and $M=P_{(n-t)/2}.$ Now suppose that $(n,t,q_0)$ belongs to the critical cases. If $n$ is even, $n-t=3$ and $q_0=2,$ then $m \le (n-t)/2 =3/2$ gives the only case $m=1.$ On the other hand the action of type $t\oplus 3$ of $z$ on $V$ is not compatible with $z\in P_1.$ If $n$ is odd, $n-t=6$, $q_0=2$. Then $m \le 3$ and $|z|= (2^t +1)\cdot 21$. Then $7 \neq 2_{2t},$ since $2_{2t}\geq 2t+1\geq 11$, and $7\cdot 2_{2t}$ divides the order of an element of $P_m\cong q_0^{\frac{(n-m)(n+m-1)}{2}}\ :\ GL_m(q)\times SU_{n-2m}(q)$ if and only if $m=3=(n-t)/2.$\ Next let $M\in \mathcal{C}_5.$ If $M=N_G(GU_{n}(\tilde{q}^2))$ with $\tilde{q}^r=q_0$ and $r$ an odd prime, then arguing as in Remark \[no-c5\], we conclude that $M$ is not of this type. If $M$ is of type $SO_n^{\epsilon}(q_0)$, with $q_0$ odd, or of type $Sp_{n}(q_0)$, with $n$ even, we have $$\pi(|M|)\subseteq \pi(q_0 \prod_{i=1}^{[n/2]}(q^{i} -1))$$ and the condition that $(q_0)_{2t}=q_t$ divides $|M|$ gives $t\leq i\leq n/2,$ a contradiction. \[unitari\] Let $G= SU_{n}(q)$, $n\ge 5,$ with $n\neq 6.$ Then $G$ is not 2-coverable. Let $n \ge 5,$ with $n\neq 6$ and assume, by contradiction, that $\{H,K \}$ is a 2-covering of $G$. Let first $n$ be even. Then, by Lemma \[malleu\], $H \cong GU_{n-1}(q).$ Moreover, by Lemma \[uni-z\], we have $K=SU_t( q) \bot GU_{n-t}(q).$ We consider an element $y\in G$ of order $\frac{q_0^n-1}{q_0+1}$.
Then $(q_0)_n$ divides $|y|.$ If $n/2$ is even, then $(q_0)_{n}$ divides neither $|H|$ nor $|SU_t( q) \bot GU_{n-t}(q)|.$ If $n/2$ is odd, then there exists a primitive prime divisor $(q_0)_{n/2}$ and $(q_0)_{n/2}$ divides $|y|,$ while $(q_0)_{n/2}$ divides neither $|H|$ nor $|SU_t( q)\bot GU_{n-t}(q)|.$ We now suppose that $n$ is odd. Then $H= SU_{n/k}(q^k).\,\frac{q_0^k+1}{q_0+1}\,.k,$ with $\ k|n$ a prime. Let first $n \ge 9;$ then $K=SU_t( q) \bot GU_{n-t}( q)$ or $K=P_{(n-t)/2}.$ We consider a low-Singer cycle $y\in G$ of order $q_0^{n-2}+1$. Then $ (q_0)_{2(n-2)}$ divides the order of $y$, but it divides neither $|H|$ nor $|K|$.\ If $n=5,$ by Lemma \[$SU_5(4)$\], we can assume $q_0\neq 2$ and pick $y\in G$ with $|y|=q_0^4-1.$ Assume $y\in H\cong\frac{q_0^5+1}{q_0+1}\,. \,5;$ then $(q_0)_4=5,$ which gives that $\frac {q_0^4-1}{5}$ divides $(\frac{q_0^5+1}{q_0+1},q_0^4-1)$, which in turn divides $5$, against $q_0\neq 2.$ It is also clear that $(q_0)_4$ does not divide $|P_1|,\ |SU_3(q)\bot GU_2(q)|.$ If $n=7$ and $q_0\neq 2,$ we consider $y\in G$ of order $\frac{(q_0^4 -1)(q_0^3+1)}{q_0+1}$. Then $ (q_0)_4 \cdot (q_0)_6$ divides $|y|.$ Since $(q_0)_4\neq 7,$ it does not divide $\frac{q_0^7+1}{q_0+1}\,. \,7;$ on the other hand $(q_0)_4 \cdot (q_0)_6$ does not divide $|SU_5(q) \bot GU_2( q)|.$ Moreover $ P_1$ contains no element of order divisible by $(q_0)_4 \cdot (q_0)_6.$ If $n=7$, $q_0=2$, then $|y|=45$ does not divide $|H|=43\cdot 7.$ It is also clear that no element of order 45 belongs to $SU_5(4)\bot GU_2(4)$ or to $ P_1\cong 2^{21}:C_3\times SU_5(4)$ or to $GU_6(4)$.

Orthogonal groups in odd dimension
==================================

Let $G=\Omega_{2n+1}(q)$ with $n\geq 5$ and consider the maximal subgroups of $G$ containing a low-Singer cycle $x$ of maximal rank. Then $x$ has order $(q^{n}+1)/2$ and action of type $2n \oplus 1$, and the maximal subgroups containing $x$ are known. Recall that $q$ is automatically odd. [@msw Theorem 1.1]\[msw-odd\] Let $G=\Omega_{2n+1}(q)$, $n \ge 5$ and $x\in G$ be an element of order $(q^{n}+1)/2.$ If $M$ is a maximal subgroup of $G$ containing $x$ then, up to conjugacy, $M$ is isomorphic to $\Omega_{2n}^-(q).2.$ \[bertrand-odd\] Let $G=\Omega_{2n+1}(q)$, $n \ge 5$ and $M$ a maximal subgroup of $G$ containing a Bertrand element. Then, up to conjugacy, one of the following holds: - $M=\Omega_{2n+1}(q)\cap(O_{2t+1}(q)\, \bot\, O^{-}_{2(n-t)}(q))$; - $M=\Omega_{2n+1}(q)\cap(O_{2(n-t)+1}(q) \,\bot\, O^{-}_{2t}(q));$ - $M= \Omega_{2n+1}(q)\cap O_{2n}^+(q).$ Let $G=\Omega_{2n+1}(q)$, $n \ge 5$ and $M$ a maximal subgroup of $G$ containing a Bertrand element $z.$ Observe that, since $q$ is odd and $2n-2t \ge 4$, there exist primitive prime divisors $q_{2t}$ and $q_{2(n-t)}$, and $q_{2t}\cdot q_{2(n-t)}$ divides $|z|$ for any $q_{2t}\in P_{2t}(q)$ and for any $q_{2(n-t)}\in P_{2(n-t)}(q).$ In particular $z$ is a strong $ppd(d,q;e)$-element, where $d=2n+1$ and $e=2t.$ By Theorem \[main\], Remark \[no-c5\] and by Table 3.5.D in [@kl], $M$ belongs to one of the classes $\mathcal C_i$, $i=1,2,3$ or to $\mathcal{S}$ and is described in Example 2.6 a) of [@gpps]. $\mathcal{C}_1$.    
Suppose that $M$ is of type $P_m$ and that $q_{2t} \cdot q_{2(n-t)}$ divides the order of an element in $M.$ Then, by Proposition 4.1.20 of [@kl], we have $m \ge 2n -2t$ and $n-m \ge t$, which gives $n \le t$, against the definition of $t.$\ Suppose now that $M=\Omega_{2n+1}(q)\cap (O_{2k+1}(q) \bot O^{\epsilon}_{2(n-k)}( q))$ and refer to Proposition 4.1.6 in [@kl] for its structure. If $k=0$ we must choose $\epsilon=+1$. If $1 \le k \le n-1,$ then we must have either $2(n-k) \ge 2t$ and $n-t \le k$, which gives $k=n-t$, or $t \le k$ and $n-t\le n-k$, that is $k=t$. On the other hand it is clear that both these choices work if and only if we select the minus sign. $\mathcal{C}_2$.   These are the groups in Example 2.3 of [@gpps]. Thus we have $q_{2t}=2t +1 $ and $M \le GL_1(q)\, \wr\, Sym(2n+1)$. To guarantee that $q_{2t}\cdot q_{2n-2t}$ divides the order of an element in $Sym(2n+1)$, we need $2n+1 \ge q_{2t} + q_{2n-2t} \ge 2t +1 +2n -2t +1=2n +2$, a contradiction. $\mathcal{C}_3$.    These groups are described in Example 2.4. Since $d\neq e+1$ we consider only the case b) of Example 2.4. Let $b>1$ be a divisor of $(2n+1, 2t).$ Then $n\neq 6$ and $b=t=(2n+1)/3.$ Thus, by Proposition 4.3.17 in [@kl], $M=\Omega_3( q^t).\,t$ has order $\frac{q^{2t}(q^{2t}-1)\,t}{2}$ and the condition $q_{2(n-t)}$ divides $|M|$ implies that $q_{2(n-t)}=t.$ Thus $t$ does not divide $|\Omega_3( q^t)|$ and the only elements of order $t$ in $M$ are the field automorphisms $\alpha.$ However, $q_{2t} \not | \ |C_{\Omega_3( q^t)}(\alpha)| = |\Omega_3( q)|$ and we get no examples in this class. $\mathcal{S}$.    The maximal subgroups $M$ in Example 2.6 a) satisfy $M \le Sym(m),$ with $m= 2n +2$, if $p$ does not divide $m$, or $m=2n+3$ if $p$ divides $m$. Moreover $q_{2t}= 2t +1$, and $ q_{2t} \le m$. Let first $n \ge 7.$ Then $t \ge 5$ and $(q^t +1)/(q+1) > 2t +1=q_e.$ Then arguing as in the symplectic case in Lemma \[bertrand-sp\], we get an odd prime $r\neq q_{2(n-t)}$ such that $r \cdot q_e \cdot q_{2n -2t}$ divides $|z|,$ which requires the impossible relation $m>2n+3.$\ If $n=5$, then $q_6=7$ and $m=12$ or $13$ and again we can use the same argument as in the symplectic case. If $n=6$ no case arises since $2t+1=9$ is not a prime. \[ortogonali dispari\] Let $G=\Omega_{2n+1}(q)$, $n\ge 5$. Then $G$ is not 2-coverable. Let $n \ge 5$ and assume, by contradiction, that $\{H,K \}$ is a 2-covering of $G=\Omega_{2n+1}(q)$ with maximal components. Then, by Lemma \[msw-odd\], $H=(\Omega_{2n}^-(q)).2$ and $K$ is given in Lemma \[bertrand-odd\]. If $$K\in \{\Omega_{2n+1}(q)\cap(O_{2t+1}(q)\, \bot\, O^{-}_{2(n-t)}(q)), \Omega_{2n+1}(q)\cap(O_{2(n-t)+1}(q) \,\bot\, O^{-}_{2t}(q))\}$$ we consider an element $y\in G$ of order $q^n -1:$ then $y$ is contained neither in $H$ nor in $K.$ If $K=\Omega_{2n+1}(q)\cap O_{2n}^+(q),$ we observe that neither $H$ nor $K$ contains regular unipotent elements.

Orthogonal groups with Witt defect $0$
======================================

Let $G=\Omega^{+}_{2n}(q),$ with $n\geq 5.$ First of all, to control the action of some crucial elements in $G$ we need the following Lemma. \[action\] The generator of $\Omega_2^-(q)\cong C_{\frac{q+1}{(2,q-1)}}$ operates irreducibly on the natural module $V$ if and only if $q\neq 3.$ Let $\Omega_2^-(q)=<x>.$ If $q$ is even, then $x$ is a Singer cycle of $SL_2(q)$ and operates irreducibly.
If $q$ is odd, we observe that $x=\pi_{u^{2(q-1)}},$ where $<u>={\mbox{${\mathbb F}_{q^{2}}$}}^*$ and, by Lemma 2.4 in [@bl], the minimal polynomial $m(x)$ is irreducible of degree $r\mid 2,$ minimal with respect to $\frac{q^2-1}{q^r-1}\mid 2(q-1).$ If $r=1,$ we obtain $q+1\mid 2(q-1),$ which implies that 2 is the only prime divisor of $q+1,$ that is $q=2^i-1$ and hence $q=3.$ In this case the action on $V$ decomposes it into two submodules of dimension 1. If $r=2$ clearly the action of $x$ is irreducible. Now observe that, from the embedding in $G=\Omega^{+}_{2n}(q)$ of $$\Omega^-_{2m}(q)\bot \Omega^-_{2(n-m)}(q),$$ with $1\leq m< n/2\,$ we derive an element $\xi\in G$ of order $$\,\frac{(q^{m}+1)(q^{n-m}+1)}{(q^{m}+1,q^{n-m}+1)(2,q-1)}.$$ If $(m,q)\neq (1,3)$ the action of $\xi$ is of type $\,2m\oplus \,2(n-m)$ and otherwise, by Lemma \[action\],it is of type $1\oplus 1 \oplus 2(n-1).$ [@msw Theorem 1.1]\[msw+\] Let $G=\Omega^+_{2n}(q)$, $n \ge 5.$ Let $x\in G$ of order $\,\frac{(q^{n-1}+1)(q+1)}{(q^{n-1}+1,q+1)(2,q-1)}\,$ and action of type $2(n-1)\,\oplus\, 2$ if $q\neq 3$ and action of type $2(n-1)\,\oplus\, 1\, \oplus 1$ if $q=3.$\ If $M$ is a maximal subgroup of $G$ containing $x$ then, up to conjugacy, one of the following holds: - $M=\Omega^+_{2n}(q) \cap(O^-_2( q)\bot\, O^-_{2(n-1)}(q));$ - $q=3$ and $M= \Omega_{2n-1}(3).2;$ - $n$ is even and $M=\Omega^+_{2n}(q) \cap (GU_n(q^2).2); $ - $nq$ is odd and $M=\Omega_{n}(q^2).2;$ We emphasize that the maximal subgroups of $G$, containing $x$ are obtained as a sublist of those containing a low-Singer cycle $\tilde{S}$ of order $\,\frac{(q^{n-1}+1)}{(2,q-1)}\,$ in [@msw Theorem 1.1]. We now look for the second component of a 2-covering. \[max-y\] Let $G=\Omega^{+}_{2n}(q),$ $n \ge 5$ and $y\in G$ of order $\frac{(q^{n-2} +1)(q^2+1)}{( q^{n-2}+1,\,q^2 +1)(2,q-1)}$ and action of type $2(n-2)\oplus 4.$ If $M$ is a maximal subgroup of $G$ containing $y$ then, up to conjugacy, one of the following holds: - $M=\Omega^+_{2n}(q) \cap(O^{-}_4(q) \bot\, O^{-}_{2(n -2)} (q))$; - $n$ is even and $M=\Omega^+_n( q^2).[4].$ Let $G=\Omega^{+}_{2n}(q)$, $n \ge 5$ and $M$ a maximal subgroup of $G$ containing $y,$ where $y\in G$ has order $\frac{(q^{n-2} +1)(q^2+1)}{( q^{n-2}+1,q^2 +1)(2,q-1)}$ and action of type $2(n-2)\oplus 4.$\ If $(n,q)= (5,2),$ the inspection in [@atlas] shows that the only maximal subgroups of $\Omega^{+}_{10}(2)$ containing an element of order 45 are conjugate to $\Omega^+_{10}(2) \cap (O^{-}_4(2) \bot \,O^{-}_{6} (2)).$\ If $(n,q)\neq (5,2),$ then $y$ is a strong $ppd(d,q;e)$-element, for $d=2n$ and $e=2(n-2).$ By Theorem \[main\] $M$ belongs to one of the classes $\mathcal C_i$, $i=1,\,2,\,3,\ 5$ or to $\mathcal{S}$ and is described in Example 2.6 a) of [@gpps]. $\mathcal{C}_1$.    Suppose that $M$ is of type $P_m$. Then, by Proposition 4.1.20 of [@kl], the condition $q_{2(n-2)}\mid |P_m|$ forces $m=1.$ On the other hand the characteristic polynomial of $y$ is the product of two irreducible factors of degree $4,\, 2(n-2);$ thus $y$ has no eigenvalues and it cannot belong to a conjugate of $P_1.$\ Assume now $M=\Omega^+_{2n}(q) \cap(O^{\epsilon}_m(q) \bot O^{\epsilon}_{2n-m}(q))$, with $ 1 \le m < n$ . Then $2n-m \ge 2n-4,$ hence $m \le 4.$ Since $2(n-2)>4,$ the action of $y$ is compatible only with the choice $m=4,\ \epsilon=-.$\ Finally if $M=Sp_{2(n-1)}( 2^f),$ then it is the stabilizer of a non-degenerate subspace of dimension 1 and again the action of $y$ excludes this opportunity. $\mathcal{C}_2$.    
The maximal groups $M\in \mathcal{C}_2$ are described in Example 2.3 of [@gpps]. Thus $M \le GL_1(q) \wr Sym(2n)$ and $q_e= e +1= 2n -3$. Observe that $5\leq q_4\neq q_e$ and $(q_e q_4, q-1)=1$: hence $q_e \cdot q_4$ is the order of an element in $Sym(2n)$, which implies $2n\geq q_e+q_4\geq 2n+2,$ a contradiction. $\mathcal{C}_3$.     If $M$ is of type $GU_n( q^2)$, $n$ even, then $n-2$ is even and $q^{n-2} +1$ cannot divide $|M|$. If $M$ is of type $O_n( q^2)$, $n$ odd, then $$\pi(|M|)=\pi\bigg(p \cdot \prod_{i=0}^{(n-1)/2}(q^{4i} - 1)\bigg )$$ and the condition that $q_{e}$ divides $|M|$ implies $n-2 \mid 2i$, that is, $n-2$ divides $i \le (n-1)/2$, which is impossible since $n \ge 5$. We are left only with $M=\Omega^+_{n}(q^2).[4]$, $n$ even. $\mathcal{C}_5$.    By Remark \[no-c5\], the only case to consider is $M$ of type $O ^-_{2n}( q_0)$, $q=q_0^2.$ Then $$\pi(|M|) \subseteq \pi\bigg(p \cdot \prod_{i=0}^{n}(q^{i} -1) \bigg),$$ and the condition that $q_e$ divides $|M|$ implies $2(n-2)\le i\le n$, against $n\geq 5.$ $\mathcal{S}$.    The subgroup $M$ is such that $Alt(m)\le M \le Sym(m) \times Z(G),$ with $m= 2n +1$, if $p$ does not divide $m$, or $m=2n+2$ if $p$ divides $m$. Moreover $q_e= 2n -3 \ge m-5.$ Let $\sigma\in M$ be a power of $y$ of order $q_e.$ Since $|Z(G)|=(4,q^n-1),$ it follows that we can assume that $\sigma\in Sym(m)$ is a $q_e$-cycle and $\frac{(q^2 +1)q_e}{(2,q-1)}$ is the order of an element in $C_{Sym(m)}(\sigma)\times Z(G)= \langle \sigma \rangle \times Sym(m-q_e)\times C_{(4,q^n-1)} \leq \langle \sigma \rangle \times Sym(5)\times C_{4}.$ Thus $(q^2 +1)/(2,q-1)$ is the order of an element in $Sym(5)\times C_{4},$ and thus $q=2, 3.$\ Let $q=3.$ Then, to fulfill the condition that $2n-3$ is a prime dividing $3^{n-2}+1,$ we need $n\geq 8.$ Moreover $\frac{|y|}{(2n-3)}$ is the order of an element in $Sym(5)\times C_{4}$, hence $\frac{(3^{n-2}+1)5}{(2n-3)(10,3^{n-2}+1)}\leq 24,$ that is $5(3^{n-2}+1)\leq 24(2n-3)(10,3^{n-2}+1),$ which gives no solution for $n\geq 8.$\ Let $q=2.$ If $n=5,$ then $2n-3=7$ does not divide $2^3+1;$ if $n=6,\ 9$ then $2n-3$ is not a prime. If $n=7$ then $11=2n-3$ divides $2^5+1$ but $|y|/11=15$ is not the order of an element in $Sym(5)\times C_4.$ If $n=8$ then $2n-3=13$ divides $2^6+1=65=|y|$ and $m=18.$ But it is easily observed that $Alt(18)\not\leq \Omega^+_{16}(2):$ namely in $Alt(18)$ there is an element of order $11 \cdot 7$ but there is no element of such an order in $\Omega^+_{16}(2).$ Observe also that if $n=10,$ the condition $2_{16}=17$ is not realized. Finally, for $n\geq 11,$ no case arises since $\frac{(2^{n-2}+1)5}{(2n-3)(5,2^{n-2}+1)}\leq 24$ cannot hold. \[ortogonali +\] Let $G=\Omega^+_{2n}(q)$, $n\ge 5$. Then $G$ is not 2-coverable. Let $G=\Omega^+_{2n}(q),$ $n \ge 5$ and suppose, by contradiction, that $\{H,K \}$ is a 2-covering of $G$. We can assume that $H$ is described in Lemma \[msw+\] and that $K$ is described in Lemma \[max-y\].\ We first assume $n$ odd. Then $$H\in\{\Omega^+_{2n}(q) \cap(O^-_2( q)\bot\, O^-_{2(n-1)}(q)),\ \Omega_{n}(q^2).2 \}$$ or $q=3$ and $H=\Omega_{2n-1}(3).2,$ while $$K=\Omega^+_{2n}(q) \cap(O^-_4( q)\bot\, O^-_{2(n-2)}(q)).$$ Let $g\in G$ with $|g|=\frac{q^n-1}{(2,q-1)}.$ Then $|g|$ divides neither the order of $H$ nor the order of $K.$\ Next let $n$ be even.
Then $$H\in\{\Omega^+_{2n}(q) \cap(O^-_2( q)\bot\, O^-_{2(n-1)}(q)),\ \Omega^+_{2n}(q) \cap (GU_n(q^2).2)\}$$ or $q=3$ and $H=\Omega_{2n-1}(3).2,$ while $$K\in\{ \Omega^+_{2n}(q) \cap(O^{-}_4(q) \bot\, O^{-}_{2(n -2)} (q)),\ \Omega^+_n( q^2).[4]\}.$$ Let $g\in G$ with $|g|=\frac{q^{n-1}-1}{(2,q-1)}.$ Note that $q_{n-1}$ does not divide the order of any candidate $H$ or $K,$ with the exception of $q=3$ and $H=\Omega_{2n-1}(3).2.$\ So we can assume $n$ even, $q=3,$ $H=\Omega_{2n-1}(3).2 $ and $$K\in\{ \Omega^+_{2n}(3) \cap(O^{-}_4(3) \bot\, O^{-}_{2(n -2)} (3)),\ \Omega^+_n(9).[4]\}.$$ Pick in $G$ an element of order $\frac{3^{n/2}+1}{2}$ and action of type $n\, \oplus\, n,$ which forces $K$ to be $\Omega^+_n(9).[4]$, and finally observe that neither $H=\Omega_{2n-1}(3).2$ nor $K=\Omega^+_n(9).[4]$ contains regular unipotent elements.

Orthogonal groups with Witt defect 1
====================================

Let $G =\Omega^{-}_{2n}(q),\ n\geq 5$ and let $s\in G$ be a Singer cycle. Then $|s|= \frac{q^{n}+1}{(2,q-1)}$ and the maximal subgroups of $G$ containing $s$ are known. [@msw Theorem 1.1]\[msw-\] If $M<\cdot\ \Omega^-_{2n}(q),\ n\geq 5$ contains a Singer cycle then, up to conjugacy, one of the following holds: - $M= \Omega^-_{2n/r}(q^r).r,$ where $r$ is a prime divisor of $n;$ - $n$ is odd and $M= \Omega^-_{2n}(q)\cap (GU_{n} (q^2).2).$ We now look for the second component of a 2-covering. \[max-y-\] Let $G=\Omega^{-}_{2n}(q)$, $n \ge 5.$ Let us consider, for any $n \ge 5,$ an element $y\in G$ with $$|y|=\frac{(q^{n-1} +1)(q-1)}{(2,\,q-1)^2}$$ and for any $n$ even, an element $z\in \Omega^{-}_{2n}(q)$ with $$|z|=\frac{(q^{n-1} -1)(q+1)}{(2,\,q-1)^2},$$ action of type $(n-1)\oplus (n-1)\oplus 2$ if $q\neq 3$ and action of type $(n-1)\oplus (n-1)\oplus 1\oplus 1$ if $q=3.$\ If $M<\cdot\ G$, up to conjugacy, contains $y$ and $z,$ then $n$ is odd and one of the following holds: - $M= P_1;$ - $q\geq 4,$ and $M=\Omega_{2n}^-(q)\cap (O_2^+(q)\bot O_{2n-2}^-(q));$ - $q=2,$ and $M=Sp_{2(n-1)}(2);$ - $q$ is odd and $M=\Omega_n( q^2).2;$ or - $q=3,5$ and $M=\Omega_{2n}^-(q)\cap (O_1(q)\bot O_{2n-1}(q)).$ Let $G=\Omega^{-}_{2n}(q),\ n \ge 5.$ Let $y\in G$ be an element of order $ {\displaystyle \frac{(q^{n-1} +1)(q-1)}{(2,\,q-1)^2}}$ and for any $n$ even, $z\in \Omega^{-}_{2n}(q)$ be an element with $|z|={\displaystyle\frac{(q^{n-1} -1)(q+1)}{(2,\,q-1)^2}}$ and action of type $(n-1)\oplus (n-1)\oplus 2$ if $q\neq 3$ and action of type $(n-1)\oplus (n-1)\oplus 1\oplus 1$ if $q=3.$ Then for $d=2n$ and $e=2(n-1)$, $\,y\,$ is a strong $ppd(d,q;e)$-element of $GL_d(q).$\ By Theorem \[main\], Remark \[no-c5\] and Table 3.5.F in [@kl], $M$ belongs to one of the classes $\,\mathcal C_i\,$, $i=1,2,3$ or to $\mathcal S$ and it is described in Examples 2.6-2.9 of [@gpps]. $\mathcal{C}_1$.    If $M$ is of type $P_m,$ then, by Proposition 4.1.20 of [@kl], we have $n-m \ge n-1$, that is $m =1.$ The choice $M=P_1$ is excluded when $n$ is even, since $|z|$ does not divide $|P_1|.$\ Let $M=Sp_{2(n-1)}(q),$ with $q$ even. Since $Sp_{2(n-1)}(q)$ does not contain a semisimple element of order a proper multiple of $q^{n-1}+1,$ we get $q=2.$ But if $n$ is even, then $|z|$ is not the order of an element in $Sp_{2(n-1)}(2).$\ Let $M$ be of type $O_m^\epsilon(q) \bot O_{2n-m}^{-\epsilon}(q),$ with $1\leq m\leq n,$ $\,\epsilon \in \{+,\ -,\ \circ\}$ and $q$ odd when $m$ is odd.
By the structure of $M$ given in Proposition 4.1.6 of [@kl], we deduce that $q_e\,\mid \,|O_m^\epsilon(q) \bot O_{2n-m}^{-\epsilon}(q)|,$ hence $m=2$ and $\epsilon=+$ or $m=1$ and $\epsilon=\circ,$ $q$ odd. This gives two possible structures for $M.$\ The first, related to $m=2$, is $$M=\Omega_{2n}^-(q)\cap(O_2^+(q)\bot O_{2n-2}^-(q)),$$ with $q\not \in\{2,\ 3\}$ it is the natural component for $y.$ When $n$ is even, this subgroup cannot contain $z.$\ By Table $3.5.$ H in [@kl], for $q=2$ we get again $M=Sp_{2(n-1)}(2)$ and for $q=3$ we have $M=\Omega_{2n}^-(3)\cap(O_1(3)\bot O_{2n-1}(3)),$ which contain a conjugate of $z.$\ The second structure for $M$, related to $m=1,$ is $$M=\Omega_{2n-1}(q).c$$ with $c\mid 2$ and $q\neq 3$ odd. The group $\Omega_{2n-1}(q)$ does not contain semisimple elements with order properly divisible by $\frac{q^{n-1} +1}{2},$ hence we get the condition $\frac{q-1}{2}\leq 2,$ which leaves us with $q= 5.$ $\mathcal{C}_2$.   Let $M\in \mathcal{C}_2.$ Then $M$ is described in Example 2.3 in [@gpps], $q_e= e +1= 2n -1$ and, by Proposition 4.2.15 in [@kl], $q\equiv 3(mod\ 4)$ is a prime, $n$ is odd and $M\leq 2^{2n}.Sym(2n).$ Thus we get $q^{n-1}+1\equiv 3^{n-1}+1(mod\ 4)\equiv 2(mod\ 4).$ On the other hand $(q^{n-1}+1)/2>2n-1;$ then we obtain an odd prime $r$ with $r\,q_e\mid |y|$ and an element of order $r\,q_e$ in $Sym(2n).$ Therefore $2n\geq r+q_e\geq 2n+2,$ a contradiction. $\mathcal{C}_3$.    If $M$ is of type $GU_n( q^2)$, $n$ odd, then $q_{n-1} $ does not divide $|M|$. If $M=\Omega_n(q^2).2$, with $qn$ odd, then there is an element of the required order. If $M=\Omega_{2n/r}^-(q^r).r$, with $r$ a prime dividing $n$, then $q_e\neq r$ does not divide $|M|$. $\mathcal{S}$.   We examine the Examples 2.6-2.9 in which $\mathcal{S}$ decomposes. Observe that we have $e=d-2$ and that, since $d\geq 10,$ we exclude Examples 2.6 [*(b), (c)*]{}. [*Example 2.6(a): S an Alternating Group.*]{}\ Here we have $ Alt(m)\le M \le Sym(m) \times Z(G),$ with $\,m-1= 2n\,,$ if $\,p\,$ does not divide $\,m\,$ or $\,m-2=2n\,$ if $\,p\,$ divides $\,m\,$. Moreover $q_e= e+1= 2n - 1\geq m-3$ which gives also $q_e \ge 7.$\ Let $\sigma$ be an element of $Sym(m)$ of order $q_e$. Then $\sigma$ is a $q_e-$cycle and $|y|$ divides the order of $|C_{ Sym(m) \times Z(G)}(\sigma)|$, which implies $q^{n-1}+1$ divides $ 24(2n-1).$ The condition $2n - 1$ prime gives $n\geq 6.$\ Let $n=6$ and $q=2;$ then $m\in \{13,14\}$ and $|y|=33$ requires $m=14,$ which leaves us with the case $ Alt(14)\le M \le Sym(14).$ But then $\Omega_{12}^-(2)$ would be a minimal module for $Alt(14),$ against the fact that $Alt(14)$ fixes no quadratic form on its minimal module (see Proposition 5.3.7 in [@kl]).\ If $n\geq 7$ and $q=2$ or if $n\geq 6$ and $q>2,$ the condition $q^{n-1}+1$ divides $ 24(2n-1)$ cannot hold. [*Example 2.7: $S$ a sporadic simple group.*]{}\ Recall that the centralizer of an element of a sporadic group can be easily checked in [@atlas]. There are five cases with $e=d-2$ and $d\geq 8$ even in Table 5. Observe that in the first column of that Table, we read $M'=C.S,$ where $C$ embeds in $Z(G).$ In particular $|C|$ divides $4.$ This reduces our analysis only to three cases. Observe also that $M\leq C.Aut(S).$\ If $n=6$ and $M'=2.M_{12}$, then $q_e=11$ and since the centralizer in $2.Aut(M_{12})\geq M$ of an element of order 11 has order $22$, this implies $\frac{(q^5 +1)(q-1)}{(2,q-1)^2} \le 22$, a contradiction.\ If $n=10$ and $M'=J_1=M$, $q_e=19$ and $|C_S(g)| =19$, for any $g$ of order 19 in $S$. 
This implies $\frac{(q^9 +1)(q-1)}{(2,q-1)^2} \le 19$, a contradiction.\ If $n=12$ and $M'= 2.Co_1$, $q_e=23$, then $|C_{2.Co_1}(g)|= 46$, for any $g$ element of order 23 in $S$ and we get $q^{11} +1 \le 46\cdot 2$, a contradiction. [*Example 2.8: S a simple group of Lie type in characteristic $p.$*]{}\ No case arises. [*Example 2.9: S a simple group of Lie type in characteristic different from $p.$*]{}\ In Table 7, we have the examples with $n=7$, $q_e=13,$ and $\frac{q^6 +1}{(2,q-1)}$ dividing either $8 \cdot 13$ or $12 \cdot 13,$ which never happens. We also have the case $n=9$, $q_e=17$ and $\frac{q^8 +1}{2}$ dividing $17\cdot 4$, which never happens. In Table 8, we have to consider the cases in which $e=d-2$, that is $M/Z(M)$ is isomorphic to a subgroup of $Aut(S)$ containing $S$ and either $S\cong PSp_{2a}(3)$, for some odd prime $a$, or $S \cong PSL_2(s),$ for some $s \ge 7$.\ In the first case $d=2n = (3^a +1)/2$, $e= 3(3^{a-1} -1)/2= 2(n-1)$, which implies $3$ divides $n-1.$ The order of $y$ must divide the order of the centralizer in $M$ of an element of order $q_e=(3^a -1)/2$ and clearly $M\cong b.PSp_{2a}(3).c$ where $b,\ c\mid 2.$ Since $(3^a -1)/2$ is the order of a maximal cyclic torus in $PSp_{2a}(3),$ this implies $|y|$ divides $4 \cdot q_e$ hence $\frac{q^{(n-1)/3} +1}{(2,q-1)}$ divides 4, thus $q^2\leq 7,$ which gives $q=2,$ but $2^{(n-1)/3}+1$ does not divide 4. If $S \cong PSL_2(s),$ $s \ge 7$, we examine the various subcases. If $2n=s=2^c$, with $c$ prime then $q_e= s-1$, and $M$ embeds in $SL_2(s).c,$ where the cyclic extension of order $c$ is conjugate to a field automorphism. Now observe that, since $q_e\neq c,$ an element $g$ of order $q_e$ in $SL_2(s).c$ is a cyclic torus in $SL_2(s)$ and that a field automorphism does not centralize it. Thus we get $|C_M(g)|\leq q_e$. This implies that $|y|=q_e$. But $n-1$ is odd, and therefore $q^{n-1} +1$ is divisible by $(q+1) \cdot q_e$, a contradiction.\ If $d=s+1= 2n,$ $s$ an odd prime and $q_e=s= e+1= 2n-1$ we have $M\leq SL_2(s).2.$ Let $g$ be an element of order $s$ in $M.$ Then $g^2$ is an element of order $s$ in $M$ and $|C_{M}(g)|=|C_{M}(g^2)|$ divides $4s.$ Thus we obtain the condition $\frac{(q^{n-1}+1)(q-1)}{(2,q-1)^2}$ divides $4(2n-1)$, which has no solution. The only case still to examine is $d=(s+1)/2= 2n$, $q_e=(s-1)/2=2n-1$, $s=r^f,\ r$ odd. Observe that here $n\geq 6$ and $M\leq SL_2(s).2f.$ Let $g\in M$ of order $q_e;$ since $q_e$ cannot divide $f,$ then $g^{2f}$ has order $q_e=(s-1)/2$ and belongs to $SL_2(s).$ Due to the partition of $PSL_2(s),$ it is clear that $g^{2f}$ is conjugate to the diagonal matrix $D$ with diagonal entries $\lambda^2,\ \lambda^{-2},$ where $<\lambda>=\mathbf F_s^*$ and that $|C_{SL_2(s)}(g^{2f})|=s-1.$ Moreover, a field automorphism does not centralize $D,$ hence $\,|C_M(g)|=|C_M(g^{2f})|$ divides $2|C_{SL_2(s)}(g^{2f})|=2(s-1)=4(2n-1).$ This implies, as before, the impossible relation $|y|$ divides $4(2n-1).$ \[ortogonali-\] Let $G=\Omega^-_{2n}(q)$, $n\ge 5$. Then $G$ is not 2-coverable. Let $\delta=\{H,K \}$ be a 2-covering of $G=\Omega^-_{2n}(q).$ We can assume $H$ as described in Lemma \[msw-\] and $K$ as described in Lemma \[max-y-\]. When $q=3, 5$ we observe that $K=\Omega_{2n}^-(q)\cap (O_1(q)\bot O_{2n-1}(q))$ does not contain regular unipotent elements and that no candidate $H$ can contain them. 
Thus, by Lemma \[max-y-\], we reduce our attention to $n$ odd and $K\neq \Omega_{2n}^-(q)\cap (O_1(q)\bot O_{2n-1}(q)).$ Let $u$ be an element of order $\,\frac{(q^{n-2} -1)(q^2 +1)}{(q^{n-2}-1, q^2+1)(2,q-1)}\,$ belonging to $\,\Omega_4^-(q)\bot \Omega_{2(n-2)}^+(q)<G$ and with action of type $\,4\oplus (n-2)\oplus (n-2)\, .$ Then $u$ has no eigenvalues and its order is divisible by $q_{n-2}$. But the matrices in any candidate $K,$ except $\Omega_n( q^2).2,$ admit an eigenvalue and $q_{n-2}$ does not divide the order of $\Omega_n( q^2).2,\ GU_{n}(q^2).2,\ \Omega^-_{2n/r}(q^r).r.$ [99]{} M. Aschbacher, [*On the maximal subgroups of the finite classical groups*]{}, Invent. Math. **76**(1984), 469-514. D. Bubboloni, M. S. Lucido, [*Coverings of linear groups*]{}, Comm. Algebra, n. [**30 (5)**]{} (2002), 2143-2159. D. Bubboloni, M. S. Lucido and T. Weigel, [*Generic 2-coverings of finite groups of Lie-type*]{}, Rend. Sem. Mat. Padova, vol. **115** (2006), 209-252. J. Conway, R. Curtis, S. Norton, R. Parker and R. Wilson, Atlas of finite Groups, Clarendon Press, Oxford, 1985. R. H. Dye, [*Interrelations of Symplectic and Orthogonal Groups in Characteristic Two*]{}, J. Algebra **59** (1979), 202-221. D. Gorenstein, R. Lyons, R. Solomon. The classification of the finite simple groups, Number 3. Amer. Math. Soc. Surveys and Monographs [**40**]{}, 3 (1998). R. Guralnick, T. Penttila, C. E. Praeger and J. Saxl, [*Linear groups with orders having certain large prime divisors*]{}, Proc. London Math. Soc. (3), n.[**78 (1)**]{} (1999), 167-214. B. Huppert, [*Singer-Zyklen in Klassischen Gruppen*]{}, Math.Z. [**117**]{}(1970), 141-150. P. B. Kleidman and M. W. Liebeck, The subgroup structure of the finite classical groups, London Math. Soc. Lecture Notes [**129**]{}, Cambridge University Press, 1990. G. Malle, J. Saxl and T. Weigel, [*Generation of classical groups,*]{} Geometriae Dedicata [**49**]{}(1994), 85-116. K. Zsigmondy, [*Zur Theorie der Potenzreste*]{}, Monathsh. Fur Math. u. Phys.**3** (1892), 265-284.
\ <span style="font-variant:small-caps;">[W[ł]{}adys[ł]{}aw A. Majewski]{}\ Institute of Theoretical Physics and Astrophysics\ Gda[ń]{}sk University\ Wita Stwosza 57\ 80-952 Gda[ń]{}sk, Poland</span>\ *E-mail address:* `[email protected]`\ <span style="font-variant:small-caps;">Abstract.</span> We present a discussion on local quantum correlations and their relations with entanglement. We prove that a vanishing coefficient of quantum correlations implies separability. The new results on locally decomposable maps which we obtain in the course of the proof also seem to be of independent interest. *Mathematics Subject Classification:* Primary 46L53, 46L60; Secondary 46L45, 46L30. *Key words and phrases:* $C^*$-algebra, positive maps, separable states, quantum correlations.

Introduction
============

Let ${{\mathcal A}}_1$ and ${{\mathcal A}}_2$ be $C^*$-algebras. For simplicity, we assume that either ${{\mathcal A}}_1$ or ${{\mathcal A}}_2$ is a nuclear $C^*$-algebra. This assumption is not particularly restrictive as most $C^*$-algebras associated with physical systems have this property. Moreover, the assumption leads to a unique construction of the $C^*$-tensor product of ${{\mathcal A}}_1$ and ${{\mathcal A}}_2$. Let ${{\mathcal A}}= {{\mathcal A}}_1 \otimes {{\mathcal A}}_2$. We write ${{\mathcal S}}({{\mathcal A}})$ (${{\mathcal S}}({{\mathcal A}}_1)$, ${{\mathcal S}}({{\mathcal A}}_2)$) for the set of all states on ${{\mathcal A}}_1 \otimes {{\mathcal A}}_2 \equiv {{\mathcal A}}$ (${{\mathcal A}}_1$, ${{\mathcal A}}_2$). We define, for a state $\omega$ in ${{\mathcal S}}({{\mathcal A}})$, the restriction maps: $$(r_1 \omega)(A) \equiv \omega(A \otimes {\bf 1}),$$ where $A \in {{\mathcal A}}_1$ and $$(r_2 \omega)(B) \equiv \omega({\bf 1} \otimes B),$$ where $B \in {{\mathcal A}}_2$. Obviously, $r_i \omega$ is a state in ${{\mathcal S}}({{\mathcal A}}_i)$, where $i=1,2$. Next, take a measure $\mu$ on ${{\mathcal S}}({{\mathcal A}})$. Using the restriction maps one can define measures $\mu_i$ on ${{\mathcal S}}({{\mathcal A}}_i)$ in the following way: for a Borel subset $F_i \subset {{\mathcal S}}({{\mathcal A}}_i)$ we put $$\mu_i(F_i) = \mu(r_i^{-1}(F_i)),$$ where $i=1,2$. Having the measures $\mu_1$ and $\mu_2$, both originating from the given measure $\mu$ on ${{\mathcal S}}({{\mathcal A}})$, one can define a new measure $\boxtimes \mu$ on ${{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2)$ which encodes the classical correlations between the two subsystems described by ${{\mathcal A}}_1$ and ${{\mathcal A}}_2$, respectively (see [@M]). We first define $\boxtimes \mu$ for discrete measures $\mu^d = \sum_i \lambda^d_i \delta_{\rho^d_i}$ with $\lambda^d_i \ge 0$, $\sum_i \lambda^d_i =1$, $\rho^d_i \in {{\mathcal S}}({{\mathcal A}})$. $\delta_{\sigma}$ stands for the Dirac measure. We introduce $\mu^d_1 = \sum_i \lambda^d_i \delta_{r_1\rho^d_i}$ and $\mu^d_{2} = \sum_i \lambda^d_i \delta_{r_2\rho^d_i}$. Define $$\label{gwiazdka2} \boxtimes \mu^d = \sum_i \lambda^d_i \delta_{r_1 \rho^d_i} \times \delta_{r_2 \rho^d_i}.$$ Next, let us take an arbitrary measure $\mu$ in $M_{\phi}({{\mathcal S}})$. Here, $M_{\phi}({{\mathcal S}}) = \{ \mu: \phi = \int_{{{\mathcal S}}}\nu d\mu(\nu)\}$; i.e., the set of all Radon probability measures on ${{\mathcal S}}({{\mathcal A}})$ with the fixed barycenter $\phi$. For the measure $\mu$, there exists a net of discrete measures $\mu_k$ such that $\mu_k \to \mu$ ($^*$-weakly).
Defining $\mu^k_1$ ($\mu_{2}^k$) analogously to $\mu_1$ ($\mu_{2}$, respectively), one has $\mu^k_1 \to \mu_1$ and $\mu^k_{2} \to \mu_{2}$, where the convergence is taken in the $^*$-weak topology. Then define, for each $k$, $\boxtimes \mu^k$ as in (\[gwiazdka2\]). One can verify that $\{ \boxtimes \mu^k \}_k$ is convergent to a measure on ${{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2)$, so taking the weak limit we arrive at the measure $\boxtimes \mu$ on ${{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2)$. It follows easily that $\boxtimes \mu$ does not depend on the chosen approximation procedure. The measure $\boxtimes \mu$ leads to the concept of the degree of local (quantum) correlations for $\phi \in {{\mathcal S}}({{\mathcal A}}), a_1 \in {{\mathcal A}}_1, a_2 \in {{\mathcal A}}_2$, which is defined as $$\begin{aligned} d(\phi, a_1, a_2)& = & \inf_{\mu \in M_{\phi}({{\mathcal S}}({{\mathcal A}}))} |\phi(a_1 \otimes a_2) \nonumber \\ && - (\int \xi d(\boxtimes \mu)(\xi))(a_1 \otimes a_2)|. \nonumber\end{aligned}$$ Recently, we have studied relations between the coefficient of quantum correlations and entanglement (cf [@M]). R. Werner has kindly pointed out that the proof of the statement saying that [*$d(\phi; a,b)=0$ for all $a \in {{\mathcal A}}_1$, $b \in {{\mathcal A}}_2$ and a state $\phi$ on ${{\mathcal A}}$ implies separability of $\phi$*]{} contains a gap (see Proposition 5.3 in [@M]). The aim of this letter is to give the proof of the properly amended statement (Theorem 4.3, Section 4). To this end we also give a generalization of St[ø]{}rmer's theory of locally decomposable maps (see Section 3) which seems to be of independent interest. All definitions and notations used here are taken from [@M].

Local separability 1.
=====================

Assume $d(\phi; a,b) = 0$ for all $a \in {{\mathcal A}}_1$, $b \in {{\mathcal A}}_2$ and for a state $\phi$ on ${{\mathcal A}}$. Then, as $\mu \mapsto (\int \xi d(\boxtimes \mu)(\xi))(a_1 \otimes a_2)$ is $^*$-weakly continuous, there exists a measure $\mu \in M_{\phi}({{\mathcal S}})$ (Radon probability measures on ${{\mathcal S}}({{\mathcal A}}_1 \otimes {{\mathcal A}}_2)$ with barycenter $\phi$) such that $$\label{a} \phi(a \otimes b) = \int_{{{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2)} \xi(a \otimes b)\, d(\boxtimes \mu)(\xi).$$ Using the Riemann approximation property of the classical measure one has $$\label{b} \phi(a \otimes b) = \lim \sum_i \lambda_i(a,b)\xi^{(1)}_i(a) \xi^{(2)}_i(b),$$ where $\lambda_i(a,b)$ are non-negative numbers, depending on $a$ and $b$, $\sum_i \lambda_i(a,b) = 1$, and the states $\xi_i^{(1)}$ ($\xi_i^{(2)}$) are defined on ${{\mathcal A}}_1$ (on ${{\mathcal A}}_2$ respectively) and depend on the chosen element $a \otimes b$. Let a state $\phi$ on ${{\mathcal A}}_1 \otimes {{\mathcal A}}_2$ have a representation of the form (\[a\]) with the measure $\mu$ depending on the chosen element $a \otimes b$. Such a state will be called locally separable. In other words, one can say that if the coefficient of quantum correlations for a state $\phi$ vanishes on $a \otimes b$ then the state $\phi$ is locally separable. Now we wish to examine the property of local separability. Let us begin with a particular case: assume that $a$ is a normal element of ${{\mathcal A}}_1$ while $b$ is an arbitrary one in ${{\mathcal A}}_2$. Let $\phi \in {{\mathcal S}}({{\mathcal A}}_1 \otimes {{\mathcal A}}_2)$.
We observe that $$\phi(a \otimes b) = \phi|_{{{\mathcal A}}_1^0 \otimes {{\mathcal A}}_2^0} (a \otimes b),$$ where $\phi|_{{{\mathcal A}}_1^0 \otimes {{\mathcal A}}_2^0}$ is the restriction of $\phi$ to the subalgebra ${{\mathcal A}}_1^0 \otimes {{\mathcal A}}_2^0 \subset {{\mathcal A}}_1 \otimes {{\mathcal A}}_2$. Here, ${{\mathcal A}}_1^0 $ is the abelian $C^*$-algebra generated by $a$ and $\bf 1$ ($a$ was normal!) while ${{\mathcal A}}_2^0$ is the algebra, in general non-commutative, generated by $b$ and $\bf 1$. But in such a case, each state in ${{\mathcal S}}({{\mathcal A}}_1^0 \otimes {{\mathcal A}}_2^0)$ is a separable one. Moreover, $\phi$ has a decomposition depending on $a$ and $b$. However, we wish to stress: the assumption of normality for $a$ was crucial. Namely, taking an arbitrary $a$ and $b$, the condition of vanishing of the coefficient $d$ implies the uniformity of the decomposition with respect to the hermitian and antihermitian parts of $a$ in $a \otimes b$. In that context it is worth adding that by genuine separability we understand a decomposition of type (\[a\]) which is uniform with respect to the elements of the algebra ${{\mathcal A}}$. To show that $d(\phi, \cdot) = 0$ can imply separability, we will use another property of entangled states. Namely, one of the intriguing features of non-separable states is their complicated behaviour under transformations by positive maps. To be more precise, one is interested in the inspection of the functional $\phi \circ (\alpha \otimes id_2) (\cdot)$, where $\phi$ is a state on ${{\mathcal A}}= {{\mathcal A}}_1 \otimes {{\mathcal A}}_2$, $\alpha: {{\mathcal A}}_1 \to {{\mathcal A}}_1$ is a linear, unital, positive map, while $id_2$ is the identity map on ${{\mathcal A}}_2$. To proceed with answering this question we need a description of locally decomposable maps and a modification of the definition of the coefficient of quantum correlations, which will be given in the next sections.

Locally decomposable maps
=========================

This section is a fairly straightforward generalization of the St[ø]{}rmer concept of local decomposability; see Definition 7.1 as well as Lemma 7.2 and Theorem 7.4 in [@S]. Let $\alpha$ be a linear positive map of a $C^*$-algebra ${{\mathcal A}}$ into ${{\mathcal B}}({{\mathcal H}})$, ${{\mathcal H}}$ being a Hilbert space. The map $\alpha$ is locally decomposable if for each normal state $\phi(\cdot) \equiv Tr\varrho(\cdot)$ on ${{\mathcal B}}({{\mathcal H}})$ there exist a Hilbert space ${{\mathcal H}}_{\varrho}$, a linear map $V_{\varrho}$ of ${{\mathcal H}}_{\varrho}$ into ${{\mathcal H}}_0 = <{{\mathcal B}}({{\mathcal H}}) {{\varrho}^{1/2}}>^{cl}$ with the property that $||V_{{\varrho}}|| \le M$ for all ${\varrho}$, and a $C^*$-homomorphism $\pi_{{\varrho}}$ of ${{\mathcal A}}$ into ${{\mathcal B}}({{\mathcal H}}_{{\varrho}})$ such that $$V_{{\varrho}} \pi_{{\varrho}}(a) V^*_{{\varrho}} {{\varrho}^{1/2}}= \alpha(a) {{\varrho}^{1/2}},$$ for all $a \in {{\mathcal A}}$. We will need Let ${{\mathcal A}}$ be a $C^*$-algebra, ${{\mathcal H}}$ a Hilbert space, and ${\alpha}$ a positive unital linear map of ${{\mathcal A}}$ into ${{\mathcal B}}({{\mathcal H}})$.
If ${\varrho}$ is a density matrix on ${{\mathcal H}}$ defining a normal state $\phi$ on ${{\mathcal B}}({{\mathcal H}})$ then there is a $^*$-representation $\pi$ of ${{\mathcal A}}$ as $C^*$-algebra on a Hilbert space ${{\mathcal H}}_{\pi}$, a vector $\Omega_{\pi} \in {{\mathcal H}}_{\pi}$ cyclic under $\pi({{\mathcal A}})$, and a bounded linear map $V$ of the set $\{ \pi(a) \Omega{_\pi}; a \in {{\mathcal A}}, a = a^*\}^{cl}$ into ${{\mathcal H}}_{{\varrho}} = <{\alpha}(a){{\varrho}^{1/2}}; a \in {{\mathcal A}}>^{cl}$ such that $$V\pi(a)V^* {{\varrho}^{1/2}}= {\alpha}(a) {{\varrho}^{1/2}},$$ for each self-adjoint $a$ in ${{\mathcal A}}$. Let $\omega(\cdot) = Tr {\varrho}{\alpha}(\cdot)$. Denote by $\pi_{\omega}$ the $^*$-representation of ${{\mathcal A}}$ induced by $\omega$ on ${{\mathcal H}}_{\omega}$ and let $\Omega$ be a cyclic vector for $\pi_{\omega}({{\mathcal A}})$ in ${{\mathcal H}}_{\omega}$ such that $\omega(\cdot) = (\Omega, \pi_{\omega}(\cdot) \Omega)$. For selfadjoint $a \in {{\mathcal A}}$, define $V\pi_{\omega}(a) \Omega = {\alpha}(a) {{\varrho}^{1/2}}.$ The set $\{\pi_{\omega}(a) \Omega; a = a^*, a \in {{\mathcal A}}\}^{cl}$ is a real linear subspace of ${{\mathcal H}}_{\omega}$ whose complexification is dense in ${{\mathcal H}}_{\omega}$. If $\pi_{\omega}(a) \Omega =0$ then $$0 = (\pi_{\omega}(a^2) \Omega, \Omega) = \omega(a^2) = Tr {\varrho}{\alpha}(a^2) \ge Tr {\varrho}({\alpha}(a))^2 \ge 0.$$ Hence ${\alpha}(a) {{\varrho}^{1/2}}=0$. It follows that $V$ is well defined and linear. Note that $$V \pi_{\omega}({\bf 1})\Omega = V \Omega = {\alpha}({\bf 1}) {{\varrho}^{1/2}},$$ and that $$(V^* {{\varrho}^{1/2}}, \pi_{\omega}(a) \Omega) = ({{\varrho}^{1/2}}, V \pi_{\omega}(a) \Omega) = ({{\varrho}^{1/2}}, {\alpha}(a) {{\varrho}^{1/2}}) = \omega(a) = (\Omega, \pi_{\omega}(a) \Omega),$$ for any self-adjoint $a \in {{\mathcal A}}$. Thus $V^* {{\varrho}^{1/2}}= \Omega$ and $V \pi_{\omega}(a) V^* {{\varrho}^{1/2}}= {\alpha}(a) {{\varrho}^{1/2}}$ for each self-adjoint $a \in {{\mathcal A}}$. Moreover $$||{\alpha}(a) {{\varrho}^{1/2}}||^2 = ({\alpha}(a)^2 {{\varrho}^{1/2}}, {{\varrho}^{1/2}}) \le ({\alpha}(a^2) {{\varrho}^{1/2}}, {{\varrho}^{1/2}}) = \omega(a^2) = ||\pi_{\omega}(a) \Omega||^2,$$ so that $||V|| \le 1$ and with the identification, $\pi = \pi_{\omega}$, $\Omega = \Omega_{\pi}$, the proof is complete. Now, we recall (see Lemma 7.3 in [@S]): If ${\alpha}: {{\mathcal A}}\to {{\mathcal B}}({{\mathcal H}})$ is unital, positive map then $$\label{jeden} {\alpha}(a^*a + a a^*) \ge {\alpha}(a^*) {\alpha}(a) + {\alpha}(a) {\alpha}(a^*),$$ for all $a \in {{\mathcal A}}$. Lemma 3.2 and the inequality (\[jeden\]) lead to Every unital positive linear map of a $C^*$-algebra ${{\mathcal A}}$ into ${{\mathcal B}}({{\mathcal H}})$ is locally decomposable. Let ${\varrho}$, $\omega$ and $\pi_{\omega}$ be as in Lemma 3.2. Define ${\pi_{\omega}^{\prime}}$ in terms of the right kernel as a $^*$-anti-homomorphism (i.e. $<a,b> = \omega(ab^*)$, ${\rm I}_{\omega} = \{a; <a,a>=0 \}$, ${\pi_{\omega}^{\prime}}(c) (a + {\rm I}_{\omega}) =ac + {\rm I}_{\omega}$) of ${{\mathcal A}}$ on the Hilbert space ${{\mathcal H}}_{\omega}^{\prime}$ and let ${\widetilde{\pi_{\omega}}}= \pi_{\omega} \oplus {\pi_{\omega}^{\prime}}$. 
Let ${\widetilde{{{\mathcal H}}}}$ be the Hilbert space ${{\mathcal H}}_{\omega} \oplus {{\mathcal H}}_{\omega}^{\prime}$ with the inner product $$(z \oplus z^{\prime}, y \oplus y^{\prime}) = 1/2[(z,y) +<z^{\prime}, y^{\prime}>],$$ where $y,z \in {{\mathcal H}}_{\omega}$, $y^{\prime}, z^{\prime} \in {{\mathcal H}}_{\omega}^{\prime}$. ${\widetilde{\pi_{\omega}}}$ is a $C^*$-homomorphism of ${{\mathcal A}}$ into ${{\mathcal B}}({\widetilde{{{\mathcal H}}}})$. With $\Omega$ and $\Omega^{\prime}$ the vacuum vectors of $\omega$ for $\pi_{\omega}$ and $\pi_{\omega}^{\prime}$ respectively, let ${\widetilde{\Omega}}= \Omega \oplus \Omega^{\prime}$. Define a map $V^{\prime}$ of the linear submanifold ${\widetilde{\pi_{\omega}}}({{\mathcal A}}) {\widetilde{\Omega}}$ of ${\widetilde{{{\mathcal H}}}}$ into $<{\alpha}({{\mathcal A}}){{\varrho}^{1/2}}>^{cl}$ by $$V^{\prime} {\widetilde{\pi_{\omega}}}(a) {\widetilde{\Omega}}= {\alpha}(a) {{\varrho}^{1/2}},$$ for each $a \in {{\mathcal A}}$. Note that if ${\widetilde{\pi_{\omega}}}(a){\widetilde{\Omega}}=0$ then $\pi_{\omega}(a) \Omega =0 = {\pi_{\omega}^{\prime}}(a) \Omega^{\prime}$. Thus $$\pi_{\omega}(a^*) \pi_{\omega}(a) \Omega = \pi_{\omega}(a^*a) \Omega = 0 = {\pi_{\omega}^{\prime}}(a^*) {\pi_{\omega}^{\prime}}(a) \Omega^{\prime} = {\pi_{\omega}^{\prime}}(aa^*) \Omega^{\prime},$$ so that $\omega(aa^*) = 0 = \omega(a^*a)$. Thus by \[jeden\] $$0 = (({\alpha}(a^*a)+{\alpha}(aa^*)) {{\varrho}^{1/2}}, {{\varrho}^{1/2}}) \ge (({\alpha}(a^*) {\alpha}(a) + {\alpha}(a) {\alpha}(a^*)) {{\varrho}^{1/2}}, {{\varrho}^{1/2}}) \ge 0.$$ Hence ${\alpha}(a) {{\varrho}^{1/2}}= 0$. Consequently, $V^{\prime}$ is well defined and linear. Moreover, $$||V^{\prime}|| = sup \{ ||{\alpha}(a) {{\varrho}^{1/2}}|| : ||{\widetilde{\pi_{\omega}}}(a) {\widetilde{\Omega}}|| = 1 \} = sup \{ ||{\alpha}(a) {{\varrho}^{1/2}}|| : ||\pi_{\omega}(a) \Omega \oplus \pi^{\prime}_{\omega}(a) \Omega^{\prime}||^2 =1 \}$$ $$= sup \{ ||{\alpha}(a) {{\varrho}^{1/2}}|| : (({\alpha}(a^*a) + {\alpha}(aa^*)){{\varrho}^{1/2}}, {{\varrho}^{1/2}}) = 2 \}.$$ By \[jeden\], if $({\alpha}(a^*a +aa^*) {{\varrho}^{1/2}}, {{\varrho}^{1/2}}) = 2$ then $(({\alpha}(a^*) {\alpha}(a) + {\alpha}(a) {\alpha}(a^*)) {{\varrho}^{1/2}}, {{\varrho}^{1/2}}) \le 2$. Hence $||{\alpha}(a) {{\varrho}^{1/2}}||^2 \le 2$. Consequently $||V^{\prime}|| \le 2^{{1/2}}$. We extend $V^{\prime}$ by continuity to all of the subspace ${\widetilde{{{\mathcal H}}}}_0 = <{\widetilde{\pi_{\omega}}}({{\mathcal A}}) {\widetilde{\Omega}}>^{cl}$ and call the extension $V^{\prime}$. Define the linear map of ${\widetilde{{{\mathcal H}}}}$ into $<{\alpha}({{\mathcal A}}) {{\varrho}^{1/2}}>^{cl}$ in the following way: $V$ restricted to ${\widetilde{{{\mathcal H}}}}_0$ equals $V^{\prime}$ and $V$ restricted to orthocomplement of ${\widetilde{{{\mathcal H}}}}_0$ is equal to $0$. Then $||V|| \le 2^{1/2}$. Moreover, repeating the corresponding argument given in the proof of Lemma 3.2 one can show $(V^{\prime})^* {{\varrho}^{1/2}}= {\widetilde{\Omega}}$ and this completes the proof. Local separability 2. ===================== Having the notion of locally decomposable maps one might be tempted to study local PPT (positive partial transposition) property, now without any restriction with respect to dimension. One can also study relations between local separability and locally decomposable maps. To proceed with these questions one should evaluate functionals and study the coefficient $d(\cdot)$ on an arbitrary positive element of ${{\mathcal A}}$. 
To this end we propose the following definition. Let $\phi$ be a state on ${{\mathcal A}}= {{\mathcal A}}_1 \otimes {{\mathcal A}}_2$ and $A$ be an element in ${{\mathcal A}}$. The general coefficient of quantum correlations $d_0(\cdot)$ for $\phi$ and $A$ is defined as $$\label{nowa def} d_0(\phi, A) = \inf_{\mu \in M_{\phi}({{\mathcal S}})} | \int_{{{\mathcal S}}} \xi d\mu(\xi) (A) - \int_{{{\mathcal S}}_1 \times {{\mathcal S}}_2} \xi d(\boxtimes \mu)(\xi) (A)|.$$ To clarify this definition we recall that, by definition, $\mu_1$ and $\mu_{2}$ are probability measures on ${{\mathcal S}}({{\mathcal A}}_1)$ and ${{\mathcal S}}({{\mathcal A}}_2)$, respectively (they are basic ingredients of the definition of $\boxtimes\mu$; see Introduction or [@M]). Consequently, $\boxtimes \mu$ is a probability measure on ${{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2)$. However, as ${{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2) \subset {{\mathcal S}}$ is a measurable subset of ${{\mathcal S}}$, one can consider $\boxtimes \mu$ as a probability measure on ${{\mathcal S}}$ supported by ${{\mathcal S}}({{\mathcal A}}_1) \times {{\mathcal S}}({{\mathcal A}}_2)$. To summarize, $\int_{{{\mathcal S}}_1 \times {{\mathcal S}}_2} \xi d(\boxtimes \mu)(\xi)$ is a well defined element of ${{\mathcal S}}({{\mathcal A}})$. Therefore $\int_{{{\mathcal S}}_1 \times {{\mathcal S}}_2} \xi d(\boxtimes \mu)(\xi)(A) \equiv \sum_i \int_{{{\mathcal S}}_1 \times {{\mathcal S}}_2} \xi d(\boxtimes \mu)(\xi)(a_i \otimes b_i)$ is also well defined ($A=\sum_i a_i \otimes b_i$ is a general element of ${{\mathcal A}}$). Obviously, the just given definition of $d_0(\cdot)$ is equivalent to that given for $d(\cdot)$ (cf [@M]) if one restricts oneself to simple tensors! Moreover, it is worth noting that, in measure terms, separability of $\phi$ is equivalent to $\boxtimes \mu \in M_{\phi}({{\mathcal S}})$ (cf [@A]). Let us consider a state $\phi$ on ${{\mathcal A}}$ such that $d_0(\phi, A) =0$ for some fixed $A \in {{\mathcal A}}\equiv {{\mathcal A}}_1 \otimes {{\mathcal A}}_2$, where ${{\mathcal A}}_1, {{\mathcal A}}_2$ are finite dimensional $C^*$-algebras. This is the most important case considered within Quantum Information Theory. The general case needs more complicated arguments based on approximation procedures and it will not be considered here. We also assume that $A \ge 0$ and we suppose that the measure $\mu$ appearing in the condition $d_0(\phi, A) =0$ is finitely supported. This involves no loss of generality, as there exist (finite) optimal decompositions (cf [@M]). Then there are states $\{ \phi^1_{A;i}\} \subset {{\mathcal S}}({{\mathcal A}}_1)$ and $\{ \phi^2_{A;i}\} \subset {{\mathcal S}}({{\mathcal A}}_2)$, and non-negative numbers $\lambda_i(A)$, $\sum_i \lambda_i(A) = 1$, such that: $$\phi(A) \equiv \phi(\sum_{kl} a^*_k a_l \otimes b^*_k b_l) = \sum_i \sum_{kl} \lambda_i(A) \phi^1_{A, i}( a^*_k a_l) \phi^2_{A, i}( b^*_k b_l).$$ Now we are in a position to analyse $\phi \circ (\alpha \otimes id_2)$ for a state $\phi$ on ${{\mathcal A}}$ having $d_0(\phi, A) =0$ for all $A \in {{\mathcal A}}$. Here, $\alpha$ is an arbitrary linear unital positive map on ${{\mathcal A}}_1$; $\alpha : {{\mathcal A}}_1 \to {{\mathcal A}}_1$.
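As a concrete, finite-dimensional illustration of the functional $\phi \circ (\alpha \otimes id_2)$, the short NumPy sketch below (an illustrative aside, not part of the argument; the helper name is ours) takes ${{\mathcal A}}_1={{\mathcal A}}_2=M_2({{\mathbb{C}}})$ and chooses $\alpha$ to be the transposition, a positive but not completely positive map: for a product state the transformed functional stays positive, while for a maximally entangled state it does not, which is exactly the positive partial transposition (PPT) test mentioned at the beginning of this section.

```python
import numpy as np

def partial_transpose_first(rho):
    """(transposition (x) id_2) applied to a density matrix on M_2 (x) M_2,
    i.e. the transpose taken on the first tensor factor only."""
    r = rho.reshape(2, 2, 2, 2)          # r[i, j, k, l] = <i j| rho |k l>
    return r.transpose(2, 1, 0, 3).reshape(4, 4)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
states = {
    "product state |00><00|": np.diag([1.0, 0.0, 0.0, 0.0]),
    "maximally entangled state": np.outer(bell, bell),
}
for name, rho in states.items():
    eigs = np.linalg.eigvalsh(partial_transpose_first(rho))
    # phi o (T (x) id_2) is a positive functional iff all these eigenvalues are >= 0
    print(name, "-> smallest eigenvalue:", round(float(eigs.min()), 3))
# product state |00><00| -> smallest eigenvalue: 0.0
# maximally entangled state -> smallest eigenvalue: -0.5
```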
Moreover, we put $A \ge 0$ and again observe that $$\begin{aligned} \label{aa} (\phi \circ (\alpha \otimes id_2))(A) = \sum_i \sum_{kl} \lambda_i(A) \phi_{A,i}^1(\alpha(a^*_k a_l)) \phi_{A,i}^2(b^*_k b_l) \nonumber \\ = \sum_i \sum_{kl} \lambda_i(A) \phi_{A,i}^1(V_{\phi, i, A}^* \pi_{\phi, i, A}(a^*_k a_l) V_{\phi, i, A}) \phi_{A,i}^2(b^*_k b_l),\end{aligned}$$ where $\pi_{\phi, i, A}(\cdot)$ is a $C^*$-morphism (cf Section 3). Our first remark on (\[aa\]) is that any $C^*$-morphism is, in fact, a sum of a $^*$-morphism and a $^*$-antimorphism (cf [@Tak] or [@BR]). The second observation says that $\{a^*_ka_l \}_{kl} $ and $\{b^*_k b_l \}_{kl}$ are positive semidefinite matrices with ${{\mathcal A}}_1$ (${{\mathcal A}}_2$)-valued entries (cf [@Tak]). Taking states $\varphi^1$ and $\varphi^2$ on ${{\mathcal A}}_1$ and ${{\mathcal A}}_2$ respectively, one gets positive semidefinite matrices $\{ \varphi^1(a^*_ka_l) \}_{kl}$ and $\{ \varphi^2(b^*_kb_l) \}_{kl}$ with entries in ${{\mathbb{C}}}$. The next remark is that the Hadamard product of positive semidefinite matrices is a positive semidefinite matrix (cf [@Ha]). Finally, we recall that the transpose of a positive semidefinite matrix with complex-valued entries is again positive semidefinite. Taking all that into account one gets: \[lemat\] Assume that the antimorphism in the decomposition of $\pi_{\phi, i, A}$ is the composition of a $^*$-morphism with the transposition. Then, for any positive $A \in {{\mathcal A}}$, $(\phi \circ (\alpha \otimes id_2))(A)$ is positive. Hence, provided that the assumption of this Lemma is satisfied, a state $\phi$ with $d_0(\phi, A)=0$ for any $A \in {{\mathcal A}}$ is separable. We have used the fact that only separable states remain positive (globally) under “partially positive maps” (see [@W], [@P], [@H] and [@MM]). It is well known that any antimorphism can be represented as the composition of a morphism and the transposition (the transposition is an antimorphism of order two, while the composition of two antimorphisms leads to a morphism). Thus, the assumption of Lemma \[lemat\] is always satisfied. As a conclusion one has that the condition $d_0(\phi,A)=0$ for any $A\in {{\mathcal A}}$ is a sufficient condition for separability of $\phi$. Hence, we get the following result. Assume ${{\mathcal A}}$ is the tensor product of finite dimensional $C^*$-algebras ${{\mathcal A}}_1$ and ${{\mathcal A}}_2$. Then, a state $\phi$ is separable if and only if $d_0(\phi; A) =0$ for any $A \in {{\mathcal A}}$. We have just proved (Lemma \[lemat\]) that $d_0(\phi;A)=0$ for all $A \in {{\mathcal A}}$ implies separability of $\phi$. Conversely, the definition of separability implies that the coefficient $d_0$ is equal to zero (cf [@M]). This completes the proof. We want to close this section with the obvious remark that, for a state $\phi$ with $d_0(\phi, A)=0$ for any $A \in {{\mathcal A}}$, the positivity of the partial transposition is a sufficient condition for separability.

Acknowledgments
================

The author would like to thank the organisers of the Conference on Quantum Probability and Infinite Dimensional Analysis, HPRN-CT-2002-00279, Greifswald, Germany, in particular Michael Schürmann, and the participants for a very nice and interesting conference. He thanks Reinhard Werner, Louis Labuschagne and Marcin Marciniak for useful discussions on separability and quantum correlations. He would also like to acknowledge the support of the KBN grant PB/1490/PO3/2003/25. [99]{} E.
Alfsen, [*Compact convex sets and boundary integrals*]{}, Springer Verlag (1971). O. Bratteli and D. W. Robinson, [*Operator Algebras and Quantum Statistical Mechanics*]{}, Springer Verlag, New York-Heidelberg-Berlin, vol. I (1979). M. Horodecki, P. Horodecki, R. Horodecki, Separability of mixed states: necessary and sufficient conditions, [*Phys. Lett. A*]{} [**223**]{} (1996), 1-8. P. R. Halmos, [*Finite-Dimensional Vector Spaces*]{}, second edition, D. Van Nostrand Company, Inc. (1958). W. A. Majewski, On entanglement of states and quantum correlations, in [*Operator Algebras and Mathematical Physics*]{}, Eds. J.M. Combes, J. Cuntz, G.A. Elliott, G. Nenciu, H. Siedentop, S. Stratila; [*Theta*]{}, Bucharest, 2003. e-print, LANL math-ph/0202030. W. A. Majewski and M. Marciniak, On a characterization of positive maps, [*J. Phys. A: Math. Gen.*]{} [**34**]{} (2001), 5863-5874. A. Peres, Separability criterion for density matrices, [*Phys. Rev. Lett.*]{} [**77**]{} (1996), 1413. E. St[ø]{}rmer, Positive linear maps of operator algebras, [*Acta Mathematica*]{} [**110**]{} (1963), 233-278. M. Takesaki, [*Theory of Operator Algebras*]{}, Springer Verlag, Berlin-Heidelberg-New York (1979). G. Wittstock, [*Ordered Normed Tensor Products*]{}, in “Foundations of Quantum Mechanics and Ordered Linear Spaces” (Advanced Study Institute held in Marburg), A. Hartkämper and H. Neumann, eds., [*Lecture Notes in Physics*]{}, vol. [**29**]{}, Springer Verlag (1974).
---
abstract: 'We combine a recently developed [*ab initio*]{} many-body approach capable of describing simultaneously both bound and scattering states, the [*ab initio*]{} NCSM/RGM, with an importance truncation scheme for the cluster eigenstate basis and demonstrate its applicability to nuclei with mass numbers as high as 17. Using soft similarity renormalization group evolved chiral nucleon-nucleon interactions, we first calculate nucleon-$^4$He phase shifts, cross sections and analyzing power. Next, we investigate nucleon scattering on $^7$Li, $^7$Be, $^{12}$C and $^{16}$O in coupled-channel NCSM/RGM calculations that include low-lying excited states of these nuclei. We check the convergence of phase shifts with the basis size and study $A=8$, $13$, and $17$ bound and unbound states. Our calculations predict low-lying resonances in $^8$Li and $^8$B that have not been clearly identified experimentally yet. We are able to reproduce reasonably well the structure of the $A=13$ low-lying states. However, we find that $A=17$ states cannot be described without an improved treatment of $^{16}$O one-particle-one-hole excitations and $\alpha$ clustering.'
author:
- 'Petr Navr[á]{}til$^1$, Robert Roth$^2$, and Sofia Quaglioni$^1$'
title: '[*Ab initio*]{} many-body calculations of nucleon scattering on $^4$He, $^7$Li, $^7$Be, $^{12}$C and $^{16}$O'
---

Introduction
============

Nuclei are quantum many-body systems with both bound and unbound states. A realistic [*ab initio*]{} description of light nuclei with predictive power must have the capability to describe both classes of states within a unified framework. Over the past decade, significant progress has been made in our understanding of the properties of the bound states of light nuclei starting from realistic nucleon-nucleon ($NN$) interactions, see e.g. Ref. [@benchmark] and references therein, and more recently also from $NN$ plus three-nucleon ($NNN$) interactions [@Nogga00; @GFMC; @NO03]. The solution of the nuclear many-body problem becomes more complex when scattering or nuclear reactions are considered. For $A=3$ and 4 nucleon systems, the Faddeev [@Witala01] and Faddeev-Yakubovsky [@Lazauskas05] as well as the hyperspherical harmonics (HH) [@Pisa] or the Alt, Grassberger and Sandhas (AGS) [@Deltuva] methods are applicable and successful. However, [*ab initio*]{} calculations for unbound states and scattering processes involving more than four nucleons in total are quite challenging. The first [*ab initio*]{} many-body neutron-$^4$He scattering calculations were performed within the Green’s Function Monte Carlo method using the Argonne $NN$ potential and the Illinois $NNN$ interaction [@GFMC_nHe4]. Also, resonances in He isotopes were investigated within the coupled-cluster method using the Gamow basis [@Ha07]. In a new development, we have recently combined the [*ab initio*]{} no-core shell model (NCSM) [@NCSMC12] and the resonating-group method (RGM) [@RGM; @RGM1; @RGM2; @RGM3; @Lovas98; @Hofmann08] into a new many-body approach [@NCSMRGM; @NCSMRGM_PRC] ([*ab initio*]{} NCSM/RGM) capable of treating bound and scattering states of light nuclei in a unified formalism, starting from fundamental inter-nucleon interactions. The NCSM is an [*ab initio*]{} approach to the microscopic calculation of ground and low-lying excited states of light nuclei with realistic two- and, in general, three-nucleon forces.
The RGM is a microscopic cluster technique based on the use of $A$-nucleon Hamiltonians, with fully anti-symmetric many-body wave functions built assuming that the nucleons are grouped into clusters. Although most of its applications are based on the use of binary-cluster wave functions, the RGM can be formulated for three (and, in principle, even more) clusters in relative motion [@RGM1]. The use of the harmonic oscillator (HO) basis in the NCSM results in an incorrect description of the wave-function asymptotics and a lack of coupling to the continuum. By combining the NCSM with the RGM, we complement the ability of the RGM to deal with scattering and reactions with the use of realistic interactions, and a consistent [*ab initio*]{} description of the nucleon clusters, achieved via the NCSM. Presently, the NCSM/RGM approach has been formulated for processes involving binary-cluster systems only. However, extensions of the approach to include three-body cluster channels are feasible, also in view of recent developments on the treatment of both three-body bound and continuum states (see, e.g., Refs. [@3bbound1; @3bcont1; @3bcnfr; @3bbound2; @3bcont2]). As described in detail in Refs. [@NCSMRGM; @NCSMRGM_PRC], the [*ab initio*]{} NCSM/RGM approach has already been applied to study the $n\,$-${}^3$H, $n\,$-${}^4$He, $n\,$-${}^{10}$Be, and $p\,$-${}^{3,4}$He scattering processes, and address the parity inversion of the $^{11}$Be ground state, using realistic $NN$ potentials. In that work, we demonstrated convergence of the approach with increasing basis size in the case of the $A=4$ and $A=5$ scattering. The $n\,$-${}^{10}$Be calculations were, on the other hand, performed only in a limited basis due to the computational complexity of the NCSM calculations of the $^{10}$Be eigenstates. It is the purpose of the present paper to expand the applicability of the NCSM/RGM beyond the lightest nuclei by using sufficiently large $N\hbar\Omega$ HO excitations to guarantee convergence of the calculation with the HO basis expansion of both the cluster wave functions and the localized RGM integration kernels. The use of large $N\hbar\Omega$ values is now feasible due to the recent introduction of the importance truncated (IT) NCSM scheme [@IT-NCSM; @Roth09]. It turns out that many of the basis states used in the NCSM calculations are irrelevant for the description of any particular eigenstate, e.g., the ground state or a set of low-lying states. Therefore, if one were able to identify the important basis states beforehand, one could reduce the dimension of the matrix eigenvalue problem without losing predictive power. This can be done using an importance truncation scheme based on many-body perturbation theory [@IT-NCSM]. We make use of the IT NCSM wave functions for the cluster eigenstates, in particular the eigenstates of the target nucleus of the binary nucleon-nucleus system, and calculate the one- and two-body densities that are then used to obtain the NCSM/RGM integration kernels. We benchmark the IT approach in basis sizes accessible by the full calculation and apply it within still larger basis sizes until convergence is reached for target nuclei as heavy as $^{12}$C or $^{16}$O. In this study, we employ a similarity renormalization group (SRG) [@SRG; @Roth_SRG; @Roth_PPNP] evolved chiral N$^3$LO $NN$ potential [@N3LO] (SRG-N$^3$LO) that is soft enough to allow us to reach convergence within about $14-16\hbar\Omega$ HO excitations in the basis expansion. In Sect.
\[formalism\], we briefly overview the NCSM/RGM formalism and present for the first time the IT NCSM scheme that includes both ground and low-lying excited states in the set of reference states. Next, we present scattering calculation results for the $n$-$^4$He and $p$-$^4$He systems in Sect. \[n4He\]. In particular, we compare the calculated phase shifts to an R-matrix analysis of experimental data and, further, calculated differential cross sections and analyzing powers in the energy range 6-19 MeV to the corresponding experimental data. Neutron elastic and inelastic scattering on $^7$Li and proton elastic and inelastic scattering on $^7$Be are investigated in Sect. \[n7Li\]. We present phase shifts, cross sections and scattering lengths. We predict resonances in $^8$Li and $^8$B that have not been clearly identified in experiments yet. In Sect. \[n12C\], we discuss nucleon-$^{12}$C results for both bound and unbound states of $^{13}$C and $^{13}$N, obtained including two $^{12}$C bound states, the ground and the first $2^+$ state, in the NCSM/RGM coupled-channel calculations. In Sect. \[n16O\], we present results for the nucleon-$^{16}$O system. In the NCSM/RGM coupled-channel calculations, we take into account the $^{16}$O ground state and up to the lowest three $^{16}$O negative-parity states. Conclusions are given in Sect. \[conclusions\]. Formalism ========= NCSM/RGM {#NCSMRGM} -------- The [*ab initio*]{} NCSM/RGM approach was introduced in Ref. [@NCSMRGM] with details of the formalism given in Ref. [@NCSMRGM_PRC]. Here we give a brief overview of the main points. In the present paper, we limit ourselves to a two-cluster RGM, which is based on binary-cluster channel states of total angular momentum $J$, parity $\pi$, and isospin $T$, $$\begin{aligned} |\Phi^{J^\pi T}_{\nu r}\rangle &=& \Big [ \big ( \left|A{-}a\, \alpha_1 I_1^{\,\pi_1} T_1\right\rangle \left |a\,\alpha_2 I_2^{\,\pi_2} T_2\right\rangle\big ) ^{(s T)}\nonumber\\ &&\times\,Y_{\ell}\left(\hat r_{A-a,a}\right)\Big ]^{(J^\pi T)}\,\frac{\delta(r-r_{A-a,a})}{rr_{A-a,a}}\,.\label{basis}\end{aligned}$$ In the above expression, $\left|A{-}a\, \alpha_1 I_1^{\,\pi_1} T_1\right\rangle$ and $\left |a\,\alpha_2 I_2^{\,\pi_2} T_2\right\rangle$ are the internal (antisymmetric) wave functions of the first and second cluster, containing $A{-}a$ and $a$ nucleons ($a{<}A$), respectively. They are characterized by angular momentum quantum numbers $I_1$ and $I_2$ coupled together to form channel spin $s$. For their parity, isospin and additional quantum numbers we use, respectively, the notations $\pi_i, T_i$, and $\alpha_i$, with $i=1,2$. The cluster centers of mass are separated by the relative coordinate $$\vec r_{A-a,a} = r_{A-a,a}\hat r_{A-a,a}= \frac{1}{A - a}\sum_{i = 1}^{A - a} \vec r_i - \frac{1}{a}\sum_{j = A - a + 1}^{A} \vec r_j\,,$$ where $\{\vec{r}_i, i=1,2,\cdots,A\}$ are the $A$ single-particle coordinates. The channel states (\[basis\]) have relative angular momentum $\ell$. It is convenient to group all relevant quantum numbers into a cumulative index $\nu=\{A{-}a\,\alpha_1I_1^{\,\pi_1} T_1;\, a\, \alpha_2 I_2^{\,\pi_2} T_2;\, s\ell\}$. These basis states can be used to expand the many-body wave function according to $$|\Psi^{J^\pi T}\rangle = \sum_{\nu} \int dr \,r^2\frac{g^{J^\pi T}_\nu(r)}{r}\,\hat{\mathcal A}_{\nu}\,|\Phi^{J^\pi T}_{\nu r}\rangle\,.
\label{trial}$$ As the basis states (\[basis\]) are not anti-symmetric under exchange of nucleons belonging to different clusters, in order to preserve the Pauli principle one has to introduce the appropriate inter-cluster anti-symmetrizer, schematically $\hat{\mathcal A}_{\nu}=\sqrt{\frac{(A{-}a)!a!}{A!}}\sum_{P}(-)^pP\,,$ where the sum runs over all possible permutations $P$ that can be carried out among nucleons pertaining to different clusters, and $p$ is the number of interchanges characterizing them. The coefficients of the expansion (\[trial\]) are the relative-motion wave functions $g^{J^\pi T}_\nu(r)$, which represent the only unknowns of the problem. To determine them one has to solve the non-local integro-differential coupled-channel equations $$\sum_{\nu}\int dr \,r^2\left[{\mathcal H}^{J^\pi T}_{\nu^\prime\nu}(r^\prime, r)-E\,{\mathcal N}^{J^\pi T}_{\nu^\prime\nu}(r^\prime,r)\right] \frac{g^{J^\pi T}_\nu(r)}{r} = 0\,,\label{RGMeq}$$ where the two integration kernels, the Hamiltonian kernel, $${\mathcal H}^{J^\pi T}_{\nu^\prime\nu}(r^\prime, r) = \left\langle\Phi^{J^\pi T}_{\nu^\prime r^\prime}\right|\hat{\mathcal A}_{\nu^\prime}H\hat{\mathcal A}_{\nu}\left|\Phi^{J^\pi T}_{\nu r}\right\rangle\,,\label{H-kernel}$$ and the norm kernel, $${\mathcal N}^{J^\pi T}_{\nu^\prime\nu}(r^\prime, r) = \left\langle\Phi^{J^\pi T}_{\nu^\prime r^\prime}\right|\hat{\mathcal A}_{\nu^\prime}\hat{\mathcal A}_{\nu}\left|\Phi^{J^\pi T}_{\nu r}\right\rangle\,,\label{N-kernel}$$ contain all the nuclear structure and anti-symmetrization properties of the problem. In particular, the non-locality of the kernels is a direct consequence of the exchanges of nucleons between the clusters. We have used the notation $E$ and $H$ to denote the total energy in the center-of-mass frame, and the intrinsic $A$-nucleon microscopic Hamiltonian, respectively. The formalism presented above is combined with the [*ab initio*]{} NCSM in two steps: First, we note that the Hamiltonian can be written as $$\label{Hamiltonian} H=T_{\rm rel}(r)+ {\mathcal V}_{\rm rel} +\bar{V}_{\rm C}(r)+H_{(A-a)}+H_{(a)}\,,$$ where $H_{(A-a)}$ and $H_{(a)}$ are the ($A{-}a$)- and $a$-nucleon intrinsic Hamiltonians, respectively, $T_{\rm rel}(r)$ is the relative kinetic energy and ${\mathcal V}_{\rm rel}$ is the sum of all interactions between nucleons belonging to different clusters after subtraction of the average Coulomb interaction between them, explicitly singled out in the term $\bar{V}_{\rm C}(r)=Z_{1\nu}Z_{2\nu}e^2/r$ ($Z_{1\nu}$ and $Z_{2\nu}$ being the charge numbers of the clusters in channel $\nu$). We use identical realistic potentials in both the cluster’s Hamiltonians and inter-cluster interaction ${\mathcal V}_{\rm rel}$. Accordingly, $\left|A{-}a\, \alpha_1 I_1^{\,\pi_1} T_1\right\rangle$ and $\left |a\,\alpha_2 I_2^{\,\pi_2} T_2\right\rangle$ are obtained by diagonalizing $H_{(A-a)}$ and $H_{(a)}$, respectively, in the model space spanned by the NCSM $N_{\rm max}\hbar\Omega$ HO basis. Note that in the present paper we use soft SRG evolved $NN$ potentials. Therefore, there is no need to derive any further effective interaction tailored to the model space truncation as with these soft interactions our calculations converge in the model spaces we are able to reach. Second, we replace the delta functions in the localized parts of the Hamiltonian (\[H-kernel\]) and the norm (\[N-kernel\]) kernels with their representation in the HO model space. 
We use the same HO frequency as for the cluster eigenstate wave functions and a consistent model space size ($N_{\rm max}$). We emphasize that this replacement is performed only for the localized parts of the kernels. The diagonal parts coming from the identity operator in the antisymmetrizers, the kinetic term and the average Coulomb potential are treated exactly. In this paper, we apply the NCSM/RGM formalism in the single-nucleon projectile basis, i.e., for binary-cluster channel states (\[basis\]) with $a=1$ (with channel index $\nu = \{ A{-}1 \, \alpha_1 I_1^{\pi_1} T_1; \, 1\, \frac 1 2 \frac 1 2;\, s\ell\}$). As an illustration, let us discuss in more detail the norm kernel that is rather simple in this basis: $$\begin{aligned} {\mathcal N}^{J^\pi T}_{\nu^\prime\nu}(r^\prime, r)& = &\left\langle\Phi^{J^\pi T}_{\nu^\prime r^\prime}\right|1-\sum_{i=1}^{A-1}\hat P_{iA} \left|\Phi^{J^\pi T}_{\nu r}\right\rangle \\ &=&\delta_{\nu^\prime\,\nu}\,\frac{\delta(r^\prime-r)}{r^\prime\,r}-(A-1)\sum_{n^\prime n}R_{n^\prime\ell^\prime}(r^\prime) R_{n\ell}(r)\nonumber\\ &&\times \left\langle\Phi^{J^\pi T}_{\nu^\prime n^\prime}\right|\hat P_{A-1,A} \left|\Phi^{J^\pi T}_{\nu n}\right\rangle\,.\label{norm}\end{aligned}$$ We can easily recognize a direct term, in which initial and final state are identical (corresponding to diagram $(a)$ of Fig. \[diagram-norm-pot\]), and a many-body correction due to the exchange part of the inter-cluster anti-symmetrizer (corresponding to diagram $(b)$ of Fig. \[diagram-norm-pot\]). We note that in calculating the matrix elements of the exchange operator $\hat P_{A-1,A}$ we replaced the delta function of Eq. (\[basis\]) with its representation in the HO model space as discussed above. This is appropriate as the transposition operator $\hat P_{A-1,A}$ acting on the target wave function is short-to-medium range. On the contrary, the $\delta$-function coming from the identity is treated exactly. The presence of the inter-cluster anti-symmetrizer also affects the Hamiltonian kernel, and, in particular, the matrix elements of the interaction. For an $NN$ potential one obtains a direct term involving interaction and exchange of two nucleons only (see diagrams ($c$) and ($d$) of Fig. \[diagram-norm-pot\]), and an exchange term involving three nucleons. Diagram ($e$) of Fig. \[diagram-norm-pot\] describes this latter term, in which the last nucleon is exchanged with one of the nucleons of the first cluster, and interacts with yet another nucleon. For more details on the integration kernels in the single-nucleon projectile basis we refer the readers to Ref. [@NCSMRGM_PRC]. Being translationally-invariant quantities, the norm and Hamiltonian kernels can be “naturally” derived working within the NCSM Jacobi-coordinate basis. However, by introducing Slater-determinant channel states of the type $$\begin{aligned} |\Phi^{J^\pi T}_{\nu n}\rangle_{\rm SD} &=& \Big [\big (\left|A{-}a\, \alpha_1 I_1 T_1\right\rangle_{\rm SD} \left |a\,\alpha_2 I_2 T_2\right\rangle\big )^{(s T)}\nonumber\\ &&\times Y_{\ell}(\hat R^{(a)}_{\rm c.m.})\Big ]^{(J^\pi T)} R_{n\ell}(R^{(a)}_{\rm c.m.})\,, \label{SD-basis}\end{aligned}$$ in which the eigenstates of the $(A{-}a)$-nucleon fragment are obtained in the SD basis (while the second cluster is still an NCSM Jacobi-coordinate eigenstate), it can be easily demonstrated that translationally invariant matrix elements can be extracted from those calculated in the SD basis of Eq.
(\[SD-basis\]) by inverting the following expression: $$\begin{aligned} && {}_{\rm SD}\!\left\langle\Phi^{J^\pi T}_{\nu^\prime n^\prime}\right|\hat{\mathcal O}_{\rm t.i.}\left|\Phi^{J^\pi T}_{\nu n}\right\rangle\!{}_{\rm SD} = \nonumber\\ &&\nonumber\\ &&\sum_{n^\prime_r \ell^\prime_r, n_r\ell_r, J_r} \left\langle\Phi^{J_r^{\pi_r} T}_{\nu^\prime_r n^\prime_r}\right|\hat{\mathcal O}_{\rm t.i.}\left|\Phi^{J_r^{\pi_r} T}_{\nu_r n_r}\right\rangle\nonumber\\ && \times \sum_{NL} \hat \ell \hat \ell^\prime \hat J_r^2 (-1)^{(s+\ell-s^\prime-\ell^\prime)} \left\{\begin{array}{ccc} s &\ell_r& J_r\\ L& J & \ell \end{array}\right\} \left\{\begin{array}{ccc} s^\prime &\ell^\prime_r& J_r\\ L& J & \ell^\prime \end{array}\right\}\nonumber\\ &&\nonumber\\ && \times\langle n_r\ell_rNL\ell | 00n\ell\ell \rangle_{\frac{a}{A-a}} \;\langle n^\prime_r\ell^\prime_rNL\ell | 00n^\prime\ell^\prime\ell^\prime \rangle_{\frac{a}{A-a}} \,.\label{Oti} \end{aligned}$$ Here $\hat {\mathcal O}_{\rm t.i.}$ represents any scalar and parity-conserving translational-invariant operator ($\hat {\mathcal O}_{\rm t.i.} = \hat{\mathcal A}$, $\hat{\mathcal A} H \hat{\mathcal A}$, etc.). We exploited both Jacobi-coordinate and SD channel states to verify our results. The use of the SD basis is computationally advantageous and allows us to explore reactions involving $p$-shell nuclei, as done in the present work. In order to calculate the parts of the integration kernels depicted in Fig. \[diagram-norm-pot\] (b), (c) and (d), all information that we need from the SD basis calculation are one-body densities of the target eigenstates. For the (e) part of the integration kernel in Fig. \[diagram-norm-pot\], we need two-body densities of the target eigenstates obtained in the SD basis. Due to the presence of the norm kernel ${\mathcal N}^{J^\pi T}_{\nu^\prime\nu}(r^\prime, r)$, Eq. (\[RGMeq\]) does not represent a system of multichannel Schrödinger equations, and $g^{J^\pi T}_\nu(r)$ do not represent Schrödinger wave functions. The short-range non-orthogonality, induced by the non-identical permutations in the inter-cluster anti-symmetrizers, can be removed by introducing normalized Schrödinger wave functions $$\frac{\chi^{J^\pi T}_\nu(r)}{r} = \sum_{\gamma}\int dy\, y^2 {\mathcal N}^{\frac12}_{\nu\gamma}(r,y)\,\frac{g^{J^\pi T}_\gamma(y)}{y}\,,$$ where ${\mathcal N}^{\frac12}$ is the square root of the norm kernel, and applying the inverse-square root of the norm kernel, ${\mathcal N}^{-\frac12}$, to both left and right-hand side of the square brackets in Eq. (\[RGMeq\]). This procedure, explained in more detail in Ref. [@NCSMRGM_PRC], leads to a system of multichannel Schrödinger equations $$\begin{aligned} &&[\hat T_{\rm rel}(r) + \bar V_{\rm C}(r) -(E - E_{\alpha_1}^{I_1^{\pi_1} T_1} - E_{\alpha_2}^{I_2^{\pi_2} T_2})]\frac{\chi^{J^\pi T}_{\nu} (r)}{r} \nonumber\\[2mm] &&+ \sum_{\nu^\prime}\int dr^\prime\,r^{\prime\,2} \,W^{J^\pi T}_{\nu \nu^\prime}(r,r^\prime)\,\frac{\chi^{J^\pi T}_{\nu^\prime}(r^\prime)}{r^\prime} = 0,\label{r-matrix-eq}\end{aligned}$$ where $E_{\alpha_i}^{I_i^{\pi_i} T_i}$ is the energy eigenvalue of the $i$-th cluster ($i=1,2$), and $W^{J^\pi T}_{\nu^\prime \nu}(r^\prime,r)$ is the overall non-local potential between the two clusters, which depends on the channel of relative motion, while it does not depend on the energy. These are the equations that we finally solve to obtain both our scattering and bound-state results. 
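As a purely illustrative aside (not part of the actual NCSM/RGM codes), the orthogonalization step just described can be sketched in a few lines of Python once the Hamiltonian and norm kernels have been discretized on a combined channel and radial-grid index. The matrix names, the quadrature treatment, and the cutoff `eps` used to remove the Pauli-forbidden (vanishing-norm) eigenvectors are our own choices; a realistic calculation would additionally match the solutions to the correct scattering asymptotics (e.g. with an R-matrix technique), which the sketch omits.

```python
import numpy as np

def orthogonalize_and_solve(H_kernel, N_kernel, weights, eps=1e-8):
    """Schematic orthogonalization of discretized RGM kernels.

    H_kernel, N_kernel : (M, M) arrays holding the Hamiltonian and norm
        kernels on a combined (channel, radial-grid) index.
    weights : radial quadrature weights (including the r^2 measure).
    Returns eigenvalues and eigenvectors of the orthogonalized problem.
    """
    # Fold in the quadrature weights so that the generalized problem
    # H g = E N g becomes a symmetric matrix problem.
    W = np.diag(np.sqrt(weights))
    H = W @ H_kernel @ W
    N = W @ N_kernel @ W

    # Eigendecomposition of the norm kernel; eigenvalues close to zero
    # correspond to Pauli-forbidden states and must be excluded before
    # forming the inverse square root.
    n_eval, n_evec = np.linalg.eigh(N)
    keep = n_eval > eps
    N_inv_sqrt = (n_evec[:, keep]
                  @ np.diag(n_eval[keep] ** -0.5)
                  @ n_evec[:, keep].T)

    # Hermitian, energy-independent Hamiltonian in the chi representation.
    H_ortho = N_inv_sqrt @ H @ N_inv_sqrt
    energies, chi = np.linalg.eigh(H_ortho)
    return energies, chi
```

In this toy setting, eigenvalues of `H_ortho` lying below the lowest cluster threshold approximate the bound states discussed below, while the continuum solutions require the asymptotic matching mentioned above.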
Importance truncated NCSM with excited states {#ITNCSM} --------------------------------------------- The primary limitation for the range of applicability of the NCSM in terms of particle number $A$ and model spaces size $N_{\max}$ results from the factorial growth of the dimension of the $N_{\max}\hbar\Omega$ space. Except for light isotopes, it is hardly possible to obtain a converged result using a ’bare’ Hamiltonian within the $N_{\max}\hbar\Omega$ spaces that are computationally tractable. At this point the importance truncation offers a solution. The importance truncation in connection with the NCSM was introduced in Ref. [@IT-NCSM] and discussed in detail in Ref. [@Roth09]. In the following we summarize a few key features of the IT-NCSM and generalize the approach to the simultaneous description of excited states. The motivation for the importance truncation results from the observation that the expansion of any particular eigenstate of the Hamiltonian in a full $m$-scheme NCSM space typically contains a large number of basis states with extremely small or vanishing amplitudes. The amplitudes define an adaptive truncation criterion, which takes into account the properties of the Hamiltonian and the structure of the eigenstate under consideration. If those amplitudes were known—at least approximately—before actually solving the eigenvalue problem, one could reduce the model space to the most relevant basis states by imposing a threshold on the amplitude. The amplitude of a particular basis state $| \Phi_{\nu} \rangle$ in the expansion of a specific eigenstate can be estimated using first-order multiconfigurational perturbation theory. In order to set up a perturbation series we need an initial approximation of the target state, the so-called reference state $| \Psi_{\text{ref}} \rangle$. In practice this reference state will be a superposition of basis states $| \Phi_{\mu} \rangle \in \mathcal{M}_{\text{ref}}$ from a reference space $\mathcal{M}_{\text{ref}}$: $$\label{eq:itncsm_referencestate} | \Psi_{\text{ref}} \rangle = \sum_{\mu \in \mathcal{M}_{\text{ref}}} C_{\mu}^{(\text{ref})} | \Phi_{\mu} \rangle \;.$$ The reference state and the amplitudes $C_{\mu}^{(\text{ref})}$ are typically extracted from a previous NCSM calculation. Based on $| \Psi_{\text{ref}} \rangle$ as unperturbed state, we can evaluate the first-order perturbative correction to the target state resulting from basis states $| \Phi_{\nu} \rangle \notin \mathcal{M}_{\text{ref}}$. Their first-order amplitude defines the so-called importance measure $$\label{eq:itncsm_importancemeasure} \kappa_{\nu} = -\frac{\langle \Phi_\nu | H | \Psi_{\text{ref}} \rangle}{\epsilon_\nu - \epsilon_{\text{ref}}} = -\sum_{\mu\in\mathcal{M}_{\text{ref}}} C_{\mu}^{(\text{ref})} \frac{\langle \Phi_\nu | H | \Phi_\mu \rangle}{\epsilon_\nu - \epsilon_{\text{ref}}} \;.$$ The energy denominator $\epsilon_\nu - \epsilon_{\text{ref}}$ in a Møller-Plesset-type partitioning is simply given by the unperturbed harmonic-oscillator excitation energy of the basis state $| \Phi_{\nu} \rangle$ (see Ref. [@Roth09] for details). Imposing an importance threshold $\kappa_{\min}$, we construct an importance truncated model space including all basis states with importance measure $|\kappa_{\nu}| \geq \kappa_{\min}$. 
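To make the selection step concrete, the following schematic Python sketch evaluates the importance measure for a set of candidate basis states and applies the threshold. It assumes a small, dense toy Hamiltonian matrix and explicitly given reference amplitudes; the variable names are ours and not taken from any IT-NCSM implementation, and the second-order quantity accumulated for the discarded states anticipates the energy correction introduced below.

```python
import numpy as np

def select_important_states(H, c_ref, ref_idx, cand_idx, eps_exc,
                            kappa_min=1e-5):
    """First-order importance selection for one reference state.

    H        : (D, D) dense Hamiltonian matrix in the enlarged basis
    c_ref    : reference amplitudes C_mu^(ref) on the indices ref_idx
    cand_idx : indices of candidate states |Phi_nu> outside the reference space
    eps_exc  : unperturbed HO excitation energies eps_nu - eps_ref (positive)
    """
    cand_idx = np.asarray(cand_idx)
    # <Phi_nu | H | Psi_ref> for all candidate states at once
    coupling = H[np.ix_(cand_idx, ref_idx)] @ c_ref
    kappa = -coupling / eps_exc            # first-order amplitudes kappa_nu
    keep = np.abs(kappa) >= kappa_min      # importance truncation

    # Second-order energy estimate of the discarded states, summed up to
    # provide the correction used in the kappa_min -> 0 extrapolation.
    xi = -(np.abs(coupling) ** 2) / eps_exc
    delta_excluded = xi[~keep].sum()
    return cand_idx[keep], kappa[keep], delta_excluded
```

In practice the candidate space is generated by at most two-particle-two-hole excitations of the reference states and the Hamiltonian is never stored as a dense matrix; the sketch only illustrates the thresholding logic.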
Since the importance measure is zero for all basis states that differ from all of the states in $\mathcal{M}_{\text{ref}}$ by more than a two-particle-two-hole excitation, we have to embed the construction of the importance truncated space into an iterative update cycle. After constructing the importance truncated space and solving the eigenvalue problem in that space, we obtain an improved approximation for the target state that defines a reference state for the next iteration. In order to accelerate the evaluation of the importance measure , we typically do not use the complete eigenstate as new reference state, but project it onto a reference space spanned by the basis states with $|C_{\nu}|\geq C_{\min}$, where $C_{\nu}$ are the coefficients resulting from the solution of the eigenvalue problem. The second threshold $C_{\min}$ will be chosen sufficiently small so as not to affect the results for a given $\kappa_{\min}$ threshold. Simple iterative update schemes can be devised for any type of full model spaces, as discussed in Refs. [@Roth09; @RoGo09]. Specifically for the $N_{\max}\hbar\Omega$ space of the NCSM, however, there is an efficient sequential update scheme leading to the IT-NCSM(seq) approach. It is based on the fact that all states of an $(N_{\max}+2)\hbar\Omega$ space can be generated from the basis states of an $N_{\max}\hbar\Omega$ space using two-particle-two-hole excitations at most. Thus a single importance update starting from a reference state in an $N_{\max}\hbar\Omega$ space gives access to all relevant states in an $(N_{\max}+2)\hbar\Omega$ space. Making use of this property, in the IT-NCSM(seq) we start with a full NCSM calculation in, e.g., $2\hbar\Omega$ and use this eigenstate after applying the $C_{\min}$ threshold as reference state for constructing the importance truncated $4\hbar\Omega$ space. After solving the eigenvalue problem for this importance truncated $4\hbar\Omega$ space we use the resulting eigenstate as reference state to construct the $6\hbar\Omega$ space, and so on. Thus only one importance update is required for each value of $N_{\max}$, which makes this scheme very efficient computationally. Moreover, in the limit of vanishing thresholds, $(\kappa_{\min},C_{\min})\to0$, this scheme recovers the full $N_{\max}\hbar\Omega$ space at each step of the sequence, i.e., the IT-NCSM(seq) would recover the full NCSM result. Based on this limiting property, we can obtain a numerical approximation to the full NCSM result by extrapolating the IT-NCSM(seq) observables obtained for a set of different importance thresholds $\kappa_{\min}$ (and in principle also $C_{\min}$) to $\kappa_{\min}\to0$. Through this extrapolation, the contribution of discarded basis states, i.e. those with importance measures $|\kappa_{\nu}|$ below the smallest threshold considered, is effectively recovered. Because the control parameter $\kappa_{\min}$ is tied to the physical structure of the eigenstate, we observe a smooth threshold dependence for all observables, which allows for a robust threshold extrapolation. In the case of the energy we can improve the quality of the extrapolation further by considering a perturbative second-order estimate for the energy of the excluded basis states. 
While setting up the importance truncated space, all second-order energy contributions $$\label{eq:itncsm_importancemeasureenergy} \xi_{\nu} = -\frac{| \langle \Phi_\nu | H | \Psi_{\text{ref}} \rangle |^2}{\epsilon_\nu - \epsilon_{\text{ref}}} \;$$ for the discarded states with $|\kappa_{\nu}| < \kappa_{\min}$ are summed up to provide a correction $\Delta_{\text{excl}}(\kappa_{\min})$ to the energy eigenvalue. By construction this correction goes to zero in the limit $\kappa_{\min}\to0$. We use this additional information for a constrained simultaneous extrapolation of the energy to vanishing threshold with and without perturbative correction for the excluded states as described in detail in Ref. [@Roth09]. The whole concept can be generalized to the description of excited states. For the present application in connection with the NCSM/RGM, we aim at an importance truncated model space that is equally well suited for the description of the lowest $M$ eigenstates of the Hamiltonian for given parity and total angular momentum projection. Instead of using a single reference state, we employ different reference states $| \Psi_{\text{ref}}^{(m)} \rangle$, with $m=1,...,M$, for each of the $M$ target states. For each reference state we define a separate importance measure $\kappa_{\nu}^{(m)}$ following Eq. (\[eq:itncsm\_importancemeasure\]). A basis state $|\Phi_{\nu} \rangle$ is included in the importance truncated space if at least one of the importance measures $|\kappa_{\nu}^{(m)}|$ is above the threshold $\kappa_{\min}$, i.e., if it is relevant for the description of at least one of the $M$ target states it will be included. Because the different eigenstates are typically dominated by different basis states, the dimension of the importance truncated space grows linearly with $M$. In the IT-NCSM(seq) scheme we start with a full NCSM calculation in $2\hbar\Omega$ and use the lowest $M$ eigenstates as initial reference states $| \Psi_{\text{ref}}^{(m)} \rangle$. Based on the corresponding importance measures $\kappa_{\nu}^{(m)}$ the importance truncated $4\hbar\Omega$ space is constructed and the lowest $M$ eigenvectors in this space serve as new reference states (after application of the $C_{\min}$ threshold) for the construction of the $6\hbar\Omega$ space, and so on. From a sequence of IT-NCSM(seq) calculations we obtain a set of $M$ eigenvectors for each value of $N_{\max}$ which can be used to evaluate other observables. By default we compute the expectation values of $\vec{J}^2$ and $\vec{T}^2$ as well as the expectation values of $H_{\text{int}}$ and $H_{\text{cm}}$. Indeed, since we use an importance truncated space in the $m$-scheme without explicit angular momentum projection, the eigenstates are not guaranteed to have good angular momentum and isospin. We therefore monitor the expectation values of $\vec{J}^2$ and $\vec{T}^2$ and find values which typically differ by less than $10^{-3}$ from the exact quantum numbers. As in the full NCSM we separate spurious center-of-mass (CM) excitations from the physical spectrum by adding a Lawson term $\beta H_{\text{cm}}$ to the translationally invariant intrinsic Hamiltonian $H_{\text{int}}$ (with the typical choice $\beta=10$). The use of this modified Hamiltonian provides at the same time a diagnostic for potential CM contaminations of the intrinsic states induced by the importance truncation. As discussed in Refs.
[@RoGo09b; @Roth09], the independence of the intrinsic energies $\langle H_{\text{int}} \rangle$ on $\beta$ and the smallness of $\langle H_{\text{cm}} \rangle$ demonstrate that the IT-NCSM(seq) solutions are free of CM contaminations. Eventually, the wave functions obtained in the IT-NCSM(seq) together with the threshold extrapolated intrinsic energies form the input for the NCSM/RGM calculations discussed in the following. Nucleon-$^4$He scattering {#n4He} ========================= The purpose of the nucleon-$^4$He calculations presented in this paper is two-fold. First, we want to check the predictive power of the SRG evolved chiral interaction in the $A=5$ system, where a lot of experimental scattering data exist and where our calculations can be easily converged with respect to the size of the basis expansion. Second, we want to benchmark the importance truncation scheme with the full-space calculations all the way up to very large $N_{\rm max}\hbar\Omega$ spaces. The first [*ab initio*]{} $A=5$ scattering calculations were reported in Ref. [@GFMC_nHe4]. The $n$-$\alpha$ low-lying $J^\pi=3/2^-$ and $1/2^-$ $P$-wave resonances as well as the $1/2^+$ $S$-wave non-resonant scattering below 5 MeV c.m. energy were obtained using the AV18 $NN$ potential with and without the three-nucleon force, chosen to be either the Urbana IX or the Illinois-2 model. The results of these Green’s function Monte Carlo (GFMC) calculations revealed sensitivity to the inter-nucleon interaction, and in particular to the strength of the spin-orbit force. Soon after, the development of the [*ab initio*]{} NCSM/RGM approach allowed us to calculate both $n$- and (for the first time) $p$-$\alpha$ scattering phase shifts for energies up to the inelastic threshold [@NCSMRGM; @NCSMRGM_PRC], using several realistic $NN$ potentials, including the chiral N$^3$LO [@N3LO], the $V_{{\rm low}k}$ [@BoKu03] and the CD-Bonn [@CD-Bonn2000] $NN$ potentials. Nucleon-$\alpha$ scattering provides one of the best-case scenarios for the application of the NCSM/RGM approach. This process is characterized by a single open channel up to the $d+^3$H threshold, which is fairly high in energy. In addition, the low-lying resonances of the $^4$He nucleus are narrow enough to be reasonably reproduced diagonalizing the four-body Hamiltonian in the NCSM model space. In the present work we include the first excited state of $^4$He, the $0^+ 0$ state, as a closed channel in our NCSM/RGM basis space. Convergence with the size of the HO basis expansion {#conv} --------------------------------------------------- We performed extensive nucleon-$^4$He calculations with the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$ to check convergence of our NCSM/RGM calculations. In Fig. \[fig:nHe4\_phaseconv\], we present $n$-$^4$He phase shift results for the $S$- and $P$-waves obtained using an HO basis expansion up to $N_{\rm max}=17$ for the localized parts of the NCSM/RGM integration kernels and for the $^4$He ground- and the first-excited $0^+ 0$ wave functions (since these states have positive parity, the $N_{\rm max}-1$ expansion is in fact used for the $^4$He eigenstates). As seen in the figure, the phase-shift convergence is excellent. In particular, the $N_{\rm max}=17$ and the $N_{\rm max}=15$ curves lie on top of each other. The convergence rate demonstrated here is quite similar to that obtained using the $V_{{\rm low} k}$ $NN$ potential in our earlier study (compare the present Fig. \[fig:nHe4\_phaseconv\] to the left panel of Fig.
13 in Ref. [@NCSMRGM_PRC]). ![(Color online) Dependence of the $n$-$^4$He phase shifts on the size of the HO basis expansion of the $^4$He wave functions and the localized parts of the integration kernels. The $^4$He ground state and the first $0^+ 0$ excited state were included. The SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$ and the HO frequency $\hbar\Omega=20$ MeV were used.[]{data-label="fig:nHe4_phaseconv"}](fig_conv_nHe4_srg-n3lo0600_20.eps){width="1.0\columnwidth"} Benchmark of Importance-Truncated calculations {#IT_bench} ---------------------------------------------- As shown in the previous subsection, for the $A=5$ system we are able to reach complete convergence with $^4$He wave functions obtained within full, non-truncated, NCSM calculations. We can, therefore, test the performance of the IT-NCSM scheme in this system all the way up to very large $N_{\rm max}$ values and see how well the IT-NCSM scheme reproduces the completely converged results. It should be noted that for the heavier $A=8,13$ and $A=17$ systems investigated later, full, non-truncated NCSM calculations for the $A=7$ ($A=12,16$) target nuclei are feasible only up to $N_{\rm max}=10$ ($N_{\rm max}=8$). It is, therefore, desirable and important to benchmark the IT-NCSM calculations in a lighter system like $A=5$ in $N_{\rm max}>10$ calculations. In Fig. \[fig:nHe4\_phase\_full\_IT\], we compare $n$-$^4$He phase shifts calculated within the NCSM/RGM with $^4$He wave functions obtained in a full $N_{\rm max}=16$ NCSM calculation and those obtained using $^4$He wave functions obtained within an $N_{\rm max}=16$ IT-NCSM calculation. The agreement of the two sets of phase shifts is excellent. It should be noted that the dimension of the full $N_{\rm max}=16$ $^4$He NCSM basis is 6344119. The dimension of the IT-NCSM basis used here to calculate the $^4$He wave functions was just 992578, more than a factor of six smaller. Truncation parameters $\kappa_{\min}=10^{-5}$ and $C_{\min}=2\times 10^{-4}$ were used. The ground state energy from the full NCSM calculation is $-28.224$ MeV. The $\kappa_{\min} \rightarrow 0$ extrapolated ground state energy from the IT-NCSM calculation is $-28.217(5)$ MeV with a difference from the full result less than 10 keV. The excited $0^+ 0$ energy obtained in the full NCSM calculation was 21.58 MeV. The corresponding extrapolated IT-NCSM result was 21.4(1) MeV. The slightly lower accuracy of the excited state reproduction in the IT-NCSM calculation is manifested in a very small deviation of the $S$-wave phase shift at energies above 12 MeV (less than 1 degree at 16 MeV). It should be noted that the excited $0^+ 0$ state is not bound. Consequently, it is challenging to reproduce the excited state as well as the ground state in an importance-truncated calculation. It should also be pointed out that unlike for the energies, no phase shift extrapolation was performed. The needed one- and two-body densities were calculated from the wave functions obtained in the IT-NCSM calculation with the truncation parameters described above. The excellent agreement of the full and the IT-NCSM phase shifts demonstrates that no extrapolation was actually necessary. Obviously, we can check the dependence of observables like phase shifts on the $\kappa_{\min}$ and $C_{\min}$ and perform an extrapolation to vanishing values of these parameters if needed. ![(Color online) Calculated $n$-$^4$He $S$- and $P$-wave phase shifts.
Results obtained with $^4$He wave functions from full NCSM (solid lines) and IT-NCSM (dashed lines) calculations are compared. The SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$, the $N_{\rm max}=17$ basis space and the HO frequency $\hbar\Omega=20$ MeV were used. See text for details on the IT-NCSM calculation.[]{data-label="fig:nHe4_phase_full_IT"}](phase_shift_nHe4_srg-n3lo0600_20_17_full_IT.eps){width="1.0\columnwidth"} Comparison with experimental data {#n-He4_vs_exper} --------------------------------- ![(Color online) Calculated $n-^4$He (left panels) and $p-^4$He (right panels) compared to the R-matrix analysis of experimental data [@HalePriv]. The NCSM/RGM calculations that included the $^4$He ground state and the $0^+ 0$ excited state were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. The HO frequency $\hbar\Omega=20$ MeV and $N_{\rm max}=17$ basis space were employed.[]{data-label="fig:N4He_phase"}](phase-n4He-srg-n3lo.eps){width="1.0\columnwidth"} Our calculated $n$-$^4$He and $p$-$^4$He phase shifts are compared to those obtained from an $R$-matrix analysis of $N-^4$He experimental data [@HalePriv] in Fig. \[fig:N4He\_phase\]. The agreement is quite reasonable for the $S$-wave, $D$-wave and $^2P_{1/2}$-wave. The $^2P_{3/2}$ resonance is positioned at higher energy in the calculation and the corresponding phase shifts are underestimated with respect to the $R$-matrix results, although the disagreement becomes less and less pronounced starting at about 8 MeV. While the inclusion of negative-parity excited states of the $\alpha-$particle would likely increase somewhat the $^2P_{3/2}$ phase shifts [@NCSMRGM; @NCSMRGM_PRC], the observed difference is largely due to a reduction in spin-orbit strength caused by the neglect of the three-nucleon interaction in our calculations. The importance of the three-nucleon force in reproducing the $R$-matrix $^2P_{3/2}$ phase shifts was demonstrated in the GFMC $n$-$^4$He calculations of Ref. [@GFMC_nHe4]. Overall, the present results obtained with the SRG-N$^3$LO $NN$ interaction agree better with experiment than our earlier calculations [@NCSMRGM; @NCSMRGM_PRC] with the $V_{{\rm low} k}$, N$^3$LO and CD-Bonn $NN$ potentials. The only exception is the $S$-wave phase shift which is best described using the CD-Bonn $NN$ potential. The larger spin-orbit strength of the employed SRG-N$^3$LO potential with respect to N$^3$LO itself is the likely responsible for the improved agreement. As our calculated phase shifts agree with the experimental ones reasonably well above the center-of-mass energy of 8 MeV, we expect a similar behavior for cross section and analysing power in that energy range. This is indeed the case as shown in Fig. \[fig:N4He\_17MeVxsay\], where the calculated differential cross section and analyzing power are compared to experimental data from Karlsruhe [@Karlsruhe] with polarized neutrons of $E_n=$17 MeV laboratory energy. For the cross section experimental data see also references in [@Karlsruhe]. The cross section is reproduced remarkably well at all angles and the analysing power is in reasonable agreement with the data, particularly at backward angles. The same quality of agreement can be found for all energies far from the low-lying resonances, as shown in the right panel of Fig. \[fig:N4He\_17MeVxsay\] for the analysing power at $E_n=15$ MeV and 19 MeV. 
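For readers who wish to reconstruct such observables from tabulated phase shifts, the unpolarized c.m. differential cross section for a nucleon scattering off a $0^+$ target follows from the standard spin-1/2-on-spin-0 partial-wave amplitudes. The short Python sketch below is our own illustration (it neglects Coulomb and is therefore directly applicable only to the $n$-$^4$He case); the analyzing power additionally requires the interference of the two amplitudes with a convention-dependent sign, which we do not reproduce here.

```python
import numpy as np
from scipy.special import lpmn

def neutron_dcs(theta, k, delta_plus, delta_minus):
    """Unpolarized c.m. differential cross section from phase shifts.

    delta_plus[l], delta_minus[l] : phase shifts (radians) for j = l + 1/2
        and j = l - 1/2 (delta_minus[0] is unused); k is the c.m. wave
        number in fm^-1. Coulomb is neglected (neutron projectile).
    """
    theta = np.atleast_1d(theta)
    lmax = len(delta_plus) - 1
    A = np.zeros(theta.shape, dtype=complex)   # non-spin-flip amplitude
    B = np.zeros(theta.shape, dtype=complex)   # spin-flip amplitude
    for i, th in enumerate(theta):
        # associated Legendre functions P_l^m(cos th) for m = 0, 1
        P, _ = lpmn(1, lmax, np.cos(th))
        for l in range(lmax + 1):
            tp = np.exp(1j * delta_plus[l]) * np.sin(delta_plus[l])
            tm = (np.exp(1j * delta_minus[l]) * np.sin(delta_minus[l])
                  if l > 0 else 0.0)
            A[i] += ((l + 1) * tp + l * tm) * P[0, l]
            B[i] += (tp - tm) * P[1, l]
    A /= k
    B /= k
    # dsigma/dOmega in fm^2 if k is given in fm^-1 (1 fm^2 = 10 mb)
    return np.abs(A) ** 2 + np.abs(B) ** 2
```

Without a spin-orbit splitting ($\delta^+_\ell=\delta^-_\ell$) the spin-flip amplitude vanishes and the routine reduces to the familiar single-channel partial-wave cross section, which provides a simple consistency check.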
A better display of the dependence of our calculated cross section and analysing power upon the incident nucleon energy is provided by Fig. \[fig:p4He\_1\], where the $p-^4$He results for these observables are compared to the data of Ref. [@Schwandt] at the proton laboratory energies of $E_p = 5.95$, 7.89, 9.89, and 11.99 MeV. As expected from the behavior of the phase shifts described earlier, for energies relatively close to the resonance region we find a rather poor agreement with experiment, particularly noticeable in the analysing power overall and in the cross section at backward angles. However, starting at about 10 MeV, the agreement improves substantially and data are once again reproduced in a quite satisfactory way at higher energies, as shown in Fig. \[fig:p4He\_2\], where the NCSM/RGM $p-^4$He results are compared to various experimental data sets [@Schwandt; @Brokman; @Dodder; @Hardekopf] in the energy range $E_p \sim 12-17$ MeV. Neutron-$^{7}$Li and proton-$^7$Be scattering {#n7Li} ============================================= The $^7$Be($p$,$\gamma$)$^8$B capture reaction plays a very important role in nuclear astrophysics as it serves as an input for understanding the solar neutrino flux [@Adelberger]. While the experimental determination of the neutrino flux from $^8$B has an accuracy of about 9% [@SNO], the theoretical predictions have uncertainties of the order of 20% [@CTK03; @BP04]. The theoretical neutrino flux depends on the $^7$Be($p$,$\gamma$)$^8$B S-factor. Significant experimental and theoretical effort has been devoted to studying this reaction. The S-factor extrapolation to astrophysically relevant energies depends among other things on the scattering lengths for proton scattering on $^7$Be. Experimental determination of these lengths was performed recently [@Be7_scatl] with a precision of the order of 30%. The proton-$^7$Be elastic scattering was also investigated in Ref. [@Rogachev01]. To benchmark the theoretical calculations used for S-factor extrapolations, an investigation of the mirror capture reaction, $^7$Li($n$,$\gamma$)$^8$Li, as well as the $n$+$^7$Li scattering is important. For example, the $n$+$^7$Li scattering lengths are known with a higher accuracy [@Li7_scatl]. ![(Color online) $^7$Li ground-state and the $1/2^-$ and $7/2^-$ excited state energy dependence on the model-space size $N_{\rm max}$, obtained within the importance-truncated NCSM (solid lines), using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. The HO frequency $\hbar\Omega=20$ MeV was employed. The full-space NCSM results are shown by dashed lines.[]{data-label="fig:Li7_ITNCSM"}](Li7_317_IT.eps){width="1.0\columnwidth"} The first applications of the NCSM approach to the description of the $^7$Be($p$,$\gamma$)$^8$B capture reaction [@NBC06] required a phenomenological correction of the asymptotic behavior of the overlap functions and, further, the scattering $p$+$^7$Be wave function was calculated from a phenomenological potential model. The present investigation within the [*ab initio*]{} NCSM/RGM approach paves the way for a complete first-principles calculation of this capture reaction. Here, we limit ourselves to scattering calculations and postpone the capture reaction calculations to a forthcoming paper. Our current limit on the unrestricted NCSM calculations for $^7$Li and $^7$Be is $N_{\rm max}=10$. To improve the convergence of our scattering calculations, we utilize wave functions obtained within the IT-NCSM.
In that scheme, we are able to reach $N_{\rm max}=18$ model spaces and calculate both ground as well as low-lying excited states. This is demonstrated in Fig. \[fig:Li7\_ITNCSM\]. With the SRG-N$^3$LO $NN$ potential with $\Lambda=2.02$ fm$^{-1}$ employed in the present study we reach convergence already around $N_{\rm max}=12-14$. Also, as seen in the figure, the aggreement between the unrestricted NCSM and the IT-NCSM is perfect up to the highest accessible unrestricted space, $N_{\rm max}=10$. $n$-$^{7}$Li ------------ ![(Color online) $P$-wave diagonal phase shifts of the $n$-$^7$Li elastic scattering (top panel), elastic $^7$Li($n$,$n$)$^7$Li cross section (middle panel), and inelastic $^7$Li($n$,$n'$)$^7$Li(1/2$^-$) cross section (bottom panel). The NCSM/RGM calculation that included the $^7$Li ground state and the $1/2^-$ and $7/2^-$ excited states were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. Wave functions from IT-NCSM calculations in the $N_{\rm max}=12$ basis and the HO frequency of $\hbar\Omega=20$ MeV were employed. Experimental data are from Ref. [@FLR55]. []{data-label="fig:Li7_IT_317_Pwaves"}](phase_shift_nLi7_srg-n3lo0600_20_15_IT_317_Pwaves_fig.eps){width="0.9\columnwidth"} ![(Color online) $P$-wave diagonal phase shifts of the $n$-$^7$Li elastic scattering (top panel), elastic $^7$Li($n$,$n$)$^7$Li cross section (middle panel), and inelastic $^7$Li($n$,$n'$)$^7$Li(1/2$^-$) cross section (bottom panel). The NCSM/RGM calculation that included the $^7$Li ground state and the $1/2^-$ and $7/2^-$ excited states were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. Wave functions from IT-NCSM calculations in the $N_{\rm max}=12$ basis and the HO frequency of $\hbar\Omega=20$ MeV were employed. Experimental data are from Ref. [@FLR55]. []{data-label="fig:Li7_IT_317_Pwaves"}](sigma_reac_nLi7_srg-n3lo0600_20_15_IT_317_2cf_010_ici_gs-gs.eps){width="0.9\columnwidth"} ![(Color online) $P$-wave diagonal phase shifts of the $n$-$^7$Li elastic scattering (top panel), elastic $^7$Li($n$,$n$)$^7$Li cross section (middle panel), and inelastic $^7$Li($n$,$n'$)$^7$Li(1/2$^-$) cross section (bottom panel). The NCSM/RGM calculation that included the $^7$Li ground state and the $1/2^-$ and $7/2^-$ excited states were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. Wave functions from IT-NCSM calculations in the $N_{\rm max}=12$ basis and the HO frequency of $\hbar\Omega=20$ MeV were employed. Experimental data are from Ref. [@FLR55]. []{data-label="fig:Li7_IT_317_Pwaves"}](sigma_reac_nLi7_srg-n3lo0600_20_15_IT_317_2cf_010_ici_gs-1m.eps){width="0.9\columnwidth"} The NCSM/RGM coupled-channel calculations performed for the $A=8$ system include the $^7$Li ($^7$Be) ground state, the first excited $1/2^-$ state as well as the second excited $7/2^-$ state. It is essential to include the $7/2^-$ state in order to reproduce the low-lying $3^+$ resonance in $^8$Li and $^8$B. Using these three states, we are able to reach model spaces up to $N_{\rm max}=12$, which is sufficient concerning the HO basis expansion convergence as can be judged from Fig. \[fig:Li7\_ITNCSM\]. The coupled channel calculation described above gives two bound states for the $n$-$^7$Li system, a $2^+$ corresponding to the experimentally observed $^8$Li ground state, bound by 2.03 MeV [@TUNL_A8], and a $1^+$ corresponding to the $^8$Li first excited state at $E_x=0.98$ MeV, bound by 1.05 MeV [@TUNL_A8]. 
The calculated states are bound by 1.16 MeV and 0.17 MeV, respectively, i.e. less than in experiment. This is in part due to the fact that higher excited states of $^7$Li were omitted. In Fig. \[fig:Li7\_IT\_317\_Pwaves\], we present our results for the diagonal $P$-wave phase shifts of the $n$+$^7$Li elastic scattering as well as the elastic $^7$Li($n$,$n$)$^7$Li and inelastic $^7$Li($n$,$n'$)$^7$Li(1/2$^-$) cross sections. At low energies, we can identify four resonances, two of which can be associated with the experimentally known $^8$Li states: $3^+$ at $E_x=2.255$ MeV and $1^+$ at $E_x=3.21$ MeV [@TUNL_A8]. The other two resonances, $0^+$ and $2^+$, are not present in the $^8$Li evaluation of Ref. [@TUNL_A8]. They do appear in many theoretical calculations including the GFMC [@GFMC], NCSM [@NBC06] and recoil-corrected continuum shell model (RCCSM) [@Halderson06]. The $0^+$ resonance also appears in the GCM calculations of Ref. [@Desc94]. Contributions of different resonances to the cross sections can be deduced from Fig. \[fig:Li7\_IT\_317\_Pwaves\]. The elastic cross section is dominated by the $3^+$ resonance with some contributions from the $2^+$ resonance at higher energy. The inelastic cross section shows a peak just above the threshold due to the $0^+$ resonance and also a contribution from the $1^+$ resonance. The appearance of a $0^+$ peak just above threshold of the $^7$Li($n$,$n'$)$^7$Li(1/2$^-$) reaction was also discussed in Ref. [@Halderson06] (see Fig. 10 in that paper). The data of Ref. [@FLR55] seem to rule out a $0^+$ state so close to the threshold. It is known, however, that the position of the $0^+$ state is sensitive to the strength of the spin-orbit interaction [@GFMC; @NBC06; @Halderson06]. The three-nucleon interaction, which would increase the strength of the spin-orbit force, was not included in our present calculations. Consequently, our predicted $0^+$ state energy is likely underestimated. We note that no fit to the experimental threshold was done in the present NCSM/RGM calculations. Still, as seen in the bottom panel of Fig. \[fig:Li7\_IT\_317\_Pwaves\], the calculated inelastic cross section is very close to the experimental data just above the threshold. $p$-$^{7}$Be ------------ ![(Color online) $P$-wave diagonal phase shifts of the $p$+$^7$Be elastic scattering (top and middle panel) and inelastic $^7$Be($p$,$p'$)$^7$Be(1/2$^-$) cross section (bottom panel). The NCSM/RGM calculation that included the $^7$Be ground state and the $1/2^-$ and $7/2^-$ excited states were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. Wave functions from IT-NCSM calculations in the $N_{\rm max}=12$ basis and the HO frequency of $\hbar\Omega=20$ MeV were employed. In the middle panel, the full-space NCSM (solid lines) and the IT-NCSM (dashed lines) results in the $N_{\rm max}=10$ basis are compared.[]{data-label="fig:Be7_317_Pwaves"}](phase_shift_pBe7_srg-n3lo2.0205_20_15_317_IT_Pwaves_fig.eps){width="0.9\columnwidth"} ![(Color online) $P$-wave diagonal phase shifts of the $p$+$^7$Be elastic scattering (top and middle panel) and inelastic $^7$Be($p$,$p'$)$^7$Be(1/2$^-$) cross section (bottom panel). The NCSM/RGM calculation that included the $^7$Be ground state and the $1/2^-$ and $7/2^-$ excited states were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. Wave functions from IT-NCSM calculations in the $N_{\rm max}=12$ basis and the HO frequency of $\hbar\Omega=20$ MeV were employed.
In the middle panel, the full-space NCSM (solid lines) and the IT-NCSM (dashed lines) results in the $N_{\rm max}=10$ basis are compared.[]{data-label="fig:Be7_317_Pwaves"}](phase_shift_pBe7_srg-n3lo2.0205_20_13_317_Pwaves_full_IT_compare_fig.eps){width="0.9\columnwidth"} ![(Color online) $P$-wave diagonal phase shifts of the $p$+$^7$Be elastic scattering (top and middle panel) and inelastic $^7$Be($p$,$p'$)$^7$Be(1/2$^-$) cross section (bottom panel). The NCSM/RGM calculation that included the $^7$Be ground state and the $1/2^-$ and $7/2^-$ excited states were done using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.02 fm$^{-1}$. Wave functions from IT-NCSM calculations in the $N_{\rm max}=12$ basis and the HO frequency of $\hbar\Omega=20$ MeV were employed. In the middle panel, the full-space NCSM (solid lines) and the IT-NCSM (dashed lines) results in the $N_{\rm max}=10$ basis are compared.[]{data-label="fig:Be7_317_Pwaves"}](sigma_reac_pBe7_srg-n3lo2.0205_20_15_317_IT_gs-1m.eps){width="0.9\columnwidth"} In the mirror system, $p$-$^7$Be, we do not find a bound state in the same type of coupled-channel NCSM/RGM calculation as described above for $n$-$^7$Li. As seen in the top and the middle parts of Fig. \[fig:Be7\_317\_Pwaves\], the lowest $2^+$ resonance corresponding to the $^8$B ground state lies at about 200 keV above the threshold. In experiment, $^8$B is bound by 137 keV [@TUNL_A8]. Our calculated lowest $1^+$ resonance appears at about 1 MeV. It corresponds to the experimental $^8$B $1^+$ state at $E_x=0.77$ MeV (0.63 MeV above the $p$-$^7$Be threshold). This resonance dominates the inelastic cross section as seen in the bottom part of Fig. \[fig:Be7\_317\_Pwaves\]. The higher lying resonances follow similar patterns as those found in $n$-$^7$Li (Fig. \[fig:Li7\_IT\_317\_Pwaves\]). Again, we find $0^+$ and $2^+$ resonances not included in the recent $^8$B evaluation [@TUNL_A8]. We note that experimental efforts are now under way to find these resonances [@Rogachev01; @Greife07]. We further note that our calculated $1^+_2$ states in $^8$Li and $^8$B appear at a significantly higher energies than the corresponding $1^+_2$ states obtained within the microscopic cluster model in Ref. [@Csoto]. The middle panel of Fig. \[fig:Be7\_317\_Pwaves\] demonstrates once again the good accuracy of the importance truncated calculations for a high $N\hbar\Omega$, $N_{\rm max}=10$, model space. The IT calculation reduced the $^7$Be basis from 43.6 million to 11.9 million in the present case. ![(Color online) Elastic $^7$Be($p$,$p$)$^7$Be (top panel) and inelastic $^7$Be($p$,$p'$)$^7$Be(1/2$^-$) (bottom panel) differential cross section at $\Theta_{c.m.}=148^0$ calculated within the NCSM/RGM with SRG-N$^3$LO $NN$ potential with $\Lambda=2.02$ fm$^{-1}$.[]{data-label="fig:p_Be7_148"}](dsigma_dOmega_148_pBe7_srg-n3lo2.0205_20_15_317_IT_gs.eps){width="0.9\columnwidth"} ![(Color online) Elastic $^7$Be($p$,$p$)$^7$Be (top panel) and inelastic $^7$Be($p$,$p'$)$^7$Be(1/2$^-$) (bottom panel) differential cross section at $\Theta_{c.m.}=148^0$ calculated within the NCSM/RGM with SRG-N$^3$LO $NN$ potential with $\Lambda=2.02$ fm$^{-1}$.[]{data-label="fig:p_Be7_148"}](dsigma_dOmega_148_pBe7_srg-n3lo2.0205_20_15_317_IT_gs-1m.eps){width="0.9\columnwidth"} The elastic $p$-$^7$Be scattering was measured at $148^o$ and analyzed by the R-matrix approach [@Rogachev01]. Cross section calculations within the RCCSM at that angle were then published in Ref. [@Halderson04] and also in Ref. [@Halderson06]. 
Further, elastic and inelastic cross sections at this angle were analyzed within the time-dependent approach to the continuum shell model (TDCSM) [@Volya]. Our elastic and inelastic differential cross section results at $148^o$ are presented in Fig. \[fig:p\_Be7\_148\]. In the elastic cross section, the first $1^+$ state is visible and, beyond the minimum of the cross section, we can see the dominant peak due to the $3^+$ state. At higher energies, the $2^+$ state contributes as well. The inelastic cross section at $148^o$ has a similar shape to the reaction cross section shown in Fig. \[fig:Be7\_317\_Pwaves\]. The first $1^+$ state peak dominates at low energy with contributions from the $0^+$ and the second $1^+$ at higher energies. Our findings are in line with the RCCSM results. However, we remind the reader that there is no fitting in our calculations, all results being predictions based on the SRG-N$^3$LO $NN$ potential. Because of this, the positions of our calculated resonances, e.g., $1^+$ and $3^+$, do not exactly reproduce experiment. We do not include the experimental data in the figure as they would be shifted compared to the calculated peaks. There are at least two reasons why our predictions do not match the experimental resonances accurately. First, our nuclear Hamiltonian is incomplete, e.g. no three-nucleon interaction is included. Second, we omitted higher resonances of $^7$Li and $^7$Be due to numerical reasons. Most likely, the omitted resonances would produce some shifts in the calculated peaks. Both these points can and will be addressed in the future. Still, our current results contain the bulk of the physics behind the investigated scattering processes. $S$-wave scattering lengths of $n$-$^{7}$Li and $p$-$^{7}$Be ------------------------------------------------------------ ![(Color online) $S$-wave phase shifts of the $n$+$^7$Li (solid lines) and the $p$+$^7$Be (dashed lines) elastic scattering. The calculations as described in Figs. \[fig:Li7\_IT\_317\_Pwaves\] and \[fig:Be7\_317\_Pwaves\].[]{data-label="fig:Be7_Li7_317_Swaves"}](phase_shift_nLi7_pBe7_srg-n3lo0600_20_15_317_IT_S-waves_fig.eps){width="1.0\columnwidth"}

  ----------------- ------------------- ----------- ------------------- -----------
                     $n$-$^7$Li Calc.    Expt.       $p$-$^7$Be Calc.    Expt.
  $a_{01}$ \[fm\]    +1.23               +0.87(7)    -1.2                25(9)
  $a_{02}$ \[fm\]    -0.61               -3.63(5)    -10.2               -7(3)
  ----------------- ------------------- ----------- ------------------- -----------

  : The $n$-$^7$Li and the $p$-$^7$Be $S$-wave scattering lengths. Theoretical values correspond to calculations as described in Figs. \[fig:Li7\_IT\_317\_Pwaves\] and \[fig:Be7\_317\_Pwaves\]. Experimental values are from Refs. [@Be7_scatl; @Li7_scatl].[]{data-label="tab:S-wave_scatl"}

In Fig. \[fig:Be7\_Li7\_317\_Swaves\], we present our calculated $n$-$^7$Li and $p$-$^7$Be $S$-wave phase shifts. We do not find any evidence for a $2^-$ resonance advocated in Ref. [@Rogachev01] and discussed in Ref. [@Barker00]. The corresponding scattering lengths together with the experimental values are given in Table \[tab:S-wave\_scatl\]. With the exception of the $p$-$^7$Be $a_{01}$, which has a large experimental uncertainty, our calculated scattering lengths do agree with experimental data as to their signs; there are, however, differences in the absolute values. Again, as discussed above, the results presented here serve only as a first step towards the [*ab initio*]{} investigation of the $n$-$^7$Li and the $p$-$^7$Be reactions. Prospects for a realistic calculation of the $^7$Be($p$,$\gamma$)$^8$B capture are excellent.
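As an illustration of how scattering lengths such as those in Table \[tab:S-wave\_scatl\] can be extracted, the sketch below fits the effective-range expansion $k\cot\delta_0=-1/a+r_0k^2/2$ to low-energy $S$-wave phase shifts. It is our own minimal example for the neutral $n$-$^7$Li case (the $p$-$^7$Be channel would instead require the Coulomb-modified effective-range expansion); the function name and its inputs are placeholders, not part of the NCSM/RGM codes.

```python
import numpy as np

def scattering_length_from_phase_shifts(E_cm_mev, delta_deg, mu_mev):
    """Fit a and r0 from S-wave phase shifts via k*cot(delta) = -1/a + r0*k^2/2.

    E_cm_mev  : c.m. energies (MeV), low enough for the expansion to hold
    delta_deg : S-wave phase shifts (degrees) in one channel-spin channel
    mu_mev    : reduced mass in MeV/c^2 (roughly 820 MeV for n + 7Li)
    """
    hbarc = 197.327                                    # MeV fm
    k = np.sqrt(2.0 * mu_mev * np.asarray(E_cm_mev)) / hbarc   # fm^-1
    y = k / np.tan(np.radians(delta_deg))              # k cot(delta)
    # Linear fit of k*cot(delta) against k^2/2
    slope, intercept = np.polyfit(0.5 * k ** 2, y, 1)
    a = -1.0 / intercept    # scattering length (fm)
    r0 = slope              # effective range (fm)
    return a, r0
```

The sign convention is such that a weakly bound (virtual) $S$-wave state corresponds to a large positive (negative) scattering length, which is why the signs in Table \[tab:S-wave\_scatl\] carry physical information even when the magnitudes differ from experiment.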
Here we found the $^8$B unbound by only 200 keV. It is quite possible that $^8$B will become bound (with the $NN$ potential employed here: SRG-N$^3$LO with $\Lambda=2.02$ fm$^{-1}$) by including more excited states of $^7$Be in the coupled-channel NCSM/RGM calculations. The effect of higher excited states of $^7$Be can be, in fact, most efficiently included by coupling the presently used NCSM/RGM basis with the $^8$B NCSM eigenstates as outlined in Ref. [@NCSM_review]. Even if $^8$B does not become bound or, most likely, the threshold energy does not agree with experiment, we have the possibility to explore a variation of the SRG $NN$ potential evolution parameter $\Lambda$ and tune this parameter to fit the experimental threshold. We note that for any $\Lambda$ the SRG-evolved $NN$ potential will describe all two-nucleon properties as accurately as the original starting $NN$ potential, here the chiral N$^3$LO potential of Ref. [@N3LO]. It should be noted that by adding the three-nucleon interaction, omitted in the present calculations for computational reasons, the need for a fine-tuning should be significantly reduced, i.e. the results should become $\Lambda$ independent. Nucleon-$^{12}$C scattering {#n12C} =========================== For nucleon scattering calculations on $^{12}$C or heavier targets within the NCSM/RGM, the use of the importance truncation becomes essential. For $^{12}$C, the full-space NCSM calculations are currently limited to $N_{\rm max}=8$ (although successful runs were already performed for $N_{\rm max}=10$ on the biggest supercomputers with the latest version of the code MFD [@MFD]). This is insufficient for reaching or approaching convergence of the $^{12}$C NCSM calculations, as seen from Fig. \[fig:C12\_ITNCSM\], and even more so of the NCSM/RGM scattering calculations. The importance-truncated calculations, on the other hand, are feasible up to $N_{\rm max}=18$, where convergence is reached for both the ground state as well as excited states. Our $^{12}$C calculations are performed with the SRG-N$^3$LO $NN$ potential with the evolution parameter $\Lambda=2.66$ fm$^{-1}$, a higher value (i.e. shorter evolution, less soft) than that used for the lighter nuclei. The use of a small $\Lambda$ results in large overbinding of heavier nuclei and a significant underestimation of their radii. As seen in Fig. \[fig:C12\_ITNCSM\], our converged $^{12}$C binding energy is about 84.5(8) MeV, smaller than the experimental value of 92 MeV and, further, the agreement of the full-space and importance-truncated results is perfect all the way up to $N_{\rm max}=8$.
We note that we also performed a phase shift comparison of the full-space and the importance-truncated calculations up to $N_{\rm max}=6$ and found a similarly perfect agreement as presented in Fig. \[fig:nHe4\_phase\_full\_IT\] for $n$-$^4$He.

![(Color online) The $p$-$^{12}$C eigenphase shifts calculated within the NCSM/RGM using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$ and the HO frequency $\hbar\Omega=24$ MeV. Full lines (dotted lines) correspond to results obtained in the $N_{\rm max}=16$ ($N_{\rm max}=14$) model space. The ground state and the first excited $2^+$ state of $^{12}$C were included. The $^{12}$C wave functions were obtained within the IT NCSM.[]{data-label="fig:pC12"}](eigenphase_shift_pC12_srg-n3lo0200_24_17_19_02_IT_extrp_fig.eps){width="1.0\columnwidth"}

In the present $p$-$^{12}$C calculations, we found a single bound state, $1/2^-$ at -2.98 MeV, corresponding to the $^{13}$N ground state, bound experimentally by 1.94 MeV [@AS91]. The lowest resonance in our calculation is $3/2^-$, barely visible at 0.25 MeV above threshold. In experiment, this resonance is at 1.56 MeV. Our calculated $1/2^+$ resonance appears at about 1.5 MeV above threshold (in experiment at 0.42 MeV above threshold) and the $5/2^+$ resonance at about 4.9 MeV (in experiment at 2.61 MeV).

$n$-$^{12}$C
------------

In the mirror system, $n$-$^{12}$C, our NCSM/RGM calculations produce three bound states: $1/2^-$ at -5.34 MeV, corresponding to the $^{13}$C ground state, experimentally bound by 4.95 MeV with respect to the $n$-$^{12}$C threshold; $3/2^-$, bound by 2.23 MeV (experimentally bound by 1.26 MeV); and $1/2^+$, bound by just 0.03 MeV (experimentally bound by 1.86 MeV). In experiment, there is also a $5/2^+$ state bound by 1.09 MeV. Our present NCSM/RGM calculations including the lowest $0^+$ and the lowest $2^+$ $^{12}$C states do not produce any bound $5/2^+$ state.

![(Color online) The $n$-$^{12}$C phase shifts calculated within the NCSM/RGM using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$. The HO frequency $\hbar\Omega=24$ MeV and the model-space size of $N_{\rm max}=16$ were used. The ground state and the first excited $2^+$ state of $^{12}$C were included. The $^{12}$C wave functions were obtained within the IT NCSM.[]{data-label="fig:nC12"}](phase_shift_nC12_srg-n3lo0200_24_19_02_IT_extrp_fig.eps){width="1.0\columnwidth"}

Our low-energy $n$-$^{12}$C diagonal phase shifts are shown in Fig. \[fig:nC12\]. The $5/2^+$ resonance is found at 2.8 MeV (experimentally at 1.92 MeV with respect to the $n$-$^{12}$C threshold). The steep drop of the $1/2^+$ phase shift is due to the presence of the very weakly bound $1/2^+$ state. We note that, similarly to the case of $^{11}$Be discussed in Ref. [@NCSMRGM], we observe a significant decrease of the $1/2^+$ state energy in the $n$-$^{12}$C NCSM/RGM calculation when compared to the standard NCSM calculation for $^{13}$C. We were able to make these comparisons in model spaces up to $N_{\rm max}=6$, where we found this drop to be about 3 MeV.

![(Color online) The analyzing power for $n$-$^{12}$C elastic scattering below and above the calculated $5/2^+$ resonance. Energies are in the center of mass. The calculation as described in Fig.
\[fig:nC12\].[]{data-label="fig:nC12_Ay"}](iT11_nC12_srg-n3lo0200_24_19_02_IT_fig.eps){width="1.0\columnwidth"}

Analyzing powers were measured for proton and neutron scattering on $^{12}$C [@Tra67; @Hsu66; @Ro05], and scattering experiments on a polarized proton target are under way [@GUpriv]. In Fig. \[fig:nC12\_Ay\], we present our calculated analyzing power below and above the energy of the $5/2^+$ resonance. We note that our calculated $5/2^+$ resonance appears at 2.8 MeV in the center of mass (experimentally at 1.92 MeV). Below the resonance, the analyzing power is positive at $\Theta_{\rm CM}<90^o$ and negative at $\Theta_{\rm CM}>90^o$. At energies above the resonance, the analyzing power reverses its sign. Similar observations were made in calculations performed within the multichannel algebraic scattering (MCAS) theory [@n_C12_p_C12_MCAS; @n_C12_MCAS]. See in particular Fig. 5 of Ref. [@n_C12_MCAS]. Our calculated $^{13}$N and $^{13}$C bound-state levels and resonances are more spread than the experimental ones. This is a consequence of an underestimation of the $^{12}$C radius, found to be 2.05 fm with the SRG-N$^3$LO $NN$ potential. To remedy this, one would have to calculate three-nucleon interaction terms induced by the SRG evolution. This can be done as described in Ref. [@JNF09]. However, we still need to further develop the NCSM/RGM formalism in order to handle three-nucleon interactions in the scattering calculations.

Nucleon-$^{16}$O scattering {#n16O}
===========================

The calculation of nucleon scattering on $^{16}$O is the most challenging among the systems we investigate in this paper. The $\alpha$ clustering plays an important role in the structure of $^{16}$O, in particular for the first excited $0^+$ state, which is known to be almost impossible to reproduce in NCSM or coupled-cluster calculations. Our present calculations do not include the $\alpha$ clustering yet. As in the case of $^{12}$C, we rely on the importance-truncated NCSM calculations for obtaining the $^{16}$O wave functions, as the full-$N_{\rm max}$ NCSM calculations are possible only up to $N_{\rm max}=8$. In Fig. \[fig:O16\_ITNCSM\], we show the ground-state convergence within the IT-NCSM and a comparison to the full-space results. Again, up to the largest accessible model space, the agreement between the importance-truncated and the full-space calculations is perfect.

$n$-$^{16}$O
------------

![(Color online) Ground-state energy dependence on the model-space size $N_{\rm max}$ for $^{16}$O, obtained within the importance-truncated NCSM, using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$. The HO frequency $\hbar\Omega=24$ MeV was employed. The calculation is variational. No NCSM effective interaction was used. The full NCSM results were obtained with the code Antoine [@Antoine].[]{data-label="fig:O16_ITNCSM"}](O16_ITNCSM.eps){width="1.0\columnwidth"}

![(Color online) The $n$-$^{16}$O phase shifts calculated within the NCSM/RGM using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$ and the HO frequency $\hbar\Omega=24$ MeV in the $N_{\rm max}=18$ model space. The ground state of $^{16}$O was included. The $^{16}$O wave functions were obtained within the IT NCSM.[]{data-label="fig:nO16gs_18"}](phase_shift_nO16_srg-n3lo0200_24_21_fin.eps){width="1.0\columnwidth"}

![(Color online) Basis size dependence of the $n$-$^{16}$O phase shifts calculated within the NCSM/RGM using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$.
The HO frequency of $\hbar\Omega=24$ MeV was used. The $J^\pi=1/2^+ (3/2^+)$ channel is shown in the top (bottom) panel. Model space sizes up to $N_{\rm max}=18$ were considered. The ground state of $^{16}$O was included. The $^{16}$O wave functions were obtained within the IT NCSM.[]{data-label="fig:nO16gs_conv"}](phase_shift_nO16_srg-n3lo0200_24_7_21_1p.eps){width="0.9\columnwidth"}

![(Color online) Basis size dependence of the $n$-$^{16}$O phase shifts calculated within the NCSM/RGM using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$. The HO frequency of $\hbar\Omega=24$ MeV was used. The $J^\pi=1/2^+ (3/2^+)$ channel is shown in the top (bottom) panel. Model space sizes up to $N_{\rm max}=18$ were considered. The ground state of $^{16}$O was included. The $^{16}$O wave functions were obtained within the IT NCSM.[]{data-label="fig:nO16gs_conv"}](phase_shift_nO16_srg-n3lo0200_24_7_21_3p.eps){width="0.9\columnwidth"}

It is straightforward to converge nucleon-$^{16}$O scattering calculations within the NCSM/RGM using the HO expansion up to $N_{\rm max}=18$. Our calculated $n$-$^{16}$O phase shifts are shown in Fig. \[fig:nO16gs\_18\], and the HO-basis expansion convergence is checked for the $S$- and the $D$-wave in Fig. \[fig:nO16gs\_conv\]. These calculations included the $^{16}$O ground state only. We find two bound states, $1/2^+$ at -0.88 MeV and $5/2^+$ at -0.41 MeV with respect to the $n$-$^{16}$O threshold. In experiment, the $^{17}$O ground state is $5/2^+$, bound by 4.14 MeV, and the $1/2^+$ state is the first excited state, bound by 3.27 MeV. There are also two additional bound states, $1/2^-$ and $3/2^-$. Those are unbound in our calculations.

![(Color online) The $n$-$^{16}$O phase shifts calculated within the NCSM/RGM using the SRG-N$^3$LO $NN$ potential with a cutoff of 2.66 fm$^{-1}$ and the HO frequency $\hbar\Omega=24$ MeV in the $N_{\rm max}=13$ model space. The ground state and the lowest $3^-$, $1^-$ and $2^-$ excited states of $^{16}$O were included. The $^{16}$O wave functions were obtained within the IT NCSM.[]{data-label="fig:nO16gs312_15"}](phase_shift_nO16_srg-n3lo0200_24_15_0312_fin.eps){width="1.0\columnwidth"}

Clearly, it is insufficient to consider only the ground state of $^{16}$O in the coupled-channel NCSM/RGM scattering calculations. We therefore include, in addition, the three lowest $^{16}$O negative-parity states: $3^-$, $1^-$, and $2^-$. Due to computational limitations, in this case we used an HO basis expansion up to $N_{\rm max}=13$. Comparing Fig. \[fig:nO16gs312\_15\] to Fig. \[fig:nO16gs\_18\], we see that the $1p-1h$ negative-parity excited states of $^{16}$O generate negative-parity resonances in $^{17}$O. These resonances do appear, however, at much higher energy than in experiment. The reason for this is that our calculated $^{16}$O $1p-1h$ states have too high excitation energies. In particular, our calculated $3^-$ excited state has an excitation energy of 15.99 MeV, while experimentally it lies at just 6.13 MeV. One reason for the discrepancy is the softness of the SRG-N$^3$LO $NN$ potential we use, which results in an overall overbinding of the $^{16}$O ground state and in an underestimation of its radius. Another aspect is the challenging problem of the IT-NCSM extrapolations of the independent positive- and negative-parity state calculations. The uncertainties of the relative excitation energies are higher than in same-parity calculations.
On the positive side our calculation with the negative-parity states, even though with overestimated excitation energies, results in the proper ordering of the $^{17}$O bound states. The ground state is $5/2^+$ at -1.32 MeV and the $1/2^+$ state gains binding as well, appearing at -1.03 MeV. $p$-$^{16}$O ------------ We also investigated the $p$-$^{16}$O scattering and $^{17}$F states. When the NCSM/RGM calculations are restricted to the channels involving only the $^{16}$O ground state, we find a $1/2^+$ resonance at 1.0 MeV and a $5/2^+$ resonance at 2.2 MeV. These resonances correspond to the $^{17}$F $1/2^+$ first excited state, bound by 0.105 MeV, and the $^{17}$F $5/2^+$ ground state bound by 0.6 MeV with respect to the $p$+$^{16}$O threshold. By coupling channels involving the $1p-1h$ $^{16}$O $3^-$, $1^-$ and $2^-$ excited states, the calculated $1/2^+$ and $5/2^+$ states are still unbound resonances but their energy moves significantly closer to the threshold: the $1/2^+$ appears at +0.7 MeV and the $5/2^+$ at +1.2 MeV. The $^{17}$F low-lying states were recently investigated within the coupled-cluster approach with the Gamow-Hartree-Fock basis [@HPH10]. In those calculations with the N$^3$LO $NN$ potential, the $1/2^+$ state is weakly bound while the $5/2^+$ state remains unbound by about 0.1 MeV. Using the SRG evolved interaction, the $5/2^+$ state became bound with the decrease of the cutoff $\Lambda$. We note that our calculated $^{16}$O ground state energy, -139.0(8) MeV (Fig. \[fig:O16\_ITNCSM\]) obtained with the SRG-N$^3$LO $NN$ potential with $\Lambda=2.66$ fm$^{-1}$, compares well with the CCSD coupled-cluster $^{16}$O calculations: -137.6 MeV with the SRG-N$^3$LO $NN$ potential with $\Lambda=2.8$ fm$^{-1}$ [@Pap10]. The differences in the positions of the $1/2^+$ and the $5/2^+$ are due to deficiencies in our description of the negative parity $1p-1h$ states, which could be related to the two-body Hamiltonian used here as well as the uncertainties of the threshold extrapolations for the excitation energies. The inclusion of additional $^{16}$O excited states would increase the absolute energy of our calculated $^{17}$F states. The most efficient way to do this is by coupling the presently used NCSM/RGM basis with the $^{17}$F NCSM eigenstates in as outlined in Ref. [@NCSM_review]. Conclusions =========== By combining the importance truncation scheme for the cluster eigenstate basis with the [*ab initio*]{} NCSM/RGM approach, we were able to perform many-body calculations for nucleon scattering on nuclei with mass number as high as $A=16$. With the soft SRG-evolved chiral $NN$ potentials, convergence of the calculations with respect to the HO basis expansion of the target eigenstates and the localized parts of the NCSM/RGM integration kernels can be reached using $N_{\rm max}=12-16$. We first benchmarked the IT-NCSM results with the full-space NCSM results for the $A=5$ system. Our neutron-$^4$He and proton-$^4$He calculations compare well with an R-matrix analysis of the data in particular at energies above 8 MeV, and describe well measured cross sections and analysing powers for those energies. Our calculations of $n$-$^7$Li and $p$-$^7$Be scattering predict low-lying $0^+$ and $2^+$ resonances in $^8$Li and $^8$B that have not been experimentally clearly identified yet. We found that the prospects of a realistic [*ab initio*]{} calculation of the $^7$Be($p$,$\gamma$)$^8$B capture within our approach are very good. 
In the present calculations we found $^8$B unbound by only 200 keV. It is quite possible that $^8$B will become bound (with the $NN$ potential employed here: SRG-N$^3$LO with $\Lambda=2.02$ fm$^{-1}$) by including more excited states of $^7$Be in the coupled-channel NCSM/RGM calculations. Even if $^8$B is still not bound or, most likely, the threshold energy does not agree with experiment, we have the possibility to explore a variation of the SRG $NN$ potential evolution parameter $\Lambda$ and tune this parameter to fit the experimental threshold. The use of the importance-truncated basis becomes essential in calculations with $^{12}$C or $^{16}$O targets, as the full-space NCSM calculations are limited to $N_{\rm max}=8$. Our $n$-$^{12}$C and $p$-$^{12}$C investigations included the $^{12}$C ground state and the first excited $2^+$ state. We found a single bound state, $1/2^-$, in $^{13}$N, as in experiment. In $^{13}$C, we found three bound states, with the $5/2^+$ state still unbound, contrary to experiment. Our calculated spectrum of $A=13$ states is more spread than in experiment due to the underestimation of the $^{12}$C radius, a consequence of the softness of the SRG-evolved $NN$ interaction. The description of nucleon scattering on $^{16}$O within our formalism was the most challenging. The $\alpha$ clustering that plays an important role in the structure of $^{16}$O is not yet included in our present calculations. Further, the $1p-1h$ $^{16}$O excited states are more difficult to treat in the IT-NCSM approach, as the extrapolations of excitation energies are done from the independent ground-state and negative-parity state calculations. We found a strong impact of the $1p-1h$ $^{16}$O states on the positions of the lowest $A=17$ states. For example, the correct ordering of the $5/2^+$ and the $1/2^+$ states in $^{17}$O was obtained only when the $1p-1h$ states were included. Overall, we find that the inclusion of additional excited states of the target nuclei would be beneficial in all studied systems, and increasingly so with increasing $A$. Coupled-channel NCSM/RGM calculations with many excited states of the target are computationally challenging. The most efficient way of including the effects of such states is by coupling the presently used NCSM/RGM basis, consisting of just a few lowest excited states, with the NCSM eigenstates of the composite system as outlined in Ref. [@NCSM_review]. Work on this coupling is under way. The use of the SRG-evolved $NN$ interaction facilitates convergence of the NCSM/RGM calculations with respect to the HO basis expansion. On the other hand, due to the softness of these interactions, radii of heavier nuclei become underestimated. To remedy this, one would have to calculate three-nucleon interaction terms induced by the SRG evolution. This can be done as described in Ref. [@JNF09]. It is essential to further develop the NCSM/RGM formalism in order to handle three-nucleon interactions, both genuine and those induced by the SRG evolution, in the scattering calculations. In the present paper, we limited ourselves to single-nucleon projectile scattering. Extensions of the NCSM/RGM formalism to include deuteron, $^3$H and $^3$He projectiles are under way. Numerical calculations have been performed at the LLNL LC facilities and at the NIC, Jülich. Prepared in part by LLNL under Contract DE-AC52-07NA27344. Support from the U. S. DOE/SC/NP (Work Proposal No. SCW0498), LLNL LDRD grant PLS-09-ERD-020, and from the U. S.
Department of Energy Grant DE-FC02-07ER41457 is acknowledged. This work is supported in part by the Deutsche Forschungsgemeinschaft through contract SFB 634 and by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse.\ [10]{} H. Kamada [*et al.*]{} Phys. Rev. C [**64**]{}, 044001 (2001). A. Nogga, H. Kamada, and W. Glöckle, Phys. Rev. Lett. [**85**]{}, 944 (2000). R. B. Wiringa, S. C. Pieper, J. Carlson, V. R. Pandharipande, Phys. Rev. C [**62**]{}, 014001 (2000); S. C. Pieper and R. B. Wiringa, Ann. Rev. Nucl. Part. Sci. [**51**]{}, 53 (2001); S. C. Pieper, K. Varga and R. B. Wiringa, Phys. Rev. C [**66**]{}, 044310 (2002). P. Navrátil and W. E. Ormand, Phys. Rev. C [**68**]{}, 034305 (2003). H. Witala, W. Glöckle, J. Golak, A. Nogga, H. Kamada, R. Skibinski, and J. Kuros-Zolnierczuk, Phys. Rev. C [**63**]{}, 024007 (2001). R. Lazauskas and J. Carbonell, Phys. Rev. C [**70**]{}, 044002 (2004). A. Kievsky, S. Rosati, M. Viviani, L. E. Marcucci and L. Girlanda, J. Phys. G [**35**]{}, 063101 (2008). A. Deltuva and A. C. Fonseca, Phys. Rev. C [**75**]{}, 014005 (2007); Phys. Rev. Lett. [**98**]{}, 162502 (2007). K. M. Nollett, S. C. Pieper, R. B. Wiringa, J. Carlson and G. M. Hale, Phys. Rev. Lett. [**99**]{}, 022502 (2007). G. Hagen, D. J. Dean, M. Hjorth-Jensen and T. Papenbrock, Phys. Lett. B [**656**]{}, 169 (2007). P. Navrátil, J. P. Vary, and B. R. Barrett, Phys. Rev. Lett. [**84**]{}, 5728 (2000); Phys. Rev. C [**62**]{}, 054311 (2000). K. Wildermuth and Y. C. Tang, [*A unified theory of the nucleus*]{}, (Vieweg, Braunschweig, 1977). Y. C. Tang, M. LeMere and D. R. Thompson, Phys. Rep. [**47**]{}, 167 (1978). T. Fliessbach and H. Walliser, Nucl. Phys. [**A377**]{}, 84 (1982). K. Langanke and H. Friedrich, [*Advances in Nuclear Physics*]{}, edited by J. W. Negele and E. Vogt (Plenum, New York, 1986). R. G. Lovas, R. J. Liotta, A. Insolia, K. Varga and D. S. Delion, Phys. Rep. [**294**]{}, 265 (1998). H. M. Hofmann and G. M. Hale, Phys. Rev. C [**77**]{}, 044002 (2008). S. Quaglioni and P. Navr[á]{}til, Phys. Rev. Lett. [**101**]{}, 092501 (2008). S. Quaglioni and P. Navr[á]{}til, Phys. Rev. C [**79**]{}, 044606 (2009). P. Descouvemont, C. Daniel, and D. Baye, Phys. Rev.  C [**67**]{}, 044309 (2003). P. Descouvemont, E. Tursunov, and D. Baye, Nucl. Phys. [**A765**]{}, 370 (2006). M. Theeten, D. Baye, and P. Descouvemont, Phys. Rev. C [**74**]{}, 044304 (2006). M. Theeten, H. Matsumura, M. Orabi, D. Baye, P. Descouvemont, Y. Fujiwara, and Y. Suzuki, Phys. Rev. C [**76**]{}, 054003 (2007). D. Baye, P. Capel, P. Descouvemont, and Y. Suzuki, Phys. Rev. C [**79**]{}, 024607 (2009). R. Roth and P. Navrátil, Phys. Rev. Lett. [**99**]{}, 092501 (2007). R. Roth, Phys. Rev. C [**79**]{}, 064324 (2009). S. K. Bogner, R. J. Furnstahl and R. J. Perry, Phys. Rev. C [**75**]{}, 061001 (2007). R. Roth, S. Reinhardt and H. Hergert, Phys. Rev. C [**77**]{}, 064003 (2008). R. Roth, T. Neff, H. Feldmeier, Prog. Part. Nucl. Phys. **65**, 50 (2010). D. R. Entem and R. Machleidt, Phys. Rev. C [**68**]{}, 041001(R) (2003). R. Roth, J. R. Gour, and P. Piecuch, Phys. Rev. C **79**, 054325 (2009). R. Roth, J. R. Gour, and P. Piecuch, Phys. Lett. **B** 682, 27 (2009). S. K. Bogner, T. T. S. Kuo, and A. Schwenk, Phys. Rept. **386**, 1 (2003); G. Hagen, private communication. R. Machleidt, Phys. Rev. C [**63**]{}, 024001 (2001). G. M. Hale, private communication. H. Krupp, J. C. Hiebert, H. O. Klages, P. Doll, J. Hansmeyer, P. Klischke, J. 
Wilczynski, and H. Zankel, Phys. Rev. C [**30**]{}, 1810 (1984). P. Schwandt, T. B. Clegg, and W. Haeberli, Nucl. Phys. A [**163**]{}, 432 (1971). K. W. Brokman, Phys. Rev. [**108**]{}, 1000 (1957). D. C. Dodder, G. M. Hale, N. Jarmie, J. H. Jett, P. W. Keaton, Jr., R. A. Nisley, and K. Witte, Phys. Rev. C [**15**]{}, 518 (1977). R. A. Hardekopf and G. G. Holsen, Phys. Rev. C [**15**]{}, 514 (1977). E. Adelberger [*et al.*]{}, rev. Mod. Phys. [**70**]{}, 1265 (1998). SNO Collaboration, S. N. Ahmed [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 181301 (2004). S. Couvidat, S. Turck-Chièze, and A. G. Kosovichev, Astrophys. J. [**599**]{}, 1434 (2003). J. N. Bahcall and M. H. Pinsonneault, Phys. Rev. Lett. [**92**]{}, 121301 (2004). C. Angulo [*et al.*]{}, Nucl. Phys. A [**716**]{}, 211 (2003). G. V. Rogachev [*et al.*]{}, Phys. Rev. C [**64**]{}, 061601(R) (2001). L. Koester, K. Knopf, and W. Waschkowski, Z. Phys. A - Atoms and Nuclei [**312**]{}, 81 (1983). P. Navratil, C. A. Bertulani and E. Caurier, Phys. Lett B [**634**]{}, 191 (2006); Phys. Rev. C [**73**]{}, 065801 (2006). D. R. Tilley [*et al.*]{}, Nuclear Physics A [**745**]{}, 155 (2004). D. Halderson, Phys. Rev. C [**73**]{}, 024612 (2006). P. Descouvemont and D. Baye, Nucl. Phys. A [**567**]{}, 341 (1994). J. M. Freeman, A. M. Lane, and B. Rose, Phil. Mag. [**46**]{}, 17 (1955). U. Greife [*et al.*]{}, Nucl. Instrum. Methods B [**261**]{}, 1089 (2007). A. Csoto, Phys. Rev. C [**61**]{}, 024311 (2000). D. Halderson, Phys. Rev. C [**69**]{}, 014609 (2004). A. Volya, Phys. Rev. C [**79**]{}, 044308 (2009). F. C. Barker and A. M. Mukhamedzhanov, Nucl. Phys. A [**673**]{}, 526 (2000). P. Navratil, S. Quaglioni, I. Stetcu and B. R. Barrett, J. Phys. G: Nucl. Part. Phys. [**36**]{}, 083101 (2009). J. P. Vary, “The Many-Fermion-Dynamics Shell-Model Code”, Iowa State University, 1992, unpublished. E. Caurier, G. Martinez-Pinedo, F. Nowacki, A. Poves, J. Retamosa and A. P. Zuker, Phys. Rev. C [**59**]{}, 2033 (1999); E. Caurier and F. Nowacki, Acta Physica Polonica B [**30**]{}, 705 (1999). F. Ajzenberg-Selove, Nucl. Phys. A [**523**]{}, 1 (1991). W. Trachslin and L. Brown, Nucl. Phys. A [**101**]{}, 273 (1967). C.-C. Hsu, Y.-C. Yang, and T.-J. Lee , Chinese J. Phys. [**4**]{}, 49 (1966). C. D. Roper [*et al.*]{}, Phys. Rev. C [**72**]{}, 024605 (2005). A. Galindo-Uribarri, private communication. G. Pisent, J. P. Svenne, L. Canton, K. Amos, S. Karataglidis, and D. van der Knijff, Phys. Rev. C [**72**]{}, 014601 (2005). J. P. Svenne, K. Amos, S. Karataglidis, D. van der Knijff, L. Canton, and G. Pisent, Phys. Rev. C [**73**]{}, 027601 (2006). E. D. Jurgenson, P. Navratil, and R. J. Furnstahl, Phys. Rev. Lett. [**103**]{}, 082501 (2009). G. Hagen, T. Papenbrock, and M. Hjorth-Jensen, Phys. Rev. Lett. [**104**]{}, 182501 (2010). T. Papenbrock, private communication.
--- author: - 'Matthew Dawber and James F. Scott' date: | Centre for Ferroics, Dept of Earth Sciences, Downing St, Cambridge, CB2 3EQ, UK.\ Email: [email protected], [email protected] title: 'Reply to cond-mat/0211660: Comments on “A model for fatigue in ferroelectric perovskite thin films” published in Appl. Phys. Lett. 76, 1060 (2000); addendum, ibid. p. 3655 ' ---

Although the appropriate forum for discussion of published papers should be the journal itself, through the properly refereed process where we are given an opportunity to have a reply published simultaneously with the comments, Tagantsev has chosen instead to attack our paper through this unrefereed forum. In this context it is worth noting that the comments that Tagantsev has made were in fact submitted to Applied Physics Letters more than two years ago. At that time we wrote a reply to these comments. The decision of the referee at that time was that Tagantsev’s comment was wrong and should not be published. Although we do not feel this is the appropriate forum for this discussion, Tagantsev has chosen to resume this debate in the unrefereed public forum of cond-mat, and so we feel the need to defend ourselves against the 8 points he has raised. Our model was a first attempt to produce a quantitative analytic model for fatigue based on earlier work by Yoo and Desu, and as such requires further testing and development. We note that our model has already been extended and applied successfully by Wang et al. (Physica Status Solidi A, **191** 482 (2002)). Tagantsev’s “model” for fatigue is untestable (not falsifiable). We would encourage feedback from other authors who have attempted to apply our model, and are always happy to discuss our work. Please contact us directly at the above email addresses if you have any concerns about the publications mentioned here.

1\. Tagantsev objects to our use of the Onsager expression for the local field at an oxygen vacancy, preferring instead an expression that is linear in the dielectric constant. One wonders what might happen when a ferroelectric goes through its phase transition and the dielectric constant (and hence the local field, if one uses Tagantsev’s expression) diverges. The Tagantsev model of ferroelectric detonation, in which internal fields diverge as a ferroelectric material is field-cooled through its transition temperature, does not seem to have been experimentally observed. Perhaps one should look beyond undergraduate textbooks such as Kittel’s. A more detailed calculation of the effective charge on an oxygen vacancy has recently been undertaken by Prof. S.A. Prosandeev (cond-mat/0209019), in which he found that our result was much more appropriate than Tagantsev’s.

2\. Tagantsev claims that our equation is quite different from that of O’Dwyer. Simple inspection of the two equations shows that this is not true. In the high field limit sinh(x) goes as exp(x), and as our local field is only 1.5 times the applied field, Tagantsev’s claim that use of this field changes the result by “orders of magnitude” is clearly unfounded.

3\. Tagantsev’s point on equation 10 is taken. The reason that there appears to be a change in the oxygen vacancy concentration at the interface in the absence of an applied field is that we have used the high field limit of the sinh(x) term in the diffusion equation, i.e. exp(x). This means that our equations are not appropriate for low fields, but it should be noted that during polarisation switching high fields are applied.
The reason for the use of the exponential limit of the sinh term was so as to simplify the derivation that followed. 4.We consider that the approximations we have taken are appropriate for the situation. The applied field in our model is very high because the applied potential falls across a quite narrow depletion region in the ferroelectric. Therefore in our opinion the space charge field is not significant compared to the applied field. Tagantsev is correct that one would expect to see an increase in concentration of charge at the electrodes in non-ferroelectric back-to-back Schottky diodes, however his claim that this has never been observed is false. This has been observed for at least twenty years in zinc oxide varistors. (e.g. Hayashi et al., J. Appl. Phys. **58** 5754 (1982)) Thus his argument helps prove our model - as confirmed by Hayashi. 5\. The activation energy of electrons was used to calculate the number of oxygen vacancies that would be charged, not the concentration of oxygen vacancies. The activation energy of 0.7 eV in fact corresponds to the trapping energy of Ti$^{3+}$ which is known to be associated with oxygen vacancies. We originally considered that the important activation energy originated from the charge state of the oxygen vacancy. However following our new ideas on oxygen vacancy ordering we believe that the entropy term is more important than originally anticipated. We would refer readers to further references on this subject for more details. (J.F. Scott and M. Dawber, Appl. Phys. Lett. **76** 3801 (2000), M. Dawber and J. F. Scott, Integr. Ferroelectr. **32** 951 (2001) J. F. Scott, Ferroelectric Memories (Springer, Heidelberg, 2000), pp. 134) 6\. Ref. 6 of Tagantsev’s paper was originally cited by us because it gave a reasonable number for the depletion width, which we used in our calculations. In the years since we published our paper we have re-examined the data of reference 6. We are no longer convinced that the current observed in this paper is in fact Fowler-Nordheim tunneling. We do not wish to further criticize this paper in this unrefereed forum, but any interested reader should attempt to fit the data of ref 6 to a Schottky plot to see the origin of our concerns. Readers can contact us directly for a more detailed explanation of these concerns which are partly based on our own unpublished results. We also have concerns about the effective masses used in Tagantsev’s analysis and the lack of specification of carrier type (electron/hole). \[Their m\* = 1.4 m$_{e}$ (Bull Am Phys Soc, Seattle, March 2001) value disagrees by x4 with the known electron band mass, and in undoped PZT films the carriers are NOT holes.\] We have previously discussed these problems elsewhere (J.F. Scott, Integr. Ferroelectr. **42** 1 (2002). 7\. In our original paper the figure was incorrectly labelled due to technical error in journal production. Our addendum clearly acknowledges this error. We apologize for any confusion caused by this error. The prediction of the equations in both papers is in line with the data of Mihara. 8\. Our model does predict a frequency dependence for fatigue. This has been seen in other papers than that of Colla et al., where it is true that the use of different waveforms complicates interpretation. Examples of such papers include, Lee et al, Appl. Phys. Lett. **79** 821 (2001) ; Zhang et al., Ferroelectrics, **259** 109 (2001).
--- abstract: 'We consider the possible observation of Fast Radio Bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogues of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo (MCMC) analysis, with which we forecast and compare with existing constraints in the flat [$\Lambda$CDM]{} model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a significant challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+$H_0$ data, we find the biggest improvement comes in the [$\Omega_{\rm b} h^2$]{} constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter $\Omega_k$ shows some improvement when combined with current constraints. When FRBs are combined with future BAO data from 21cm Intensity Mapping (IM), we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, [$\Omega_{\rm b} h^2$]{}, which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate-redshift Universe, independent of high-redshift CMB data.' author: - Anthony Walters - Amanda Weltman - 'B. M. Gaensler' - 'Yin-Zhe Ma' - Amadeus Witzemann bibliography: - 'frb\_refs.bib' title: Future Cosmological Constraints from Fast Radio Bursts ---

Introduction
============

Improvements in cosmological measurement in recent years have been said to hail an era of “precision cosmology”, with observations of the cosmic microwave background (CMB) temperature anisotropies, baryon acoustic oscillation (BAO) wiggles in the galaxy power spectrum [@2011MNRAS.416.3017B; @2014MNRAS.441...24A; @2015MNRAS.449..835R], the luminosity distance-redshift relation of Type Ia supernovae (SNIa), the local distance ladder [@2016ApJ...826...56R], galaxy clustering and weak lensing [@2017arXiv170801530D], and the direct detection of gravitational waves [@2017Natur.551...85A], providing constraints on cosmological model parameters at percent- or sub-percent-level precision. Since the discovery of the accelerated expansion of the Universe, these observations have cemented the emergence of the flat $\Lambda$CDM model as the standard model of cosmology, in which the global spatial curvature is zero and the energy budget of the Universe is dominated by “dark energy” in the form of a cosmological constant, $\Lambda$. However, beyond the [$\Lambda$CDM]{} paradigm there are a large number of dark energy models aimed at explaining the accelerated expansion of the Universe (see the reviews [@2011CoTPh..56..525L; @2015PhR...568....1J], and references therein), and so understanding the nature of dark energy remains one of the central pursuits in modern cosmology. To this end, it has become common observational practice to constrain the dark energy equation of state, $w(z)$, and check for deviations from the [$\Lambda$CDM]{} value of $w=\mathrm{const.}=-1$.
While observational probes do not indicate any significant departure from $\Lambda$CDM [@2017arXiv170901091H], there is still room to tighten constraints and thereby rule out competing alternatives for dark energy. In particular, by tuning the parameters of alternative theories of dark energy, one can recover the behaviour of the $\Lambda$CDM model at both the background expansion and perturbation levels [@2011CoTPh..56..525L; @2015PhR...568....1J]. Observations of the CMB together with SNIa and BAO constrain the spatial curvature parameter to be very small, $|\Omega_k|<0.005$, consistent with the flat [$\Lambda$CDM]{} model and the inflationary picture of the early Universe. However, model-independent constraints from low-redshift probes are not nearly as strong, with SNIa alone preferring an open universe with $\Omega_k \sim 0.2$ [@2015PhRvL.115j1301R]. Similarly, constraints on the baryon fraction, $\Omega_{\mathrm b}$, derived from observations of the CMB, and from the abundance of light elements together with the theory of Big Bang Nucleosynthesis (BBN) [@2016ApJ...830..148C], are both rooted in high-redshift physics. And while these constraints are somewhat consistent, the BBN results strongly depend on nuclear cross section data [@2016ApJ...830..148C; @2016MNRAS.458L.104D]. Thus, independent and precise low-redshift probes of spatial curvature and the baryon density parameter that confirm the constraints from high-redshift data are of observational and theoretical interest.

Recently, a promising new astrophysical phenomenon, so-called Fast Radio Bursts (FRBs) [@2007Sci...318..777L; @2011MNRAS.415.3065K; @2013Sci...341...53T; @2014ApJ...790..101S; @2015MNRAS.447..246P; @2014ApJ...792...19B; @2015ApJ...799L...5R; @2016MNRAS.460L..30C; @2015Natur.528..523M; @2016Natur.530..453K; @2016Sci...354.1249R; @2017MNRAS.468.3746C; @2017MNRAS.469.4465P], has emerged. An FRB is characterised by a brief pulse in the radio spectrum with a large dispersion in the arrival time of its frequency components, consistent with the propagation of an electromagnetic wave through a cold plasma. To date a total of 25 such FRBs [^1] have been detected, primarily by the Parkes Telescope in Australia, but more recently interferometric detections have also been reported. Considering the greatly improved sensitivity of upcoming radio telescopes, expectations are high that many more FRB events will be observed in the near future [@2017MNRAS.465.2286R; @2017ApJ...846L..27F]. While their exact location and formation mechanism are still a subject of ongoing research [@2013ApJ...776L..39K; @2013PASJ...65L..12T; @2014ApJ...780L..21Z; @2015MNRAS.450L..71F; @2014MNRAS.442L...9L; @2016MNRAS.457..232C; @2017MNRAS.465L..30G; @2016ApJ...823L..28G; @2016ApJ...822L...7W; @2017ApJ...843L..26B; @2017arXiv170806352L; @2017MNRAS.468.2726K; @2017MNRAS.469L..39K; @2017arXiv170807507G; @2017ApJ...844..162T], their excessively large dispersion measures (DMs) argue that they have an extragalactic origin [@2015RAA....15.1629X]. Indeed, one FRB event has been sufficiently localised to be associated with a host galaxy at $z=0.19$ [@2017ApJ...834L...7T]. Should one be able to associate a redshift with enough FRBs, it would give access to the $\mathrm{DM}(z)$ relation, which may provide a new probe of the cosmos, possibly complementary to existing techniques.
In addition, the observation of strongly lensed FRBs may help to constrain the Hubble parameter [@2017arXiv170806357L] and the nature of dark matter [@2016PhRvL.117i1301M], and dispersion space distortions may provide information on matter clustering [@2015PhRvL.115l1301M], all without redshift information. In this paper we assess the potential for using FRB $\mathrm{DM}(z)$ measurements to constrain the parameter space of various cosmological models, and whether this may improve the existing constraints coming from other observations. The outline is as follows: The details of modelling an extragalactic population of FRBs, constructing a mock catalogue of DM observations, and extracting and combining cosmological parameter constraints are given in §\[cosmoFRB\]. Parameter constraint forecasts from the mock FRB data, and its combination with CMB + BAO + SNIa + $H_0$ (hereafter referred to as CBSH), are given in §\[base\] for the flat [$\Lambda$CDM]{} model, and in §\[ext\] for 1- and 2-parameter extensions to the flat [$\Lambda$CDM]{} model. Possible synergies with other experiments are discussed in §\[synergies\].

Cosmology with Fast Radio Bursts {#cosmoFRB}
================================

Dispersion of the Intergalactic Medium {#cosmoDM}
--------------------------------------

The DM of an FRB is associated with the propagation of a radio wave through a cold plasma, and is related to the path length from the emission event to observation and the distribution of free electrons along that path, ${\rm DM}=\int n_{\rm e} {\rm d}l$. If FRBs are of extragalactic origin, their observed dispersion measure, ${{\rm DM}_{\mathrm{obs}} }$, should be the sum of a number of different contributions, namely, from propagation through the host galaxy, ${{\rm DM}_{\mathrm{HG}} }$, the intergalactic medium (IGM), ${{\rm DM}_{\mathrm{IGM}} }$, and the Milky Way, ${{\rm DM}_{\mathrm{MW}} }$ [@2014ApJ...783L..35D]. Since ${{\rm DM}_{\mathrm{MW}} }$ as a function of Galactic latitude is well known from pulsar observations [@2017ApJ...835...29Y], and its contribution to ${{\rm DM}_{\mathrm{obs}} }$ is relatively small in most cases, we assume it can be reliably subtracted. We choose to work with the extragalactic dispersion measure, given by [@2016ApJ...830L..31Y] $$\begin{aligned} {{\rm DM}_{\mathrm{E}} }\equiv {{\rm DM}_{\mathrm{obs}} }- {{\rm DM}_{\mathrm{MW}} }= {{\rm DM}_{\mathrm{IGM}} }+ {{\rm DM}_{\mathrm{HG}} }, \label{dme}\end{aligned}$$ where ${{\rm DM}_{\mathrm{HG}} }$ is defined in the observer's frame, and related to that at the emission event by $$\begin{aligned} {{\rm DM}_{\mathrm{HG}} }= \frac{{{\rm DM}_{\mathrm{HG,loc}} }}{1+z}.\end{aligned}$$ This contribution is not well known and is expected to depend on the type of host galaxy, its inclination relative to the observer, and the location of the FRB inside the host galaxy [@2015RAA....15.1629X; @2016ApJ...830L..31Y], and so we include this as a source of uncertainty in our analysis. The intergalactic medium is inhomogeneous, and so ${{\rm DM}_{\mathrm{IGM}} }(z)$ will have a large sightline-to-sightline variance, with estimates ranging between $\sim200$ and 400 pc cm$^{-3}$ by $z\sim 1.5$ [@2014ApJ...780L..33M]. It has, however, been shown that with enough FRB events in small enough redshift bins, the mean dispersion measure in each bin will approach the Friedmann-Lemaître-Robertson-Walker (FLRW) background value to good approximation.
Specifically, with $N\sim80$ events in the redshift bin $1\leq z \leq 1.05$, the mean dispersion measure will be within 5% of the FLRW background value, at 95.4% confidence [@2014PhRvD..89j7303Z]. This is essential if one wishes to measure the cosmological parameters with any precision. Assuming a non-flat FLRW Universe that is dominated by matter and dark energy, one finds that the average (background) dispersion measure of the intergalactic medium is [@2014ApJ...783L..35D; @2014PhRvD..89j7303Z; @2014ApJ...788..189G] $$\begin{aligned} \langle {{\rm DM}_{\mathrm{IGM}} }(z) \rangle = \frac{3 c H_0 \Omega_{\rm b} {f_{\mathrm{IGM}} }}{8 \pi G m_{\rm p}} \int^z_0 \frac{\chi (z') (1+z') }{E(z')}~ {\rm d}z',\label{dmigm}\end{aligned}$$ where $$\begin{aligned} E(z)&=\left[ (1+z)^3 \Omega_{\rm m} + f(z) \Omega_{\rm DE} + (1+z)^2 \Omega_k \right]^{1/2},\\ \chi(z) &= Y_{\rm H} \chi_{\rm e,H}(z) + \frac{1}{2}Y_{\rm p} \chi_{\rm e,He}(z), \label{chiz}\\ f(z) &= \exp\left[ 3 \int_0^z \frac{(1 + w(z'')){\rm d}z''}{(1+z'')}\right], \label{fz}\end{aligned}$$ and $H_0$ is the value of the Hubble parameter today, $\Omega_{\rm b}$ is the baryon mass fraction of the Universe, ${f_{\mathrm{IGM}} }$ is the fraction of baryon mass in the intergalactic medium, $Y_{\rm H}=3/4$ ($Y_{\rm p}=1/4$) is the hydrogen (helium) mass fraction in the intergalactic medium, and $\chi_{\rm e,H}$ ($\chi_{\rm e,He}$) is the ionisation fraction of hydrogen (helium). The cosmological density parameters for matter and curvature are $\Omega_{\rm m}$ and $\Omega_k$, respectively, and the dark energy density parameter is given by the constraint ${\Omega_{\mathrm{DE}} }\equiv 1-\Omega_{\rm m}-\Omega_k$. We allow for the equation of state of dark energy, $w$, to vary with time, and parameterise it by [@2001IJMPD..10..213C; @2003PhRvL..90i1301L] $$\begin{aligned} w(z) &= w_0 +w_a \frac{z}{1+z} \label{cpl},\end{aligned}$$ where $w_0$ and $w_a$ are the CPL parameters. Substituting [(\[cpl\])]{} into [(\[fz\])]{}, and integrating, gives an exact analytic expression for the growth of the dark energy density as a function of redshift: $$\begin{aligned} f(z)=(1+z)^{3(1+w_0+w_a)} \exp{\left[-3 w_a\frac{ z}{1+z}\right]}. \label{fzCPL}\end{aligned}$$ Choosing $(w_0,w_a)=(-1,0)$ in [(\[fz\])]{} gives $f(z)=\mathrm{const.}$, corresponding to the $\Lambda$CDM model, in which dark energy is a cosmological constant. For simplicity (to avoid modelling any astrophysics) we restrict our analysis to the region $z\leq3$, since current observations suggest that both hydrogen and helium are fully ionised there [@2009RvMP...81.1405M; @2011MNRAS.410.1096B], and thus we can safely take $\chi_{e,H}=\chi_{e,He}=1$ in [(\[chiz\])]{}. This gives a constant $\chi(z)=7/8$ in the region of interest. The ${f_{\mathrm{IGM}} }$ term presents some complications. Strictly speaking, ${f_{\mathrm{IGM}} }$ is a function of redshift (${f_{\mathrm{IGM}} }={f_{\mathrm{IGM}} }(z)$), ranging from about 0.9 at $z\gtrsim1.5$ to $0.82$ at $z \leq 0.4$ [@2009RvMP...81.1405M; @2012ApJ...759...23S], and should be included inside the integral in [(\[dmigm\])]{}. As a first approximation we neglect the effect of evolving ${f_{\mathrm{IGM}} }$, and set it to a constant.
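As a concrete illustration of the background relation above, the short sketch below evaluates Eq. [(\[dmigm\])]{} numerically for a flat $\Lambda$CDM background. It is only a minimal sketch: the parameter values are illustrative Planck-like numbers, and the use of `scipy` and `astropy` for the integration and unit handling is a convenience assumed here, not a description of the code used in this work.

```python
# Minimal sketch (not the analysis code used here): evaluate <DM_IGM(z)> of
# Eq. (dmigm) for an assumed flat LCDM background with CPL dark energy.
import numpy as np
from scipy.integrate import quad
from astropy import constants as const
from astropy import units as u

H0 = 67.74 * u.km / u.s / u.Mpc   # assumed Hubble constant
Om, Ok = 0.309, 0.0               # assumed matter and curvature density parameters
Ob = 0.0486                       # assumed baryon density parameter
f_igm, chi = 0.83, 7.0 / 8.0      # IGM baryon fraction; chi(z) = 7/8 for z <= 3
w0, wa = -1.0, 0.0                # CPL parameters; (-1, 0) is LCDM

def E(z):
    """Dimensionless Hubble rate E(z) with the CPL dark energy density f(z)."""
    f_de = (1.0 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(Om * (1 + z) ** 3 + (1 - Om - Ok) * f_de + Ok * (1 + z) ** 2)

# prefactor 3 c H0 Ob f_IGM / (8 pi G m_p), expressed in pc cm^-3
A = (3 * const.c * H0 * Ob * f_igm / (8 * np.pi * const.G * const.m_p)).to(u.pc / u.cm**3)

def dm_igm_mean(z):
    """Background <DM_IGM(z)> in pc cm^-3, assuming fully ionised H and He."""
    integral, _ = quad(lambda zp: chi * (1.0 + zp) / E(zp), 0.0, z)
    return A.value * integral

print(dm_igm_mean(1.0))  # of order 10^3 pc cm^-3 at z = 1 for these parameters
```

For the mock catalogues constructed below, this function plays the role of the mean about which individual sightlines are scattered.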
Telescope Time and the Mock Catalogue {#forecast}
-------------------------------------

Based on current detections, the FRB event rate in the Universe is expected to be high, and given the improved design sensitivity of future radio telescopes, their detection rate is expected to increase significantly. This value, of course, will depend on the exact specifications of the telescope, and the true distribution and spectral profile of FRBs. For example, assuming they live only in low-mass host galaxies, and have a Gaussian-like spectral profile, the mid-frequency component of the Square Kilometre Array (SKA) is expected to detect FRBs out to $z\sim3.2$ at a rate of $\sim10^3$ sky$^{-1}$ day$^{-1}$ [@2017ApJ...846L..27F]. In the more immediate future, the Hydrogen Intensity Real-time Analysis eXperiment (HIRAX) [@2016SPIE.9906E..5XN] and the Canadian Hydrogen Intensity Mapping Experiment (CHIME) [@2014SPIE.9145E..22B] are expected to detect $\sim 50-100$ and $\sim 30-100$ FRBs day$^{-1}$, respectively [@2017MNRAS.465.2286R]. Assuming that 5% of the detected FRBs can be sufficiently localised to be associated with a host galaxy, the rate of detection and localisation would be roughly $\sim2-5$ day$^{-1}$ for HIRAX and CHIME, and far higher for the SKA. This suggests that a large catalogue of localised FRBs could be built up relatively quickly, and the main bottleneck in obtaining a catalogue of $\mathrm{DM}(z)$ data will be acquiring the redshifts. Given the bright emission lines in the spectrum of the host galaxy of the repeating FRB 121102 [@2017ApJ...834L...7T], a mid- to large-sized optical telescope should be able to obtain $\sim10$ redshifts for FRB host galaxies per night; we thus estimate that a redshift catalogue with ${N_{\mathrm{FRB}}}=1000$ will take approximately 100 nights of observing to construct, which would be feasible with a dedicated observing program spread over a few years.

Motivated by a phenomenological model for the distribution of gamma-ray bursts, we assume the redshift distribution of FRBs is given by $P(z)=z e^{-z}$ [@2014PhRvD..89j7303Z; @2016ApJ...830L..31Y], and simulate ${{\rm DM}_{\mathrm{E}} }(z)$ measurements, given by the far right side of [(\[dme\])]{}. Due to matter inhomogeneities in the IGM, and variations in the properties of the host galaxy, we promote ${{\rm DM}_{\mathrm{IGM}} }$ and ${{\rm DM}_{\mathrm{HG,loc}} }$ to random variables, and sample them from a normal distribution. That is, ${{\rm DM}_{\mathrm{IGM}} }\sim \mathcal{N} \left( \langle {{\rm DM}_{\mathrm{IGM}} }(z)\rangle, {\sigma_{\mathrm{IGM}}}\right)$ and ${{\rm DM}_{\mathrm{HG,loc}} }\sim \mathcal{N} \left( \langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle, {\sigma_{\mathrm{HG,loc}} }\right)$. We assume $\langle {{\rm DM}_{\mathrm{IGM}} }(z)\rangle$ is given by [(\[dmigm\])]{}, with a flat $\Lambda$CDM background as the fiducial cosmology, using the best fit CBSH parameter values provided by the Planck 2015 data release[^2], listed in the second column of table \[multi\_table\]. We also take ${f_{\mathrm{IGM}} }=0.83$ [@2012ApJ...759...23S]. The value of ${{\rm DM}_{\mathrm{HG,loc}} }$ is expected to contain contributions from the Interstellar Medium (ISM) of the FRB host galaxy and from near-source plasma. Since FRB progenitors and their emission mechanisms are as yet unknown, reasonable values of $\langle{{\rm DM}_{\mathrm{HG,loc}} }\rangle$ and ${\sigma_{\mathrm{HG,loc}} }$ are still debatable. Here we assume nothing about the host galaxy type or the location of the FRB therein, just that there is a significant contribution to ${{\rm DM}_{\mathrm{HG,loc}} }$ due to near-source plasma, and thus take $ \langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle=200$ pc cm$^{-3}$ and ${\sigma_{\mathrm{HG,loc}} }=50$ pc cm$^{-3}$ [@2016ApJ...830L..31Y].
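To make the sampling procedure explicit, the sketch below draws one such mock catalogue. It is a schematic illustration only: `dm_igm_mean` is the helper sketched in the previous subsection, the random seed is arbitrary, and a constant ${\sigma_{\mathrm{IGM}}}$ is assumed even though the sightline variance in reality grows with redshift.

```python
# Minimal sketch: draw a mock catalogue of (z, DM_E) pairs under the stated
# assumptions.  dm_igm_mean(z) is the helper sketched above (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

def sample_redshifts(n_frb, z_lim=3.0):
    """Draw redshifts from P(z) = z exp(-z), truncated at z_lim, by rejection."""
    zs = []
    while len(zs) < n_frb:
        z = rng.uniform(0.0, z_lim)
        # the maximum of z exp(-z) is exp(-1), attained at z = 1
        if rng.uniform(0.0, np.exp(-1.0)) < z * np.exp(-z):
            zs.append(z)
    return np.array(zs)

def mock_catalogue(n_frb=1000, sigma_igm=200.0, dm_hg_mean=200.0, sigma_hg=50.0):
    """Return redshifts and extragalactic DMs (pc cm^-3) for one mock sample."""
    z = sample_redshifts(n_frb)
    dm_igm = rng.normal([dm_igm_mean(zi) for zi in z], sigma_igm)
    dm_hg_loc = rng.normal(dm_hg_mean, sigma_hg, size=n_frb)
    return z, dm_igm + dm_hg_loc / (1.0 + z)   # DM_E of Eq. (dme), observer frame

z_mock, dm_mock = mock_catalogue()             # an FRB1-like realisation
```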
To investigate the effect of sample size and IGM inhomogeneities on the resulting constraints, we construct a number of mock catalogues with various values for ${\sigma_{\mathrm{IGM}}}$ and ${N_{\mathrm{FRB}}}$. For the most optimistic sample, we choose $({N_{\mathrm{FRB}}},{\sigma_{\mathrm{IGM}}})=(1000,200)$. See table \[cats\] for a summary of the various catalogues.

  ----------- ---------------------- ------------------------------------------------------ ----------------------
  Catalogue   ${N_{\mathrm{FRB}}}$   ${\sigma_{\mathrm{IGM}}}$ \[[[pc]{}[cm]{}$^{-3}$]{}\]   ${z_{\mathrm{lim}}}$
  FRB1        $1000$                 $200$                                                   $3$
  FRB2        $1000$                 $400$                                                   $3$
  FRB3        $100$                  $200$                                                   $3$
  ----------- ---------------------- ------------------------------------------------------ ----------------------

  : Summary of the mock FRB catalogues. []{data-label="cats"}

Parameter Estimation and Priors
-------------------------------

For the MCMC analysis we use the $\chi^2$ statistic as a measure of likelihood for the parameter values. The log-likelihood function is given by $$\begin{aligned} \ln \mathcal{L_{\mathrm{FRB}}}(\theta|d) = -\frac{1}{2} \sum_i \frac{\left( {{\rm DM}_{\mathrm{E},i} }- \langle {{\rm DM}_{\mathrm{E}} }\rangle \right)^2}{ {\sigma_{\mathrm{IGM},i } }^2 + \left[ {\sigma_{\mathrm{HG,loc},i} }/(1+z_i)\right]^2} , \label{likelihood} \end{aligned}$$ where $\theta$ is the set of fitting parameters, $d$ is the FRB data, and the sum over $i$ runs over the FRBs in the sample. Constraints on the flat $\Lambda$CDM model parameters are obtained by setting $\Omega_k=0$ in [(\[dmigm\])]{} and $w=-1$ in [(\[fz\])]{}, and then fitting the mock data for $\theta=(\Omega_{\rm m}, H_0, \Omega_{\rm b} h^2, \langle{{\rm DM}_{\mathrm{HG,loc}} }\rangle )$. To investigate spatial curvature in the [$\Lambda$CDM]{} model, we allow for $\Omega_k\neq0$ in [(\[dmigm\])]{}, and include it as an additional fitting parameter. For the dark energy constraints we consider two model parametrisations with flat spatial geometry. In the first case, we extend to the $w$CDM model, allowing for $w=\mathrm{const.}\neq-1$. We set $\Omega_k=0$ in [(\[dmigm\])]{} and $(w_0,w_a)=(w,0)$ in [(\[cpl\])]{}, and fit the data for $\theta=(w, \Omega_{\rm m}, H_0, \Omega_{\rm b} h^2 , \langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle)$. In the second case, we allow the dark energy equation of state to vary with time and use the CPL parametrisation [(\[cpl\])]{}; we thus set $\Omega_k=0$ in [(\[dmigm\])]{}, and fit the FRB data for the parameters $\theta=(w_0, w_a, \Omega_{\rm m}, H_0, \Omega_{\rm b} h^2 , \langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle)$. For all the extended models, we fit to the flat [$\Lambda$CDM]{} data described in §\[forecast\], and examine how close to their fiducial values the additional parameters are constrained. This also allows us to easily combine the constraints with existing data, which are consistent with flat [$\Lambda$CDM]{}. We use the Python package [*emcee*]{} [@2013PASP..125..306F] to determine the posterior distribution for the parameters, and [*GetDist*]{}[^3] for plotting and analysis. When prior information is included in the analysis, we use the respective covariance matrix provided by the Planck 2015 data release. We thus calculate the priors according to $$\begin{aligned} \ln P(\theta) = -\frac{1}{2} \xi^{\rm T} \mathbf{C}^{-1} \xi, \label{lnP}\end{aligned}$$ where $P(\theta)$ is the prior probability associated with the parameter values $\theta$, $\mathbf{C}$ is a (square) covariance matrix, and $\xi = \theta-{\theta_{\mathrm{fiducial}}}$ is the displacement in parameter space between the relevant parameter values and the fiducial values.
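A compact sketch of how Eqs. [(\[likelihood\])]{} and [(\[lnP\])]{} might be coded and sampled with [*emcee*]{} is given below. The model function `dm_e_model`, the fiducial vector `theta_fid` and the CBSH covariance `C_prior` are placeholders to be supplied; for parameters without a CBSH prior, such as $\langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle$, the corresponding rows and columns of the inverse covariance can simply be set to zero.

```python
# Minimal sketch of the log-posterior and its sampling with emcee.  Here
# theta = (Omega_m, H0, Omega_b h^2, <DM_HG,loc>) for the flat LCDM case;
# dm_e_model, theta_fid and C_prior are placeholders, while z_mock and dm_mock
# are the arrays drawn in the previous sketch.
import numpy as np
import emcee

def log_likelihood(theta, z, dm_e, sigma_igm, sigma_hg):
    dm_model = dm_e_model(theta, z)                      # <DM_E>(z) for parameters theta
    var = sigma_igm**2 + (sigma_hg / (1.0 + z))**2       # per-event variance, Eq. (likelihood)
    return -0.5 * np.sum((dm_e - dm_model)**2 / var)

def log_prior(theta, theta_fid, C_inv):
    xi = np.asarray(theta) - theta_fid                   # displacement from the fiducial point
    return -0.5 * xi @ C_inv @ xi                        # Gaussian CBSH prior, Eq. (lnP)

def log_posterior(theta, z, dm_e, sigma_igm, sigma_hg, theta_fid, C_inv):
    return (log_prior(theta, theta_fid, C_inv)
            + log_likelihood(theta, z, dm_e, sigma_igm, sigma_hg))

ndim, nwalkers = 4, 32
p0 = theta_fid + 1e-3 * np.random.randn(nwalkers, ndim)  # walkers start near the fiducial point
sampler = emcee.EnsembleSampler(
    nwalkers, ndim, log_posterior,
    args=(z_mock, dm_mock, 200.0, 50.0, theta_fid, np.linalg.inv(C_prior)))
sampler.run_mcmc(p0, 5000, progress=True)
chain = sampler.get_chain(discard=1000, flat=True)       # pass to GetDist for plots
```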
To avoid rescaling the CBSH covariance matrix to accommodate $\Omega_{\rm b}$, we set up our code to fit for [$\Omega_{\rm b} h^2$]{}, which is a primary parameter in the Planck analysis, and thus its covariance is provided.

![image](TTTEEE_multi_1d){width="\linewidth"}

  ------------------------------------------------ ---------------------------- ---------------------------- ---------------------------- ----------------------------
  Parameter (95% limits)                            CBSH                         CBSH+FRB1                    CBSH+FRB2                    CBSH+FRB3
  *Flat $\Lambda$CDM*
  $10~\Omega_{\rm m}$                               $3.09^{+0.12}_{-0.12}$       $3.07^{+0.11}_{-0.11}$       $3.10^{+0.12}_{-0.12}$       $3.11^{+0.12}_{-0.12}$
  $H_0$                                             $67.74^{+0.92}_{-0.90}$      $67.86^{+0.79}_{-0.80}$      $67.66^{+0.86}_{-0.87}$      $67.60^{+0.89}_{-0.89}$
  $10^2~\Omega_{\rm b} h^2$                         $2.230^{+0.027}_{-0.026}$    $2.235^{+0.021}_{-0.021}$    $2.227^{+0.025}_{-0.025}$    $2.224^{+0.026}_{-0.026}$
  $\langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle$                                 $215^{+30}_{-30}$            $189^{+60}_{-60}$            $161^{+90}_{-90}$
  *$\Lambda$CDM + $\Omega_k$*
  $10^3~\Omega_k$                                   $0.8^{+4.0}_{-3.9}$          $-0.1^{+2.6}_{-2.6}$         $0.9^{+3.2}_{-3.2}$          $1.7^{+3.5}_{-3.5}$
  $10~\Omega_{\rm m}$                               $3.08^{+0.12}_{-0.12}$       $3.08^{+0.12}_{-0.12}$       $3.08^{+0.12}_{-0.12}$       $3.08^{+0.12}_{-0.12}$
  $H_0$                                             $67.9^{+1.3}_{-1.2}$         $67.8^{+1.2}_{-1.2}$         $67.9^{+1.2}_{-1.2}$         $68.0^{+1.2}_{-1.2}$
  $10^2~\Omega_{\rm b} h^2$                         $2.228^{+0.032}_{-0.031}$    $2.235^{+0.020}_{-0.021}$    $2.226^{+0.026}_{-0.026}$    $2.220^{+0.029}_{-0.029}$
  $\langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle$                                 $201^{+40}_{-40}$            $196^{+60}_{-60}$            $177^{+90}_{-90}$
  *$w$CDM*
  $w$                                               $-1.019^{+0.075}_{-0.080}$   $-1.012^{+0.077}_{-0.078}$   $-1.020^{+0.077}_{-0.077}$   $-1.020^{+0.077}_{-0.077}$
  $10~\Omega_{\rm m}$                               $3.06^{+0.18}_{-0.18}$       $3.05^{+0.18}_{-0.17}$       $3.07^{+0.17}_{-0.17}$       $3.08^{+0.17}_{-0.17}$
  $H_0$                                             $68.1^{+2.1}_{-1.9}$         $68.1^{+2.0}_{-2.0}$         $68.1^{+1.9}_{-1.9}$         $68.0^{+1.9}_{-1.9}$
  $10^2~\Omega_{\rm b} h^2$                         $2.227^{+0.027}_{-0.029}$    $2.233^{+0.022}_{-0.022}$    $2.224^{+0.026}_{-0.026}$    $2.221^{+0.028}_{-0.028}$
  $\langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle$                                 $204^{+30}_{-30}$            $191^{+60}_{-60}$            $163^{+90}_{-90}$
  *CPL ($w_0$, $w_a$)*
  $w_0$                                             $-0.95^{+0.21}_{-0.20}$      $-0.98^{+0.21}_{-0.21}$      $-0.96^{+0.21}_{-0.21}$      $-0.96^{+0.21}_{-0.21}$
  $w_a$                                             $-0.25^{+0.72}_{-0.78}$      $-0.10^{+0.71}_{-0.71}$      $-0.23^{+0.76}_{-0.76}$      $-0.24^{+0.77}_{-0.76}$
  $10~\Omega_{\rm m}$                               $3.08^{+0.19}_{-0.19}$       $3.07^{+0.19}_{-0.18}$       $3.09^{+0.18}_{-0.18}$       $3.10^{+0.18}_{-0.18}$
  $H_0$                                             $68.0^{+2.0}_{-2.0}$         $68.0^{+2.0}_{-2.1}$         $67.9^{+2.0}_{-2.0}$         $67.9^{+2.0}_{-2.0}$
  $10^2~\Omega_{\rm b} h^2$                         $2.225^{+0.030}_{-0.029}$    $2.233^{+0.024}_{-0.023}$    $2.223^{+0.027}_{-0.027}$    $2.220^{+0.029}_{-0.029}$
  $\langle {{\rm DM}_{\mathrm{HG,loc}} }\rangle$                                 $205^{+30}_{-30}$            $195^{+60}_{-60}$            $167^{+90}_{-90}$
  ------------------------------------------------ ---------------------------- ---------------------------- ---------------------------- ----------------------------

  : Marginalised 95% confidence limits from CBSH alone and from CBSH combined with the mock FRB catalogues, for the flat [$\Lambda$CDM]{} model and its extensions. []{data-label="multi_table"}

Parameter Constraints Forecast
==============================

Here we discuss the FRB constraint forecasts for the flat [$\Lambda$CDM]{} model and some simple 1- and 2-parameter extensions. In all models, when fitting the most optimistic catalogue, FRB1, we find that $H_0$ and [$\Omega_{\rm b} h^2$]{} are unconstrained when no prior information about the parameters is included. This is unsurprising, since ${{\rm DM}_{\mathrm{IGM}} }\propto \Omega_{\rm b} H_0$. As a result, the other cosmological parameters are only very weakly constrained, if at all. In all models we find the measurement precision of $\Omega_{\rm m}$ is at the level of tens of percent, hardly good enough to be considered a tool for ‘precision cosmology’ at the sub-percent level.
We thus include the CBSH covariance matrix in our analysis in order to determine whether FRBs offer any additional constraining power. In figure \[multi\_1d\] we plot a compilation of the marginalised 1D posterior probability distributions for the cosmological parameters, obtained from a combination of CBSH constraints and the various mock FRB catalogues listed in Table \[cats\]. Black lines indicate the CBSH constraints used in the covariance matrix for calculating the priors, given by Eq. [(\[lnP\])]{}. The solid red, dot-dashed blue and dotted green lines indicate the constraints when CBSH is combined with the FRB1, FRB2 and FRB3 catalogues, respectively. The corresponding 2-$\sigma$ confidence intervals are listed in table \[multi\_table\]. We deal with the various cosmological models in turn below.

Flat [$\Lambda$CDM]{} {#base}
---------------------

Including the CBSH covariance matrix gives the combined constraints, CBSH+FRB, shown in the top row of figure \[multi\_1d\]. We find that the posteriors for $H_0$ and $\Omega_{\rm m}$ show only a minor improvement over their priors, as can be seen in the second and third columns. The most improved constraint is given by $\Omega_{\rm b} h^2=0.02235^{+0.00021}_{-0.00021}$, which corresponds to a $\sim 20\%$ reduction in the size of the 2-$\sigma$ confidence interval of the CBSH prior. The source of this improvement can be seen in figure \[base\_2Dcomp\_omegamomegabh2\], where we plot constraints in the $\Omega_{\rm m}$-$\Omega_{\rm b}h^2$ plane. Here we include the CBSH prior for $H_0$ with the FRB1 analysis, and plot the resulting constraint (grey) with the CBSH constraints (red). The degeneracy directions of the two ellipses are different, and their intersection gives the combined constraint (blue). Thus, given our current knowledge of the [$\Lambda$CDM]{} parameters and their covariance, DM observations will provide more information on $\Omega_{\rm b}h^2$ than on the other cosmological parameters.

Constraints derived from a combination of CBSH with the various FRB catalogues, represented by the coloured curves in the top row of figure \[multi\_1d\], illustrate the effect of varying the IGM inhomogeneity and sample size. Increasing the IGM inhomogeneity from ${\sigma_{\mathrm{IGM}}}=200$ [[pc]{}[cm]{}$^{-3}$]{} to ${\sigma_{\mathrm{IGM}}}=400$ [[pc]{}[cm]{}$^{-3}$]{} weakens the constraints considerably. The strongest constraint in this case becomes $\Omega_{\rm b}h^2=0.02227^{+0.00025}_{-0.00025}$, which corresponds to a $\sim 5\%$ reduction in the size of the 2-$\sigma$ interval of the CBSH constraint. Similarly, reducing the sample size to ${N_{\mathrm{FRB}}}=100$, while keeping the IGM inhomogeneity low at ${\sigma_{\mathrm{IGM}}}=200$ [[pc]{}[cm]{}$^{-3}$]{}, also weakens any improvement offered by FRBs. In this case we find $\Omega_{\rm b}h^2=0.02224^{+0.00026}_{-0.00026}$, which is a $\sim 2\%$ reduction in the size of the CBSH 2-$\sigma$ interval. Clearly one needs many FRB events in order to mitigate the effects of IGM inhomogeneity.

![ Flat $\Lambda$CDM parameter constraints in the $\Omega_{\rm m}$-$\Omega_{\rm b}h^2$ plane. Constraints obtained from the FRB1 catalogue with a CBSH prior on $H_0$ are shown in grey, the CBSH constraints are shown in red, and the combined constraints are shown in blue. Without including priors, the FRB constraints are very weak, and so have been omitted from this plot.
[]{data-label="base_2Dcomp_omegamomegabh2"}](TTTEEE_base_2d_omegamomegabh2){width="\linewidth"} Extensions Beyond Flat [$\Lambda$CDM]{} {#ext} --------------------------------------- #### Curvature When no priors are included, we find that $\Omega_k$ is unconstrained by FRB observations alone. Even when the CBSH covariance matrix for $(\Omega_{\rm m}, H_0, \Omega_{\rm b} h^2 )$ is included, the constraint on $\Omega_k$ remains very weak. However, with the full CBSH covariance matrix included we find $\Omega_k=-0.0001^{+0.0026}_{-0.0026}$ and $\Omega_{\rm b}h^2=0.02235^{+0.00020}_{-0.00021}$. This corresponds to a $\sim 35\%$ reduction in the size of the CBSH 2-$\sigma$ intervals for [$\Omega_{\rm b} h^2$]{}and $\Omega_k$. The source of this improvement is illustrated in figure \[base\_omegak\_2d\_omegabh2omegak\] where we plot the 2D marginalised constraints in the $\Omega_{\rm b}h^2$-$\Omega_k$ plane. The FRB1 constraints with CBSH covariance for $(\Omega_{\rm m},H_0\Omega_{\rm b}h^2)$ are shown in grey, and the CBSH constraints in red. Its clear that the grey contour very weakly constrains $\Omega_k$. However, it runs orthogonal to the CBSH constraint, and intersects it in a way that simultaneously improves both the $\Omega_k$ and $\Omega_{\rm b} h^2$ constraints when the data are combined, shown in blue. Posteriors for $\Omega_{\rm m}$ and $H_0$ are dominated by their priors, as can be seen in the second row of figure \[multi\_1d\]. Increasing the IGM variance to ${\sigma_{\mathrm{IGM}}}=400$ [[pc]{}[cm]{}$^{-3}$]{}degrades the constraints to $\Omega_{\rm b}h^2=0.02226^{+0.00026}_{-0.00026}$ and $\Omega_k=0.0009^{+0.0032}_{-0.0032}$, which corresponds to a $\sim 18\%$ reduction in the size of CBSH 2-$\sigma$ interval. Similarly, reducing the sample size to ${N_{\mathrm{FRB}}}=100$, we find $\Omega_{\rm b} h^2=0.02220^{+0.00029}_{-0.00029}$ and $\Omega_k=0.0017^{+0.0035}_{-0.0035}$, which corresponds to a $\sim10\%$ reduction in the size of the CBSH 2-$\sigma$ intervals. Thus, while FRB observations alone do not constrain $\Omega_k$, they add some constraining power when current parameter covariance is included. As in the flat [$\Lambda$CDM]{}case, many FRBs are needed to realise this improvement. ![Non-Flat [$\Lambda$CDM]{}marginalised 2-D posterior distribution in the $\Omega_{\rm b}h^2$-$\Omega_k$ plane. FRB constraints, when including CBSH covariance for $(\Omega_{\rm m},H_0,\Omega_{\rm b}h^2)$, are shown in grey, CBSH constraints are shown in red, and the combined constraints are shown in blue. Without including priors, the FRB constraints are very weak, and so have been omitted from this plot.[]{data-label="base_omegak_2d_omegabh2omegak"}](TTTEEE_base_omegak_2d_omegabh2omegak){width="\linewidth"} #### Testing Concordance When the CBSH covariance for $(H_0,\Omega_{\rm b}h^2)$ is included in the analysis, the resulting 2D marginalised constraint contours are, in all cases, larger than the CBSH ones. A crucial difference between this result and that of [@2014PhRvD..89j7303Z; @2014ApJ...788..189G], is that the previous authors assumed perfect knowledge of $H_0$ and $\Omega_{\rm b}$, and neglected any contribution from the host galaxy, and thus got a very narrow FRB contour in the $w$-$\Omega_{\rm m}$ plane, which they showed would intersect with, and improve, the current constraints. Alas, we find this is not the case if realistic prior knowledge about $H_0$ and $\Omega_{\rm b}h^2$ is included. 
In the third row of figure \[multi\_1d\] we plot the normalised 1D posterior distributions for the $w$CDM model parameters. For all catalogues listed in table \[cats\] we find that the posteriors are dominated by their priors, with the exception being $\Omega_{\rm b}h^2$. When using the most optimistic catalogue, we find $\Omega_{\rm b}h^2=0.02233^{+0.00022}_{-0.00022}$, which corresponds to a $\sim 20\%$ reduction in the size of the 2-$\sigma$ confidence interval of the CBSH prior. Increasing the IGM variance to ${\sigma_{\mathrm{IGM}}}=400$ [[pc]{}[cm]{}$^{-3}$]{} weakens this improvement to a few percent. There is no improvement in the [$\Omega_{\rm b} h^2$]{} constraint if the sample size is reduced to ${N_{\mathrm{FRB}}}=100$.

#### Dynamical Dark Energy

The normalised 1D posterior distributions can be seen in the bottom row of figure \[multi\_1d\]. With the CBSH covariance included in the FRB1 analysis, we find that all posteriors are dominated by the CBSH priors, with the exception being $\Omega_{\rm b}h^2 = 0.02233^{+0.00024}_{-0.00023}$, which corresponds to a $\sim 20\%$ reduction in the size of the CBSH 2-$\sigma$ interval. As in the $w$CDM model, increasing the IGM variance to ${\sigma_{\mathrm{IGM}}}=400$ [[pc]{}[cm]{}$^{-3}$]{} weakens this improvement to a few percent, and there is no improvement if the sample size is reduced to ${N_{\mathrm{FRB}}}=100$. Thus, even under our most optimistic assumptions, we find that FRBs provide no additional information about the nature of dark energy.

Synergy with 21cm BAO Experiments {#synergies}
=================================

Future 21cm Intensity Mapping (IM) experiments designed to measure BAO in the distribution of neutral hydrogen, such as HIRAX and CHIME, are expected to detect numerous FRBs during the course of their observing runs. Since these FRB detections will essentially come for free (although the redshifts will require dedicated observations), we aim to determine whether their inclusion in the data analysis might improve the constraint forecasts relative to 21cm IM BAO alone. Here we perform a simultaneous MCMC analysis of the FRB1 catalogue with the mock 21cm IM BAO measurement presented in [@witzemann]. The mock BAO data are generated for HIRAX, a near-future radio interferometer planned to be built in South Africa. It will consist of $1024$ $6$ m dishes, covering the frequency range $400$-$800$ MHz, corresponding to redshifts between $0.8$ and $2.5$. We assume an integration time of $1$ year, and a non-linear cutoff scale at $z=0$ of $k_\mathrm{NL,0} = 0.2$ Mpc${}^{-1}$, which evolves with redshift according to the results from [@2003MNRAS.341.1311S], $k_\mathrm{max} = k_\mathrm{NL,0}(1+z)^{2/(2+n_{\rm s})}$, with the spectral index $n_{\rm s}$. We use these specifications and a slightly adapted version of the publicly available code from [@2015ApJ...803...21B] to calculate covariance matrices $\mathbf{C}_\mathrm{BAO}$ for the Hubble rate, $H$, and the angular diameter distance, $D_{\rm A}$, in $N=20$ equally spaced frequency bins. We consider correlations between $H$ and $D_{\rm A}$ and assume different bins to be uncorrelated.
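As a quick check of the survey numbers quoted above, the short sketch below converts the HIRAX band to redshift using the 21cm rest frequency and evaluates the non-linear cutoff $k_{\rm max}(z)$; the spectral index $n_{\rm s}=0.965$ is an assumed fiducial value, not one taken from our chains.

```python
import numpy as np

NU_21CM = 1420.4  # MHz, rest-frame frequency of the 21cm line

def z_of_nu(nu_mhz):
    """Redshift of 21cm emission observed at frequency nu (MHz)."""
    return NU_21CM / nu_mhz - 1.0

def k_max(z, k_nl0=0.2, n_s=0.965):
    """Non-linear cutoff k_max = k_NL,0 (1+z)^(2/(2+n_s)), in Mpc^-1."""
    return k_nl0 * (1.0 + z) ** (2.0 / (2.0 + n_s))

# HIRAX band edges: 800 MHz -> z ~ 0.8, 400 MHz -> z ~ 2.5
for nu in (800.0, 600.0, 400.0):
    z = z_of_nu(nu)
    print(f"nu = {nu:5.0f} MHz -> z = {z:4.2f}, k_max = {k_max(z):4.2f} Mpc^-1")
```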
For the MCMC analysis, the likelihood of a given set of cosmological parameters is then calculated using these measurements together with the FRB1 catalogue, according to $$\begin{aligned}
\ln \mathcal{L}= \ln \mathcal{L}_{\rm BAO} + \ln \mathcal{L}_{\rm FRB} ~,\end{aligned}$$ where $$\begin{aligned}
\ln \mathcal{L}_{\rm BAO} = -\frac{1}{2}\sum_{j=1}^{N} (\nu_j - \mu_j)^\mathrm T \mathbf{C}_\mathrm{BAO}^{-1}(z_j) (\nu_j - \mu_j)\end{aligned}$$ and $\ln \mathcal{L}_{\rm FRB}$ is given by [(\[likelihood\])]{}. Here $\nu_j = (D_\mathrm A(z_j, \theta) , H(z_j, \theta))$ and $\mu_j = (D_\mathrm A(z_j, \theta_\mathrm{fid}) , H(z_j, \theta_\mathrm{fid}))$. All priors are flat and identical to the ones used in the FRB analysis.

We find that FRBs add little to the constraints coming from 21cm BAO alone; they only tend to remove some of the non-Gaussian tails in the BAO posteriors. However, they do add an additional parameter to the fitting process, [$\Omega_{\rm b} h^2$]{}, which turns out to be the most competitive constraint. We find $\Omega_{\rm b}h^2=0.02235^{+0.00032}_{-0.00032}$, which is comparable to the current CBSH constraint, and entirely independent of it. This suggests that, when combined with 21cm IM BAO measurements, FRBs may provide an intermediate redshift measure of the cosmological baryon density, independent of high redshift CMB constraints.

Conclusions {#conc}
===========

In this paper we have investigated how future observations of FRBs might help to constrain cosmological parameters. By constructing various mock catalogues of FRB observations, and using MCMC techniques, we have forecast constraints for parameters in the flat [$\Lambda$CDM]{} model, as well as [$\Lambda$CDM]{} with spatial curvature, flat $w$CDM and flat $w_0w_a$CDM. Since ${{\rm DM}_{\mathrm{IGM}} }\propto \Omega_{\rm b} H_0$, we find that $\Omega_{\rm b}h^2$ and $H_0$ are degenerate, and unconstrained by FRB observations alone. As a result, the other cosmological parameters are very weakly constrained, if at all. In all models considered here, the measurement precision on $\Omega_{\rm m}$ is a few tens of percent when using the most optimistic catalogue with no priors. This is an order of magnitude larger than current constraints coming from CBSH.

To determine whether FRBs will improve current constraints, we have included in our FRB analysis realistic priors in the form of the CBSH covariance matrix. With this we showed that [$\Omega_{\rm b} h^2$]{} and $\Omega_k$ are the only two parameters that are better constrained when FRBs are included. All dark energy equation of state parameters are poorly constrained by FRBs.

To investigate how sample size and IGM inhomogeneity affect the resulting constraints, we constructed a number of mock catalogues while varying ${N_{\mathrm{FRB}}}$ and ${\sigma_{\mathrm{IGM}}}$. We find that the inhomogeneity of the IGM poses a serious challenge to the ability of FRBs to improve current constraints. For all model parameterisations that we have considered here, we find that only the most optimistic FRB catalogue gives any appreciable improvement in the current CBSH constraints. For this catalogue we assumed a relatively low DM variance due to the IGM, with ${\sigma_{\mathrm{IGM}}}=200$ [[pc]{}[cm]{}$^{-3}$]{}, and a large number of events, with ${N_{\mathrm{FRB}}}=1000$. Crucially, these events require follow-up observations to acquire redshift information, which would require $\sim$100 days of dedicated optical spectroscopic follow-up.
Increasing the IGM inhomogeneity to ${\sigma_{\mathrm{IGM}}}=400$ [[pc]{}[cm]{}$^{-3}$]{}, or decreasing the sample size to ${N_{\mathrm{FRB}}}=100$, causes the resulting constraints to be dominated by their priors.

Future 21cm IM experiments designed to measure the BAO wiggles in the matter power spectrum will provide independent constraints on cosmological parameters at low/intermediate redshifts. While these observations do not constrain $\Omega_{\rm b}$, they will provide competitive constraints on $H_0$ and $\Omega_{\rm m}$ (within the [$\Lambda$CDM]{} model). Since these experiments are expected to detect many FRBs during the course of their observations, we have investigated combining the BAO constraints with FRB data. We find that this produces a constraint on [$\Omega_{\rm b} h^2$]{} comparable to the existing one coming from CBSH observations. Thus, this approach may provide a novel low/intermediate redshift probe of the cosmic baryon density, independent of high redshift CMB data.

The biggest promise of FRB observations seems to be in locating the missing baryons, rather than in testing concordance or measuring the dark energy equation of state. This may change should one be able to mitigate the effect of IGM variance and the DM contribution from the host galaxy.

There are however some caveats. We have assumed that ${f_{\mathrm{IGM}} }$ is not evolving with time, and that its value is known perfectly. We have assumed perfect knowledge of ${{\rm DM}_{\mathrm{MW}} }$, and that it can be reliably subtracted from ${{\rm DM}_{\mathrm{obs}} }$, which is not practical, as is known from pulsar observations. Also, we have assumed no error in the redshifts of the FRBs. Including these additional sources of uncertainty will weaken any constraints we have obtained here.

We thank Jonathan Sievers and Kavilan Moodley for helpful comments. A. Walters is funded by a grantholder bursary from the National Research Foundation of South Africa (NRF) Competitive Programme for Rated Researchers (Grant Number 91552). A. Weltman gratefully acknowledges financial support from the Department of Science and Technology and the South African Research Chairs Initiative of the NRF. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. B.M.G. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2015-05948, and of the Canada Research Chairs program. Y.Z.M. acknowledges the support of the NRF (no. 105925). A. Witzemann acknowledges support from the South African Square Kilometre Array Project and the NRF. Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors, and the NRF does not accept any liability in this regard.

[^1]: From version 2.0 of the FRB catalogue [@2016PASA...33...45P] found at [http://www.frbcat.org/]{}, accessed on 17 November 2017

[^2]: Planck 2015 covariance matrices and MCMC chains can be found at http://pla.esac.esa.int/pla/\#cosmology

[^3]: Package available at [https://github.com/cmbant/getdist]{}
---
address: |
    $^1$Laboratoire de Physique des Solides, URA 2 CNRS,\
    Univ. Paris-Sud, 91405 Orsay, France\
    $^2$LLB, CE Saclay, CEA-CNRS, 91191, Gif Sur Yvette, France\
    $^3$Laboratoire des Composés Non-Stoechiométriques,\
    Université Paris-Sud, 91405, Orsay, France
author:
- 'A. V. Mahajan$^1$[@byline], H. Alloul$^1$, G. Collin$^2$, and J. F. Marucco$^3$'
title: |
    $^{89}$Y NMR Probe of Zn Induced Local Magnetism in YBa$_2$(Cu$_{1-y}$Zn$_{y}$)$_3$O$_{6+x}$
---

Introduction
============

It is now experimentally well established that the CuO$_{2}$ planes are responsible for the magnetic and superconducting properties of the cuprates. However, the interconnection between these two properties is still an essential but unanswered question. Understanding the normal state of the cuprates is still a prerequisite for any theoretical approach to the microscopic origin of High Temperature Superconductivity. In the recent past, considerable interest has been aroused by the detection of a pseudo-gap in the spin excitation spectrum of the cuprates for underdoped materials (the word pseudo is prefixed because, although a strong decrease in the intensity of excited states is detected well above the superconducting transition temperature $T_{c}$, a real gap is only detected below $T_{c}$). The first indications of a pseudo-gap were provided by the microscopic NMR measurements of the susceptibility of the CuO$_{2}$ planes. The $^{89}$Y NMR shifts in YBCO$_{6+x}$ of Alloul [*et al.*]{} [@alloulohno] were found to decrease markedly with decreasing $T$. The large decrease of the static susceptibility was interpreted to be due to the opening of a pseudo-gap in the homogeneous ${\bf q}=0$ excitations of the system. In the underdoped ($x$ $<$ 1 for YBCO$_{6+x}$) high-$T_{c}$ cuprates, a similar decrease of the spin-lattice relaxation rate $^{63}$Cu $1/T_{1}T$ [@warren], which is dominated by the imaginary part of the susceptibility at the AF wave vector ${\bf q}=(\pi, \pi )$, was also indicative of a pseudo-gap in the spin excitations at this wave vector [@berthier]. The inelastic neutron scattering experiments which followed [@rossat; @ginsberg] clearly confirmed the existence of this pseudo spin-gap at $(\pi ,\pi )$, and allowed measurements of the frequency dependence of the excitations. The temperatures at which these two pseudo-gaps begin to open are found to be different, and it is not clear at present whether they signal different cross-overs between distinct states or a single cross-over phenomenon. The latter case would imply a wave-vector dependence of the pseudo-gap. Presently, the existence of a pseudo-gap for the underdoped high-$T_{c}$ superconductors has been detected by many techniques such as transport, photoemission, etc. While various explanations are proposed for the pseudo-gaps, it is believed that the essential physics of the normal state (and perhaps the superconducting state) of the cuprates might be linked to it.

In order to better characterise the properties of the cuprates it has appeared quite important to understand their modifications due to impurities or disorder. Atomic substitutions on the planar Cu site are naturally found to be the most detrimental to superconductivity, while modifications in the charge reservoir chains mainly yield changes in hole doping. For such studies the YBCO$_{6+x}$ system is particularly suitable, as the variation of impurity induced magnetism with hole doping can be studied by merely changing the oxygen content $x$ for a given impurity content.
In classical superconductors, $T_c$ is mainly affected by magnetic impurity substitutions. In the cuprates it has been shown that even a non-magnetic impurity like Zn ($3d^{10}$), which substitutes on the Cu site of the CuO$_2$ plane, strongly decreases the superconducting transition temperature $T_c$ (about 10.6 K/% Zn for $x$ = 1). It has also been anticipated [@fink] and then shown experimentally that although Zn itself is non-magnetic, it induces a modification of the magnetic properties of the correlated spin system of the CuO$_2$ planes [@alloul2]. Using $^{89}$Y NMR we have further shown, in the preliminary report of the present work [@mahajan], that local magnetic moments are induced on the $nn$ Cu of the Zn substituent in the CuO$_2$ plane. Two important results have been demonstrated:

i\) the $q = 0$ pseudo-gap was found to be unaffected by Zn even when $T_c$ is reduced to zero for YBCO$_{6.6}$,

ii\) the magnitude of the induced local moment is strongly dependent on the carrier concentration [@mendels1].

Since our reports, other experimental evidence from NMR in YBCO [@dupree1; @walstedtdupree], in YBa$_2$Cu$_4$O$_8$ (1248) [@williams1], in La$_2$CuO$_4$ [@ishida2], or from ESR in YBCO and 1248 [@janossy], has confirmed that the occurrence of local moments induced by non-magnetic impurities on the Cu sites is a general property of the cuprates. The local moments have been observed as well in macroscopic bulk susceptibility data [@mendels1; @cooper; @mendels; @jps1]. The Zn-induced modifications of the magnetic excitations, both in the superconducting and the normal state, have been studied by neutron scattering [@kakurai; @sidis]. Also, electrical transport [@ong; @mizuhashi] and thermal properties [@loram] of substituted high-$T_c$ cuprates have been investigated. However, some studies have concluded that the susceptibility near the Zn does not exhibit a Curie behaviour, at least for $x$ = 1, or that the AF correlations were destroyed in the vicinity of the Zn substituents [@janossy; @ishida1]. Also, some data have been interpreted as due to the total disappearance of the pseudo-gap in the vicinity of Zn. Finally, the dynamics of the local moment [@ishida2] appears to be quite different in La$_2$CuO$_4$:Al from that found in our results. To clarify the situation, we present in this article an extended report of our experimental data, and perform an exhaustive comparison with the literature.

We examine in detail the effect of Zn on $^{89}$Y NMR, in oriented powders of YBCO$_{6+x}$:Zn$_y$ with 0.5% $\leq$ $y$ $\leq$ 4 %, for $x$ = 0.64 and 1. NMR, being a local probe, provides useful information about the impurity induced short-range and long-range effects in the metal via an analysis of the lineshape, Knight shift, and linewidth. In section II, we present the experimental details regarding sample preparation and the procedures adopted for NMR measurements. In section III, a thorough description of the results of our NMR work allows us to highlight the differences in the effect of Zn doping in underdoped and overdoped YBCO. In section IV, the NMR shift, linewidth, and relaxation rate data are analysed considering Zn induced local moments on the neighbouring Cu. A contrasting comparison with the other experimental results introduced in this section is contained in the last subsection of section IV. In the conclusion section we summarize our overall view of the experimental situation, and discuss several theoretical works which have paid some attention to the induced magnetism in cuprates.
Experimental Details
====================

Samples of YBCO$_{6+x}$:Zn were prepared by conventional solid state reaction techniques as described elsewhere [@alloul3]. Large-grain (single crystal, size $>$ 50 microns) samples were made, which were finely ground before oxygenation (see Ref. [@laurence] for further details regarding characterisation). In order to prepare the samples with maximum oxygen content, oxygenation was done at $\sim$ 300 $^{\circ}$C for a long period ($>$ 10 days), which ensured homogeneity of the oxygen content. For preparing samples with a reduced oxygen content, the maximally oxidized samples were treated in vacuum in a thermobalance at variable temperatures, up to 450 $^{\circ}$C. The samples were quenched to room temperature when the equilibrium oxygen content was reached. In the case of YBCO$_{6+x}$ (without Zn), when the maximally oxidized samples were deoxidized to the point where the sample decomposed, the weight loss corresponded to $\delta$x (= x$_{max}$ - x$_{min}$) = 1.0. However, on addition of Zn, the actual maximum value of $\delta$x which could be reached progressively decreases and equals 0.92 for 4% Zn. Since for the Zn doped samples the ortho-tetra structural phase transition still takes place for an oxygen content of x$_{min}$ + 0.45 (as in YBCO$_{6+x}$ without Zn) [@mendels2], it appears that x$_{min}$ = 6.0 in the Zn doped samples, while x$_{max}$ linearly decreases from 7.0 for 0% Zn to 6.92 for 4% Zn. These samples with specific oxygen and zinc contents were then fixed in Stycast 1266 and cured overnight in a field of 7.5 Tesla in order to orient the grains with the $c$ axis aligned along the applied field direction.

NMR measurements were performed by standard pulsed NMR techniques. We observed the spin echoes after a $\pi/2-\pi$ sequence followed by a $3\pi/2-\pi$ sequence. A perfect inversion of the spin-echo in the latter relative to the first sequence ensured the correctness of the $\pi/2$ pulse length (about 13 $\mu$sec at room temperature). The $^{89}$Y shift was measured with respect to a standard YCl$_3$ solution. The $^{89}$Y spin-lattice relaxation time $T_1$ was determined using a $\pi/2-\pi$ sequence, with a repetition time $t_{rep}$. An exponential fit of the nuclear magnetization (obtained from a Fourier Transform, FT, of the time domain spin-echo signal) as a function of $t_{rep}$ allowed us to deduce $T_1$.

For YBCO$_{6.64}$:Zn, $nn$ resonances are seen (see Fig. 1) at low temperatures ($T$ $<$ 150 K). The relative intensity of these $nn$ resonances could be enhanced by repeating the pulse sequence at a fast rate ($t_{rep}$ $\sim$ 20 sec). Indeed, the $T_1$ of the outermost satellite ($\sim$ 10 sec) was found to be smaller than that of the main line ($\sim$ 100 sec). The fact that the latter has a reduced intensity in such an experimental condition thus allows us to fix accurately the position and the width of the $nn$ resonances. The mainline being the narrowest and the most intense, its position and width were easily determined with a long repetition time, allowing full recovery of the mainline signal. Using the positions and widths of the $nn$ resonances determined in the manner indicated above, the relative intensities of the various lines in the spectrum were determined (by fitting the lineshape to a sum of three gaussians), for $t_{rep}$ $>$ 5$T_1$ of the slowest recovering component, so that all the components had fully recovered.
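As an illustration of the $T_1$ determination described above, the sketch below fits a single-exponential recovery of the magnetization versus the repetition time; the recovery form and the data are synthetic and stand in for the Fourier-transform intensities that are actually analysed.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t_rep, m0, t1):
    """Single-exponential recovery of the nuclear magnetization vs t_rep."""
    return m0 * (1.0 - np.exp(-t_rep / t1))

# Synthetic data mimicking the outermost satellite (T1 ~ 10 s; mainline ~ 100 s)
rng = np.random.default_rng(1)
t_rep = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # seconds
m_obs = recovery(t_rep, 1.0, 10.0) + 0.02 * rng.standard_normal(t_rep.size)

popt, pcov = curve_fit(recovery, t_rep, m_obs, p0=[1.0, 5.0])
print(f"T1 = {popt[1]:.1f} +/- {np.sqrt(pcov[1, 1]):.1f} s")
```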
The $T_1$’s of the individual lines were determined from an exponential fit of their intensities (in the FT) with respect to the repetition time. Each $T_1$ measurement took about 15 hours.

Results
=======

In the following, we present the doping and temperature dependence of various NMR parameters in YBCO$_{6+x}$:Zn. We shall first report the existence of additional NMR lines detected in the underdoped samples. Their characteristics (shift, width, and intensity) enable us to associate them with $^{89}$Y nuclei near-neighbours of the Zn substituted on the CuO$_2$ planes (Section III-A-1). The results on the main resonance line, which corresponds to $^{89}$Y sites far from the substituted Zn, are reported next and compared to those in the pure system (Section III-A-2). Spin-lattice relaxation data on the $nn$ and main resonance lines are reported in Section III-B.

Resonance line shift and width
------------------------------

### Near neighbour resonances

We will argue here that the additional resonances detected in YBCO$_{6.64}$:Zn are not seen in YBCO$_7$:Zn, and that these additional resonances are intrinsic and a direct effect of Zn substitution. As seen in Fig. 1, the $nn$-resonance positions depend on the sample orientation with respect to the applied field. Furthermore (see Fig. 2), the relative intensity of the outer line increases with Zn content while its position is unchanged. We also see that YBCO$_7$:Zn spectra (Fig. 3 (a)) do not show additional lines in the temperature range of our measurements (80 K $<$ $T$ $<$ 350 K). While this is unambiguously evident for the outermost resonance, the absence of the middle resonance in the spectra of YBCO$_7$:Zn is perhaps not immediately obvious. By measuring the YBCO$_7$:Zn lineshape with a fast repetition rate (so that the middle resonance might be enhanced, relative to the mainline, due to its shorter $T_1$), we see (Fig. 3 (b)), in fact, that the lineshape of YBCO$_7$:Zn is unaltered by fast repetition (up to one-fourth of the $T_1$ of the mainline). If there is any change, it is in fact the high frequency tail that has a somewhat reduced relative intensity, indicating that the tail has a longer $T_1$. This is in keeping with our understanding that the upper tail in the lineshape of YBCO$_7$:Zn appears due to those regions of the sample which are not fully oxidized and hence have a longer $T_1$. In short, YBCO$_{6.64}$:Zn has additional $^{89}$Y resonances while YBCO$_7$:Zn does not. In view of the above-mentioned facts, the additional resonances are not due to spurious phases, since those should be present independent of the oxygen content (the deoxygenated samples are obtained merely by vacuum reduction of YBCO$_7$:Zn at low $T$ ($<$ 450 $^{\circ}$C)). In order to identify the origin of these lines we have therefore performed quantitative analyses of the spectral intensity.

Experimental lineshapes for YBCO$_{6.64}$:Zn, obtained with repetition times much longer than the spin-lattice relaxation times $T_1$, were fitted to a sum of three gaussians, where the line position and width of the two outer lines had been reliably fixed from the short repetition time spectra. The spectra along with the fits are shown in Fig. 4, while the variation of their intensity as a function of Zn content is shown in Fig. 5. It is of course quite natural to expect that the most affected outer line should be associated with the Y nuclei $nn$ to the Zn atoms.
In the dilute limit, the intensity from a purely statistical occupancy of a single neighbouring site of Y by Zn, for an in-plane concentration $c$, is $8c(1-c)^{7}$ for the 1$^{st}$ shell (curve A in Fig. 5). But, as Zn induces a significant shift of the 2$^{nd}$ $nn$ Y sites as well, we also need to ensure that the 2$^{nd}$ $nn$ to Y is unoccupied by Zn. The corresponding intensity for a purely random statistical occupancy would then be modified to $8c(1-c)^{15}$ (curve B in Fig. 5), which yields a smaller intensity for large Zn concentrations. We see that the intensity of the outer line is then consistent with that of the 1$^{st}$ $nn$ shell, [*assuming that all the Zn are substituted in the planes*]{} (in Fig. 5 we have taken $c = 1.5y$). As for the middle resonance, the expression for the intensity due to the occupancy of a single Y 2$^{nd}$ $nn$ site by Zn (with the 1$^{st}$ $nn$ unoccupied) is $16c(1-c)^{23}$ in the dilute limit, which is much smaller than the experimental intensity. If the 2$^{nd}$ and 3$^{rd}$ $nn$ are occupied by Zn with the 1$^{st}$ $nn$ unoccupied, the intensity would be $(8c(1-c)^{7} + 16c(1-c)^{15})(1-c)^{8}$ (curve C in Fig. 5). The assignment for the middle resonance is not so clear, but for dilute samples its intensity is consistent with that of total occupancy of the 2$^{nd}$ and 3$^{rd}$ $nn$, with the 1$^{st}$ $nn$ unoccupied by Zn.

The $T$-dependence of the $nn$ line-shifts shown in Fig. 6 is seen to be Curie-like with a negative hyperfine coupling. This Curie-like behaviour is usually observed for local moments and justifies the denomination that we introduced in [@alloul2], although the actual magnitude and exact origin of this local moment behaviour will only become clear hereafter. The linewidth of the $nn$ resonances is found to increase with decreasing $T$ (Fig. 7). The linewidth, which increases with Zn concentration, may be associated with the RKKY-like interaction between the Zn induced local moments. With an increase in the concentration of local moments, we might expect a frozen magnetic state (most probably a disordered spin-glass) below some temperature. An estimate for this is provided by a simple analysis in Section IV-E.

### Main resonance

The temperature dependence of the shifts, $\Delta K(T)$, of the mainline for YBCO$_{6.64}$:Zn is shown in Fig. 8. As reported before [@alloul2], the mainline shift does not significantly depend on the Zn content, and the average carrier density at long distance from the Zn is therefore nearly unaffected by Zn substitution, at least for the dilute concentrations of Zn for which the sample is still metallic. A slight offset with respect to pure YBCO$_{6.64}$ is however evident at higher Zn doping levels and might be due to a small increase in the carrier concentration. Similar slight offsets are detected for the $^{17}$O NMR shift of these compounds [@bobroff], but would rather correspond to a minute decrease in the hole content. In this latter case, a slightly incomplete oxygen loading of the starting samples might result from the fact that it has to be achieved in sealed vials, and not in a flowing oxygen atmosphere, to facilitate $^{17}$O enrichment. It should be mentioned that in Ref. [@alloul2], the mainline shift in an unoriented YBCO$_{6.64}$:Zn$_{4\%}$ sample had shown a slight upturn at low temperatures.
Since the satellite intensities constitute a significant fraction here and cannot be clearly distinguished from the mainline, the line-position obtained from the peak did not represent the true position of the main line. In the present work, the different components of the spectra have been analysed in the fits with different repetition times, so that the true position of the main resonance is deduced and shows no upturn.

The width of the mainline for YBCO$_{6.64}$:Zn$_{y}$ (see Fig. 9) has a $T$-dependence similar to that of pure YBCO$_{6.64}$, in that it initially decreases with decreasing temperature and then shows an increase below 120 K which is sample dependent. In the pure system, the linewidth can only be associated with a small macroscopic distribution of chain oxygen content (which we estimate at about $\pm$0.02 for most oxygen contents), which results in a distribution of shifts at high temperatures. The $T$-dependencies of the shifts around the YBCO$_{6.64}$ composition are such that while the shifts are measurably different at room temperature, their magnitudes become nearly the same at low $T$ [@alloulohno] and the NMR line becomes narrower. Therefore the width due to a distribution of oxygen content decreases at low $T$. The magnitude of the width increases with Zn doping, partly due to long-distance effects of the spin-polarisation from the Zn-induced local moments. The $T$-dependence of the $^{17}$O NMR width is found to be much larger and provides supplementary information which is analysed in detail by Bobroff [*et al.*]{} [@bobroff].

Turning to the fully oxygenated YBCO$_7$:Zn samples, we find that here again the mainline shift is nearly independent of Zn concentration (Fig. 10). However, due to the broadening of the line at low temperatures, we cannot unambiguously determine whether the maximum seen in the $T$ dependence of the $^{89}$Y NMR shift of pure YBCO$_7$, with T(K$_{max}$) only slightly larger than $T_c$ (and having a possible connection to the pseudo-gap), has shifted to lower temperatures or altogether disappeared. The Curie-like broadening (Fig. 11) in YBCO$_7$:Zn is indicative of a distribution of magnetic contributions to the line positions. This RKKY-like broadening must originate from a magnetic state which develops around the doped Zn.

For oxygen contents intermediate between O$_{7}$ and O$_{6.6}$, the $T$-dependence of the $^{89}$Y shift in YBCO$_{6+x}$:Zn is qualitatively similar to that in YBCO$_{6+x}$ (Fig. 12(a)). However, the sharp decrease in the shift that occurs around 100 K for the slightly oxygen depleted samples (YBCO$_{6.95}$ or so) is absent in the Zn doped samples, where the decrease in the shift is more gradual with $T$. This might again be due to the difficulty in defining accurately the oxygen content (and therefore the hole concentration) for the Zn substituted samples, especially for the large 4 % Zn concentration which has been systematically investigated. As for the outer resonance, we did not perform systematic investigations versus oxygen content. It is however clear in the spectra of Fig. 12(b) that the low-frequency tail which monitors the position of this outer resonance is progressively nearer to the central line when the oxygen content is increased. Further, this outer resonance even disappears for $x = 0.92$, which corresponds to the maximum oxygen content for 4 % Zn. The NMR shift of the outer resonance with respect to the mainline is therefore progressively reduced with increasing hole content.
Spin-lattice relaxation
-----------------------

Next, we present the results of spin-lattice relaxation measurements for YBCO$_{6.64}$:Zn. The data were obtained on the $nn$ lines and the mainline as detailed in the previous section. Representative spectra for various values of $t_{rep}$ are shown in Fig. 13. The resulting magnetisation recoveries for the three lines, for the data of Fig. 13, are shown in Fig. 14. Exponential fits have been found to apply in all cases, as illustrated in Fig. 14. Taking such data is obviously not straightforward. A high enough signal-to-noise ratio is required, as seen for our spectra displayed in Fig. 13. Other publications on $^{89}$Y NMR in YBa$_{2}$Cu$_{4}$O$_{8}$:Zn [@williams1; @williams2] are completely bereft of $T_{1}$ data, which corroborates the difficulty in obtaining good data.

The relaxation rate of the additional lines (other than the mainline) is seen (Fig. 15) to be strongly enhanced, resulting from local moment fluctuations, as is discussed in section IV-A2. The outermost satellite is the most affected, which indicates that it must result from having Zn as its 1$^{st}$ $nn$. The mainline $T_{1}$ is nearly unaffected, which shows that, for dilute concentrations of Zn, the planar dynamic susceptibility far from Zn is unaffected, in accordance with the NMR shift data.

In YBCO$_7$, Dupree [*et al.*]{} [@dupree1] measured the effect of Zn doping on $T_1$ at room temperature. They found that the $^{89}$Y spin-lattice relaxation rate was strongly enhanced on Zn substitution. However, our data were in complete disagreement with theirs. We therefore repeated measurements on various batches of samples and at various temperatures. In all cases we found that the nuclear magnetisation recovery fits well to a single exponential (Fig. 16) and that the resulting $T_1$ and its $T$-dependence are not significantly different from those of pure YBCO$_7$, as can be seen in Fig. 17. We must therefore conclude that limited accuracy was responsible for the observation made by Dupree [*et al.*]{} [@dupree1]. As there are no discernible resonances in addition to the main line in these experiments on YBCO$_7$:Zn, the implication is a much weaker induced moment in YBCO$_7$, compared to YBCO$_{6.64}$:Zn, in agreement with our bulk susceptibility data [@mendels1; @mendels].

Analysis of the experimental results
====================================

Local moments in YBCO$_{6.64}$:Zn
---------------------------------

### $nn$ NMR shifts

We recall here that the distinct, well defined resonances that we have observed in YBCO$_{6.64}$ correspond to Y near neighbour sites of the substituted Zn. The Curie-like $T$-dependence of the position of the first near-neighbour line, and the shortening of its $T_1$ at low $T$, are striking experimental evidence of the occurrence of Zn induced local moments. The location, spatial extent and dynamics of these moments in YBCO$_{6.64}$ will be discussed first. The occurrence of local moments for the slightly overdoped composition YBCO$_{7}$ is also established through the induced long-distance perturbation of the host spin magnetization. The Zn induced local moments are quite clearly located in the vicinity of the Zn, and dominantly on the four nearest neighbour O or Cu orbitals.
In what follows, we shall perform extensive comparisons of the $^{89}$Y NMR shift with the Zn induced Curie contribution to the spin susceptibility (expressed per mole Zn) [@mendels1; @mendels], $$\chi_c = \frac{C_M}{T} = \frac{N_A p^2_{eff}}{3k_BT}$$ Here $N_A$ is the Avogadro number and $p_{eff}$ is the effective moment. These comparisons will allow us first to rule out a localisation of the moments on the O orbitals, and then to demonstrate that the local moment is distributed on the Cu orbitals. Furthermore, assuming that the transferred hyperfine couplings are not modified by Zn substitution, we will show that our analysis is consistent with a locally AF state extended over a few lattice sites.

A local moment could also be present around Zn if a hole were trapped on the near-neighbour oxygen orbitals. Two quite different physical situations would occur, depending on whether the local moment is located on the $p_{\pi}$ or $p_{\sigma}$ orbitals. Since the $p_{\pi}$ are directly admixed with the Y $s$ orbitals, a strong positive hyperfine coupling would result, contrary to our observation of a negative Curie contribution to the shift. Therefore, the present experiment implies that this shift component can only be induced through the oxygen $p_{\sigma}$ orbitals. In undoped YBCO$_{6+x}$, the Y NMR shift arises from a coupling of the Y nuclear spin with the small fraction of holes on the O(2$p_{\sigma}$) orbitals due to their [*covalency*]{} with the Cu(3$d_{x^2-y^2}$) holes, while the spin-polarization of the doped holes themselves is negligible [@alloulohno]. This has been deduced from the fact that the covalent admixture of the O(2$p_{\sigma}$) orbital with the Cu(3$d_{x^2-y^2}$) orbital is about 10 % [@hybrid], which implies that the hyperfine coupling to the oxygen holes should be 10 times larger than its coupling to the Cu(3$d_{x^2-y^2}$) holes.

Let us first consider the possibility that the Curie contribution comes from holes localised on the four O($2p_{\sigma}$) orbitals near Zn. The 1$^{st}$ $nn$ Y site has six O $nn$ which we assume are nearly unaffected by Zn and two O $nn$ which would exhibit Zn induced Curie magnetism. The net $^{89}$Y shift would be written as $$\Delta K_{1}^{\alpha}=(6/8)K_{s}^{\alpha}+K_{c}^{\alpha}+\delta_{1}^{\alpha}$$ where the index $\alpha$ refers to a principal direction, $K_{s}^{\alpha}$ is the spin shift of the mainline, $K_{c}^{\alpha}=2C_{s}^{\alpha}/T$ is the Curie contribution to the spin shift due to its two 1$^{st}$ $nn$ O with a moment, and $\delta_{1}^{\alpha}$ is the chemical shift. A least-squares fit to the data in Fig. 18(a) allows us to extract the two unknown parameters $C_{s}^{\alpha}$ and $\delta_{1}^{\alpha}$. The chemical shift values thus obtained are $\delta_{1}^c$ $(\delta_{1}^{ab})$ = 144 (163) $\pm$ 10 ppm. The values of $C_{s}^{\alpha}$ are found to be -14000 (-12300) $\pm$ 500 ppm K for H$\mid\mid$c (H$\mid\mid$ab). Let us then compare these shifts with the macroscopic susceptibility $\chi_c$, with $\mu_B K_{c} = 2H_{hf}\chi_c/4$, if the moment is distributed on the four O(2$p_{\sigma}$) near neighbour orbitals to the Zn. Using $C_M$ = 9.2 $\times$ 10$^{-2}$ emu K/mole Zn [@mendels1; @mendels] and the Curie term in the shift deduced above, we get H$_{hf}$ = -1.6 kG. This is of the order of the hyperfine coupling expected with Cu and nearly 10 times smaller than that expected with oxygen.
Moreover, we point out that a 2$^{nd}$ $nn$ Y to Zn would not be coupled to the moment on the oxygen (a local moment on oxygen is unlikely to be spread over more than 4 sites since it would presumably arise from hole localisation). Hence, a second line in addition to the main line should not be observed, contrary to the data from our experiment. One could imagine that one has both a localised hole and weakly affected $nn$ Cu. But this would require a large susceptibility on the Cu to give the strong shift of the 2$^{nd}$ $nn$ line compared to the mainline. This eliminates the oxygen $p_{\sigma}$ as a possible site for local moments.

If, however, the satellite shift is modelled as coming from a hyperfine coupling to the local moments residing on the Cu $d_{x^{2}-y^{2}}$ orbitals, the relevant equation for the 1$^{st}$ $nn$ shift in our model is $$\Delta K_{1}^{\alpha }=(5/8)K_{s}^{\alpha }+K_{c}^{\alpha }+\delta _{1}^{\alpha }$$ From a fit of the data to this equation, the chemical shift values obtained (see Fig. 18(b)), $\delta_{1}^{c}$ $(\delta_{1}^{ab})$ = 100 (140) $\pm$ 10 ppm, are only slightly different from $\delta_{1}^{c}$ $(\delta_{1}^{ab})$ = 165 (150) ppm found in the pure material [@alloul2]. The values of $C_{s}^{\alpha}$ are found to be -13100 (-11600) $\pm$ 500 ppm K. This implies a hyperfine field $H_{hf}$ $\approx$ -3.2 kG/Cu, which is slightly larger than that for the pure material ($\approx$ -2 kG). Such a modification of $H_{hf}$ could be attributed to a corresponding change of the Cu(3$d_{x^{2}-y^{2}}$)-O(2$p_{\sigma }$) hybridisation due to a displacement of the 1$^{st}$ $nn$ oxygen to Zn.

Alternatively, if the hyperfine coupling stays unchanged, the actual susceptibility on the 1$^{st}$ $nn$ Cu, $\chi (1)$, is larger than $\chi _{c}/4$. This would imply that further copper ions would have a magnetization anti-parallel to the applied field, which might be expected if the local moment develops as an AF correlated cloud of copper lattice sites. Considering this possibility, the Cu 2$^{nd}$ $nn$ to Zn will bear a small negative susceptibility $\chi (2)$. Fig. 19 illustrates schematically the location and the orientation of the Zn induced local moments. In such a model, the shifts of the 1$^{st}$ and the 2$^{nd}$ $nn$ Y will be as follows: $$\Delta K_{1}^{\alpha }=(4/8)K_{s}^{\alpha }+2K_{1}^{\alpha }+K_{2}^{\alpha }+\delta _{1}^{\alpha }$$ $$\Delta K_{2}^{\alpha }=(6/8)K_{s}^{\alpha }+K_{1}^{\alpha }+K_{2}^{\alpha }+\delta _{2}^{\alpha }$$ Here we can differentiate the hyperfine fields for the first and second $nn$ using $\mu _{B}K_{1}^{\alpha }=H_{hf}(1)\chi (1)$ and $\mu _{B}K_{2}^{\alpha }=H_{hf}(2)\chi (2)$. We can then fit $\Delta K_{1}^{\alpha }-(4/8)K_{s}^{\alpha }$ to a Curie term in addition to a constant, and likewise for $\Delta K_{2}^{\alpha }-(6/8)K_{s}^{\alpha }$. A fit of the observed shifts of the 1$^{st}$ and the 2$^{nd}$ $nn$ Y to these equations yields the following values for the corresponding Curie terms: $C_{1}^{c}$ = -14530 ppm K, $C_{2}^{c}$ = +4630 ppm K. The corresponding chemical shifts for the 1$^{st}$ and the 2$^{nd}$ $nn$ Y are found to be 75 and 144 ppm, respectively. Assuming the same hyperfine coupling for the 1$^{st}$ $nn$ and 2$^{nd}$ $nn$ Y, $\chi (2)=-\chi (1)/3$ and therefore the macroscopic susceptibility $\chi _{c}=8\chi (1)/3$. Using $\mu_{B}K_{1}^{\alpha}=H_{hf}(1)\chi (1)$, we get a hyperfine field of about -2.35 kG, which is closer to the value of -2 kG in the undoped compound.
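The last step can be checked numerically. The short sketch below reproduces $\chi(2)/\chi(1)\simeq -1/3$ and $H_{hf}(1)\approx -2.35$ kG from the fitted Curie terms and the macroscopic Curie constant, under our reading of the above relations: $K_i T = C_i$, equal hyperfine fields for both shells, $\chi_c = 4\chi(1)+4\chi(2)$ (four 1$^{st}$ $nn$ and four 2$^{nd}$ $nn$ Cu), and per-mole susceptibilities in cgs units.

```python
import numpy as np

N_A = 6.022e23    # mol^-1
MU_B = 9.274e-21  # erg/G (cgs)

C1 = -14530e-6    # K  (Curie coefficient of K_1, converted from ppm K)
C2 = +4630e-6     # K  (Curie coefficient of K_2)
C_M = 9.2e-2      # emu K / mole Zn (macroscopic Curie constant)

ratio = C2 / C1                    # chi(2)/chi(1) for equal hyperfine fields
chi1_T = 3.0 * C_M / 8.0           # chi(1)*T from chi_c = 8*chi(1)/3
H_hf1 = N_A * MU_B * C1 / chi1_T   # from mu_B K_1 = H_hf(1) chi(1), per mole

print(f"chi(2)/chi(1) = {ratio:.2f}")     # ~ -0.32, i.e. close to -1/3
print(f"H_hf(1) = {H_hf1 / 1e3:.2f} kG")  # ~ -2.35 kG
```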
Although this picture is then compatible with the experimental results, the accuracy of the data is not sufficient to ascertain its validity. The fact that the intensity of the middle resonance cannot be assigned solely to the 2$^{nd}$ $nn$ of Zn implies that further near neighbour sites of the Zn should be taken into account. More accuracy would also be required to take into account the 3$^{rd}$ $nn$ and try to estimate the size of the AF correlated region around the Zn site.

### Spin-lattice relaxation

We next turn to a discussion of the spin-lattice relaxation rate, which is expressed as $$\frac{1}{T_1T} \propto \Sigma _{{\bf q}} A^2({\bf q}) \frac{\chi^{\prime \prime}({\bf q}, \omega)}{\omega}$$ where $A({\bf q})$ is the coupling to the magnetic fluctuations at wave vector ${\bf q}$ [@moriya]. The O and Y nuclei are at symmetry positions with respect to Cu so that $A({\bf q}_{AF})=0$, and the fluctuations at ${\bf q}_{AF}$ are filtered at these two sites. Consequently, in YBCO$_{6+x}$, the $T$-dependence of $(T_1T)^{-1}$ for Y and O is different from that of Cu, which is dominated by the fluctuations at ${\bf q}_{AF}$ [@ginsberg]. On adding Zn, the symmetry around the 1$^{st}$ $nn$ Y is broken and this Y site then becomes sensitive to the magnetic fluctuations on the neighbouring copper ions (either intrinsic to the pure compound or due to the local moment). Similarly, the magnetic fluctuations are no longer symmetric on the 2$^{nd}$ $nn$, so that the enhanced relaxation rate at this site is also connected to local moment fluctuations.

The sharp increase of $(T_{1}T)^{-1}$ at low $T$ on the Y $nn$ nuclei (much faster than the corresponding variation on the $^{63}$Cu nuclei in the pure compounds) is a direct proof that the local moment fluctuations are [*not*]{} those of the Cu hole spins of the pure compound. The very existence of a Curie contribution to the spin susceptibility indeed clearly points out that the fluctuations of the Cu hole spins in the vicinity of the Zn are much slower than those of the pure host. The present data for $(T_{1}T)^{-1}$ on the near-neighbour nuclei are then good proof of the slow fluctuations of the local moment.

In the case of local moments in noble metal hosts, the spin-lattice relaxation of host nuclei near the local moment is totally dominated at low $T$ by the fluctuations of the local moment (the usual Korringa process via conduction electrons is somewhat smaller) [@alloul1]. The situation here is quite similar for the Y 1$^{st}$ $nn$ of the Zn. We can therefore consider that this nuclear spin is coupled to the 2 $nn$ coppers, which bear susceptibilities $\chi _{c}({\bf q},\omega )$, if we neglect the contribution to $T_{1}$ of the uncompensated Cu spin 2$^{nd}$ $nn$ to Zn (Fig. 19). The relaxation rate at low $T$ is then given by $$\frac{1}{T_{1}}=\frac{2k_{B}T}{\hbar ^{2}}\left(\frac{\gamma _{n}}{\gamma _{e}}\right)^{2}H_{hf}(1)^{2}\,\Sigma \left(\frac{\chi _{c}^{\prime \prime }({\bf q},\omega )}{\omega}\right)$$ where $\gamma _{n}/\gamma _{e}$ is the ratio of the nuclear and the electronic gyromagnetic ratios and $H_{hf}(1)$ is the hyperfine coupling. In the limit $\omega \rightarrow 0$, the summation is given by $\chi _{c}(T)\tau /2\pi$, where $\chi _{c}(T)$ is the local moment susceptibility and $\tau$ is the relaxation time of the local moment spin.
The fluctuation rate of the local moment spin is usually made up of two contributions, $$\frac{1}{\tau }=\frac{1}{\tau _{ex}}+\frac{1}{\tau _{int}}$$ where the first term corresponds to the relaxation of the single Zn impurity local moment to the host spin bath, for instance through the exchange with the conduction electron spins, and the second would correspond to fluctuations due to the coupling between the Zn induced local moments, which depends on the Zn concentration. For instance, for dilute local moments in noble metal hosts, $$\frac{1}{\tau _{ex}}=\left(\frac{4\pi }{\hbar }\right)(k_{B}T)(J_{ex}\rho (\epsilon _{F}))^{2}$$ if a Korringa relation holds ($J_{ex}$ is the coupling of the local moments to the band). In that case, the second term is $\tau _{int}^{-1}=\omega _{int}/2\pi$ with $\omega _{int}^{2}=8J_{int}^{2}zS(S+1)/3\hbar ^{2}$, where $z\propto c$ is the number of nearest neighbour spins and $J_{int}$ is the conduction electron mediated coupling between impurity spins.

In our case, the $T_{1}$ values for the near neighbour resonances did not depend markedly on the Zn content, so that the single impurity induced relaxation $1/\tau _{ex}$ dominates the results. Further, the local moment spin susceptibility is Curie-like down to low temperatures. A small Curie-Weiss correction $\chi =C/(T+\theta )$ with $\theta \simeq 4$ K is observed for YBCO$_{6.64}$:Zn$_{4\%}$ [@mendels1]. This is in agreement with the observed spin freezing temperature of about 3 K in the sample [@mendels1]. All these results therefore allow us to conclude consistently that the spin-lattice relaxation is dominated by the spin fluctuations of the isolated Zn induced local moment. This should then be written as $$\frac{1}{T_{1}}=\frac{2k_{B}T}{\hbar ^{2}}\left(\frac{\gamma _{n}}{\gamma _{e}}\right)^{2}H_{hf}(1)^{2}\frac{\chi _{c}(T)\tau _{ex}}{2\pi }$$ Since $\chi _{c}T$ is constant, $1/T_{1}$ is proportional to $\tau _{ex}$. In the conventional metallic case, where $\rho (\epsilon _{F})$ is independent of the temperature, the spin-lattice relaxation rate $1/T_{1}$ of the host nuclei (near impurities) is observed to follow a Curie-like law $1/T_{1}$ $\propto$ $C/T$ [@alloul1]. The present case is clearly more complicated since the host metal itself is strongly correlated, which results in the pseudo-gap of the static spin susceptibility and in the $1/(T_{1}T)$ behaviour for the $^{63}$Cu. As the local moment positions are commensurate with the Cu hole spin system, we might anticipate that a similar anomaly might occur for $1/(\tau _{ex}T)$, which scales with $T_{1}/T$. We have therefore plotted in Fig. 20 this quantity for the three data points for Y near neighbours to Zn. Although the data are clearly insufficient, they suggest a maximum for $1/(\tau _{ex}T)$, quite analogous to that seen for the $^{63}$Cu $T_{1}$.

Comparison with other experiments
---------------------------------

Following our early report [@mahajan], some related experiments have been performed on the impurity substituted cuprates. In various cases, the authors have drawn conclusions which are not in complete agreement with our results and sometimes disagree totally. These contradictions are therefore considered in the following.

### Gd ESR

Janossy [*et al.*]{} [@janossy] have performed Gd$^{3+}$ ESR experiments in 1% Gd substituted YBCO$_{6+x}$, which are in principle quite similar to our experiments. Indeed, the Gd electronic moment and the $^{89}$Y nuclear spin are coupled to the copper hole spins by similar transferred couplings.
They find that the $g$-shift of the Gd ESR has the same $T$ dependence as the $^{89}$Y NMR shift. They have therefore used the Gd probe in Y$_{0.99}$Gd$_{0.01}$Ba$_2$Cu$_3$O$_{6+x}$:Zn samples as well. The Gd spectrum should result from a simple scaling with the $^{89}$Y NMR spectrum through the ratio of the hyperfine couplings, as the relative positions of the main and satellite lines scale as the ratio of $(\nu H_{hf})$, where $\nu$ is the operating frequency and $$(\delta \nu/\nu) = H_{hf}\chi$$ The fact that they did not detect any $nn$ resonances might cast some doubt on the meaning of our results. However, we insist here that one should also consider the scaling of the relaxation rates to arrive at reliable conclusions. The relaxation rates scale as $$1/T_1 \propto (\gamma H_{hf})^2$$ and therefore contribute quite differently to the broadening of the spectra. In the case of NMR, the linewidth $\Delta \nu$ is governed by the susceptibility distribution, which scales as $\delta \nu$, while in ESR, the $T_1$ process is so efficient that it contributes significantly to (and even dominates) the ESR linewidth. Using our detailed data for $^{89}$Y $nn$ NMR, we can easily calculate the expected contributions for the Gd $nn$ ESR, through Eqs. (11)-(12). Here we shall use [@janossy] $^{Gd}H_{hf}$ = 10 $^{Y}H_{hf}$ and $\gamma_e$ = 1.6 $\times$ 10$^{4}$ $^{89}\gamma$. Using the operating frequencies, $\nu$ = 15.64 MHz for $^{89}$Y NMR and $\nu$ = 245 GHz for Gd ESR, we can deduce the satellite separation from the main line and the static and $T_1$ contributions to the linewidth at 100 K. These are reported in Table 1. We find then that the expected width for the Gd ESR $nn$ line is at least four times larger than its shift, which explains why the satellite resonances are indeed quite difficult, if not impossible, to observe.

The other issue is the suggestion by Janossy [*et al.*]{} that the susceptibility corresponding to the pure YBCO$_7$ is restored at the Cu neighbouring Zn. Since we observe a Curie-like increase of the $nn$ line shift in Zn doped samples, increasing to values well above the YBCO$_7$ shift, the implication is that the hole content near the Zn dopants has not been restored to that of undoped YBCO$_{7}$.

### NMR in Al doped La$_{2-x}$Sr$_x$CuO$_4$

Recently, Ishida [*et al.*]{} [@ishida2] have performed NMR experiments on La$_{2-x}$Sr$_{x}$CuO$_{4}$ (LASCO), in which non-magnetic Al is substituted on the Cu site of the CuO$_{2}$ planes. Although they were unable to detect the $nn$ nuclei of Al, they could directly detect the $^{27}$Al NMR signal. They did find that the shift of the $^{27}$Al NMR has a Curie component. Since Al itself does not bear a local moment, this observation can only result from a local moment which resides either on the $nn$ oxygen or copper orbitals, which are coupled to the $^{27}$Al nuclear spin via transferred hyperfine couplings. This observation does not enable one to decide the location of the local moment. However, by analogy with our results, the authors have inferred that it is located on the $nn$ Cu orbitals. Their data are important as they confirm that a non-magnetic substituent induces a local moment in a cuprate different from YBCO$_{6+x}$. Ishida [*et al.*]{} [@ishida2] measured the $^{27}$Al NMR shift and $T_{1}$, which can be compared with the corresponding data on the $nn$ $^{89}$Y NMR in YBCO$_{6+x}$. As we stress below, several qualitative differences in the experimental results are evident.
First, they analysed the $T$-dependence of the shift and susceptibility in Al doped LASCO with a Curie-Weiss law with a sizeable Weiss temperature ($\theta \approx 50$ K). It is not so clear whether this high value of $\theta$ is also suggested by the $^{27}$Al NMR results, since it might be influenced by the reference taken for the $^{27}$Al chemical shift. In any case, Ishida [*et al.*]{} do not demonstrate whether this large $\theta$ corresponds to a genuine single impurity effect or if it varies with Al content, thereby revealing a large coupling between the local moments. In underdoped YBCO, we never found any indication for such a large deviation from the Curie law, neither from NMR nor from susceptibility data. A negative value $\theta \approx -30$ K has been found by Monod [*et al.*]{} [@monod] in YBCO:Zn by susceptibility measurements. However, recent data on samples with low content of impurity phases, by Mendels [*et al.*]{} [@mendels1], establish that a significant estimate of $\theta$ requires a correct accounting of the susceptibility contribution of the pure compound. They deduced $\theta \simeq 4$ K for YBCO$_{6.64}$:Zn$_{4\%}$.

In order to analyse the $^{27}$Al relaxation rate data, Ishida [*et al.*]{} use the simple local moment formulation of Eqs. (7)-(10) (with different notations). They provided a quantitative analysis of their data in which the value of $J_{int}$, as deduced from their value of $\tau_{int}$ (a microscopic probe), is fully consistent with their value of $\theta$ (from bulk susceptibility, a macroscopic probe). This analysis would then appear to be consistent with and support a simple local moment picture. We stress here that not only do their results differ [*qualitatively*]{} from ours, but also that an alternative interpretation is possible [@comment]. In their analysis, they deduce a relaxation rate 1/$\tau$ which varies only slightly with $T$. This would be expected if it were dominated by the $\tau_{int}$ term, which is expected to be $T$-independent in the classical local moment picture. Our results, on the other hand, have exhibited a totally opposite trend, with the $1/\tau_{ex}$ term dominating. We further find that in their $T_1$ analysis, Ishida [*et al.*]{} have taken a local moment Curie susceptibility while their own $^{27}$Al data yielded a large Curie-Weiss temperature ($\theta$ = 50 K). Introducing the actual 1/$(T + 50)$ dependence of $\chi$ in Eqs. (7)-(10) leads us to deduce $1/(\tau_{ex}T)$ = 2 $\times$ 10$^{10}$ (sec K)$^{-1}$, which corresponds to a $T$ dependent contribution to $1/\tau$ which is one order of magnitude larger than their own result. This would yield a rather large value $J_{ex}$ = 0.17 eV, which contradicts their expectations. However, such an analysis yields only a modest modification of the $T$-independent contribution, which becomes $1/\tau _{int}$ = 5.6 $\times$ 10$^{12}$ sec$^{-1}$, only a factor of two smaller than their result, which still corresponds to a sizeable value of $\theta$, the Weiss temperature.

Of course, a significant difference between the two systems (YBCO$_{6.64}$ and LASCO) is their doping range. While La$_{1.85}$Sr$_{0.15}$CuO$_{4}$ should be considered close to an optimally doped material, in YBCO$_{6.64}$ we are clearly in the underdoped regime, which usually displays qualitatively different properties. Unfortunately, we do not have comparably complete results on YBCO$_{7}$:Zn, which would allow a direct comparison between the two systems.
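To make the bookkeeping of Eqs. (8)-(9) explicit, the sketch below decomposes a $1/\tau(T)$ curve into a Korringa-like term linear in $T$ (slope $=1/(\tau_{ex}T)$) and a $T$-independent inter-impurity term ($1/\tau_{int}$), and converts the slope into $J_{ex}\rho(\epsilon_F)$; the input rates are synthetic and only chosen to be of the same order as the numbers discussed above.

```python
import numpy as np

HBAR = 1.0546e-34  # J s
K_B = 1.3807e-23   # J/K

# Synthetic local-moment fluctuation rates, 1/tau(T) = a*T + b
T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # K
a_true, b_true = 2.0e10, 5.0e12                     # (s K)^-1, s^-1
inv_tau = a_true * T + b_true

# Straight-line decomposition (Eq. 8): slope -> 1/(tau_ex T), intercept -> 1/tau_int
a_fit, b_fit = np.polyfit(T, inv_tau, 1)

# Korringa form (Eq. 9): 1/tau_ex = (4*pi/hbar) * k_B * T * (J_ex * rho)^2
J_rho = np.sqrt(a_fit * HBAR / (4.0 * np.pi * K_B))

print(f"1/(tau_ex T) = {a_fit:.2e} (s K)^-1,  1/tau_int = {b_fit:.2e} s^-1")
print(f"J_ex * rho(eps_F) = {J_rho:.3f} (dimensionless)")
```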
The case of YBCO$_7$:Zn ----------------------- We have then clearly demonstrated that local moments are induced on Zn substitution in YBCO$_{6.64}$. We have not studied samples with oxygen contents other than 6.64 and 7 in great detail. However, we have seen, from measurements on unoriented samples (see Fig. 12(b)), that the low-frequency tail of the spectra which is associated with the outer satellite resonance is less shifted from the main resonance position for increasing oxygen content. We can then conclude that the local moment value decreases gradually with increasing $x$. On reaching YBCO$_7$, we find that the $nn$ lines have practically merged with the main line. This is confirmed from magnetisation data on impurity-phase free samples by Mendels [*et al.*]{} [@mendels1; @mendels], which show that the Curie constant for YBCO$_7$:Zn is about one-sixth that of YBCO$_{6.64}$:Zn. Assuming the same hyperfine couplings, the expected first $nn$ line position is shown in Fig. 21. In view of the width due to a distribution of oxygen content, it is evident that it will be difficult to resolve any extra resonance even for lower Zn contents. Going to lower temperatures is ruled out as well due to the relatively high $T_c$ of the samples. We did not succeed either in distinguishing the $nn$ resonance from a contrast of relaxation rate with the mainline. We shall see here that such a contrast is not expected if we estimate the relaxation rate for the outermost resonance by scaling the YBCO$_{6.64}$:Zn data at 100 K. The Curie term in YBCO$_{7}$:Zn is about one-sixth that in YBCO$_{6.64}$:Zn and the density of states at the Fermi level $\rho (\epsilon _{F})$ can be estimated to be about 3 times higher, from the $^{89}$Y NMR shift data. Assuming that the local moment to band coupling $J_{ex}$ stays unchanged, Eqs. (9) and (10) allow us to deduce a contribution of the local moment fluctuations to 1/$T_{1}$ of $\approx$ 0.0018 sec$^{-1}$, which is much smaller than the observed rate in undoped YBCO$_{7}$ (0.03 sec$^{-1}$). This confirms that for YBCO$_{7}$:Zn, local moment fluctuations are indeed difficult to detect on the $nn$ Y site. In fact, the occurrence of a local moment in YBCO$_7$:Zn was evidenced first [@alloul2] from the presence of the oscillating long distance RKKY spin polarisation of the host Cu spins. This was established from the Curie-like increase of the $^{89}$Y NMR linewidth observed in YBCO$_7$:Zn, and is confirmed in the present experiments as well as from $^{17}$O NMR linewidth [@yoshinari] data. About AF correlations near the Zn impurities -------------------------------------------- The experimental results on Gd ESR in YBCO and Al NMR in Al doped LASCO have been interpreted along quite different lines. For Janossy [*et al.*]{}, the absence of a Curie term in the ESR line shift led to the conclusion that there was no local moment. Instead, they considered that the susceptibility of the YBCO system is restored near the Zn, as they found an increase of the ESR shift with decreasing $T$. On purely experimental grounds it is not clear whether the detected ESR signal involves all the Gd spins. We have seen above that the outermost $nn$ resonances are not expected to be resolved in the ESR data. However, the inner resonance might contribute to a wing in the signal, and might explain the observed shift.
In any case, the present detailed $^{89}$Y NMR data demonstrate that the central line is not shifted at all, so that the susceptibility is unmodified at a few lattice distances from the Zn impurity. Second, the Curie-like increase of the shift for the $nn$ reaches values well above those observed for pure YBCO$_7$, which implies that the hole content near the Zn dopants has not been restored to that of undoped YBCO$_{7}$. Finally, the bulk susceptibility data (measured using a commercial SQUID magnetometer) of Mendels [*et al.*]{} [@mendels1] indicate that the local moment susceptibility increases down to 10 K, so that there is no doubt about the occurrence of a Curie contribution. The fact that the $nn$ lines broaden strongly with decreasing temperature is sufficient to explain that the Gd ESR picks up only a part of the Gd signal, which saturates at low-$T$. This is somewhat reminiscent of the situation which prevailed in the preliminary $^{89}$Y NMR measurements done for large Zn concentrations in YBCO [@alloul2]. In that case, the $nn$ resonances were not resolved and an apparent shift of the $^{89}$Y NMR signal was observed. As the broadening of the $nn$ lines is expected to be larger in the Gd ESR, the measured shift involves a contribution of those $nn$ sites and is much smaller than that expected for the 1$^{st}$ $nn$ signal. As for the analysis of the Al NMR data in LASCO, the authors of course do not question the existence of a local moment. But, they still consider that AF correlations are reduced near the impurity, and that the induced moments on the four copper near neighbours are decoupled. Further, they even anticipate that a state nearer to that observed in the overdoped material prevails at distances just greater than the 1$^{st}$ $nn$ distance [@ishida2]. However, we feel that there is no experimental evidence for such a possibility in their work on the LASCO system. The only argument advanced by Ishida [*et al.*]{} to support this hypothesis is the independent experimental evidence found in their group for two $T_{1}$ components in their $^{63}$Cu NQR measurements in Zn doped YBCO$_{7}$, and in YBa$_{2}$Cu$_{4}$O$_{8}$ [@ishida1; @zheng]. In both cases they find that the long $T_{1}$ component is longer than that observed in the pure system, and they therefore associate it with Cu nuclei near the Zn impurities. This leads them to conclude that the magnetic fluctuations near the Zn impurities have been suppressed. However, the relative magnitude of the two components in terms of number of sites and their dependence on Zn doping, which could support this interpretation, have not been studied in detail. Most importantly, the underlying idea seems to us to contain an essential contradiction. Indeed, if the AF fluctuations around the Zn were suppressed, this would imply that Zn is in a classical metallic environment, which would be totally inconsistent with the occurrence of a local moment [@comment]. In the superconducting state, Ishida [*et al.*]{} do find $^{63}$Cu NQR relaxation rates much larger than those found in the pure system, which indicates the existence of states in the superconducting gap. This is also seen from Yb Mössbauer experiments on samples in which a small fraction of Y has been substituted by Yb [@hodges]. States in the gap in the superconducting state induced by Zn impurities were also seen by neutron scattering experiments [@sidis].
Those states are found at a scattering vector of ($\pi$, $\pi$), even for YBCO$_7$:Zn, while a scattering at ($\pi$, $\pi$) for this oxygen content can hardly be detected in the pure system. These experiments are thus direct evidence in favor of the persistence of AF correlations in the vicinity of the impurities. All these observations support the main point which we have been advocating, that the AF correlations are at least maintained, and perhaps even strengthened near the Zn impurities. In such a case the local moment cannot be considered as formed of four independently fluctuating moments on the four Cu sites $nn$ to Zn, but rather as an extended state involving further neighbours, and in which the Cu $nn$ to Zn are ferromagnetically correlated and fluctuate as a single entity. We therefore think that the experimental observation done by Ishida [*et al.*]{} on the normal state $^{63}$Cu NQR $T_1$ might have a quite distinct interpretation. A more systematic study, possibly with different impurities, might be needed to clarify the origin of the longer $T_1$ component. In conclusion, it seems to us that the existing experiments do not contradict the main point of view originally proposed, i.e. that the local moments induced by Zn are associated with the [*correlated*]{} nature of the CuO$_2$ planes and that AF correlations might even actually be enhanced around Zn. Induced spin polarisation at large distance from the Zn ------------------------------------------------------- Up to now we have mainly considered the magnetic moments induced near the Zn impurities. In noble metal hosts, any local charge perturbation is known to induce long distance charge density oscillations (also called Friedel oscillations). Similarly, a local moment induces a long distance oscillatory spin polarisation (RKKY) whose amplitude scales with the coupling $J_{ex}$ of the local moment to the conduction electrons. This oscillatory spin polarisation gives a contribution to the NMR shift of the nuclei which decreases with increasing distance from the impurity. In very dilute samples, if the experimental sensitivity is sufficient, the resonances of the different shells of neighbours to the impurity can be resolved [@alloul1]. These resonances merge together if the impurity concentration is too large, which then results in a net broadening of the host nuclear resonance. Here, the occurrence of the local moment, even though induced by a non-magnetic substitution, is also a local magnetic perturbation in the correlated host. One therefore expects a response which will extend to long distances from the impurity. Such contributions to the NMR linewidths have been found in our work. We shall consider here in turn the case of YBCO$_{6.6}$ and that of YBCO$_7$. ### YBCO$_{6.6}$ Indeed, both the central $^{89}$Y line as well as the near neighbour resonances have been found to be broadened in YBCO$_{6.6}$. As seen in figures 7 and 9, these linewidths increase at low-$T$ and also increase with increasing impurity content. The central line broadening is unfortunately only a small fraction of the pure compound linewidth, and the temperature dependence of the impurity induced contribution cannot be extracted accurately. Experiments have therefore been performed by Bobroff [*et al.*]{} [@bobroff] on $^{17}$O nuclei in substituted samples.
The larger hyperfine coupling of the $^{17}$O nuclei with the planar Cu as compared to that of $^{89}$Y leads to correspondingly larger broadenings of the $^{17}$O NMR linewidth, which have been studied in great detail both for Ni and Zn substitutions. It has been found that in both cases the broadening increases much faster than $1/T$ at low temperatures, contrary to what one might expect in a non-correlated metallic host. This fast increase is a signature of the anomalous magnetic response of the host which displays a peak near the AF wavevector ($\pi$, $\pi$). In the present experiments, the broadenings of the 1$^{st}$ $nn$ line (Fig. 7) are somewhat related to this long distance polarisation induced by the Zn impurity. The large increase of the $nn$ linewidth with increasing Zn concentration is due to the distribution of susceptibility of the moments associated with their mutual interaction. In a molecular field approach, the Curie contribution $K_c$ to the shift of a $^{89}$Y $nn$ of a given Zn atom is proportional to $\chi (H_0 + H_m)$, where $\chi$ is the single impurity dimensionless susceptibility (= $c_{imp}/T$) and $H_m$ the molecular field at the moment site induced by other Zn moments. This molecular field scales with the magnetization of the local moments ($H_m$ = $kM$) and therefore varies as $1/T$. The linewidth is then related to the root mean square value of the molecular field $\delta H_m$. Consequently, the linewidth due to the interaction between the local moments (which scales with $\chi \delta H_m$) should scale as $1/T^2$. We have therefore plotted in Fig. 22 the quantity $T^2 \Delta H_{corr}/H$ versus $T$, where $\Delta H_{corr} = \Delta H_{nn} - \Delta H_{pure}$ is the increase of the $nn$ satellite linewidth (Full Width at Half Maximum) with respect to that of $^{89}$Y in pure YBCO$_{6.6}$. We can see that $T^2 \Delta H_{corr}/H$ is nearly $T$-independent, as expected from such a simple model. Let us point out that $H_m$ should in principle behave as the long distance spin polarisation detected by $^{17}$O NMR, and should then increase faster than $1/T$ at low temperature. Although the experimental accuracy on the NMR width is not great, a large increase of $T^2 \Delta H_{corr}/H$ is not observed at low $T$. More detailed and possibly more accurate experiments are required to better understand whether other contributions to the $nn$ linewidth have to be considered as well. From our data we can however get an overestimate of $\delta H_{m}$ from a comparison of the magnitude of the linewidth with the actual shift of the $nn$ line. Assuming a gaussian shape for this resonance, $\Delta H_{corr}/2.36$ is simply proportional to $\chi \delta H_{m}$, while the shift $K_{c}H_{0}$ is proportional to $\chi H_{0}$. Therefore, $\delta H_{m}$ = 0.42 $\Delta H_{corr}/K_{c}$. Further, from the analysis of Eq. 3, $K_{c}\simeq 0.024/T$. From the discussion above, $\Delta H_{corr}$ = $2.36kc_{imp}^2H_0/T^2$ and from Fig. 22 for 1 % Zn, $\Delta H_{corr}\simeq 2H_{0}/T^{2}$. Then, for an applied field $H_{0}$ = 7 Tesla, we deduce $$\delta H_{m}=250/T \quad {\rm (Tesla/\%Zn)}\ .$$ The molecular field becomes comparable with the thermal energy for $k_{B}T=\mu_{eff}H_{m}$, which for a measured $\mu_{eff}\simeq 0.8\mu_{B}$ in YBCO$_{6.6}$ corresponds to about 1.2 Tesla/K. Therefore, the temperature at which a spin-glass freezing of this system should occur can be estimated to be about 15 K for 1 % Zn.
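As a quick numerical check of this rough estimate, the short script below recomputes the coefficient of the $1/T$ law and the freezing temperature from the values quoted above (an illustrative sketch only; the 1.2 Tesla/K conversion quoted for $\mu_{eff}\simeq 0.8\mu_B$ is taken from the text rather than re-derived):

```python
# Illustrative check of the molecular-field estimate for YBCO_{6.6}:Zn (1% Zn),
# using only the numbers quoted in the text.
K_c_coeff = 0.024       # K_c ~ 0.024/T (Eq. 3)
dH_corr_over_H0 = 2.0   # Delta H_corr ~ 2 H_0/T^2 for 1% Zn (Fig. 22)
H_0 = 7.0               # applied field (Tesla)

# delta H_m = 0.42 * Delta H_corr / K_c  ->  coefficient of the 1/T law (Tesla K per %Zn)
dHm_coeff = 0.42 * dH_corr_over_H0 * H_0 / K_c_coeff
print(f"delta H_m ~ {dHm_coeff:.0f}/T Tesla/%Zn")   # ~245/T, i.e. the quoted ~250/T

# Freezing condition k_B T ~ mu_eff * delta H_m, with the quoted 1.2 Tesla/K conversion
conversion = 1.2                                    # Tesla per Kelvin (value quoted in the text)
T_freeze = (dHm_coeff / conversion) ** 0.5
print(f"T_freeze ~ {T_freeze:.0f} K")               # ~14 K, consistent with the quoted ~15 K
```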
This number, deduced from this rough analysis, is somewhat higher than that obtained from the Weiss temperature measured by static susceptibility by Mendels [*et al.*]{} [@mendels1], which does not exceed 4 K for 4 % Zn. Apart from the above-mentioned possible experimental limitations, this difference could be linked with the fact that we are dealing here with a 2D Heisenberg spin system, for which quantum fluctuations reduce the spin-glass ordering temperature to $T_{g}=0$ [@dekker]. A finite value for $T_{g}$ would then only result from weak interplane exchange couplings. ### YBCO$_7$ The broadening has been found to increase as 1/$T$ at low-$T$, for this slightly overdoped composition for which the planar susceptibility of the pure system has little $T$-dependence. This increased linewidth at low-$T$ is a direct proof of the existence of a local moment behaviour induced by Zn for this overdoped system [@alloul1]. We have seen that the absence of near neighbour resonance lines is also an indication that the effective moment is very small, which confirms the susceptibility data of Mendels [*et al.*]{}. A similar observation has been made by Bobroff [*et al.*]{} [@bobroff] through $^{17}$O NMR data. At first sight, such a broadening could be explained by the RKKY-like broadening induced by the local moments. However, in the underdoped case it has been shown that the response of the correlated electronic system is quite distinct from that of a free electron gas, as the $^{17}$O NMR linewidth exhibits an anomalous $T$ dependence. The NMR data for the YBCO$_7$ composition, both for $^{17}$O and $^{89}$Y, do not display such an anomalous $T$ dependence, and one might wonder whether a conventional RKKY broadening is then recovered. Such an approach has been used in our initial report [@mahajan]. However, the very fact that a $T$-dependent magnetic behaviour is induced by Zn substitution is an indication that [*the correlated nature of the electronic state has not disappeared in*]{} YBCO$_7$. This is also established by the well known anomalous non-Korringa $T$-dependence of $^{63}$Cu [@takigawa]. Therefore, a direct test of the shape of the spatial dependence of the impurity induced spin polarisation should give us information on the importance of these correlations. This aim will be pursued in the future with careful studies of the NMR lineshapes, which are expected to be more sensitive to the detailed shape of the spin polarisation [@bobroff]. Such experimental studies will be undertaken on $^{17}$O NMR, which possesses a larger signal-to-noise ratio. A comparative discussion of the induced spin polarisation as sensed by the $^{89}$Y and the $^{17}$O nuclei will therefore be performed in the future. Conclusions =========== A large variety of conclusions have been drawn and various questions have been raised from the present results. They address different points extending from the materials properties to detailed questions on the electronic structure of the impurities and their influence on superconductivity. First, concerning the [*physical chemistry*]{} of the cuprates, the intensity of the near neighbour resonances allowed us to calibrate the amount of Zn substituted on the planar Cu sites. Our result is the strongest experimental proof that the Zn substitutes dominantly on this planar site, up to 3 % Zn, and within 10 % experimental accuracy. We have confirmed the influence of Zn impurities on the [*phase diagram*]{} of the cuprates in the underdoped regime.
The implication that the static and dynamic susceptibility far from the impurity is unaffected by Zn is borne out by our shift and relaxation data. This demonstrates that the related [**q**]{} = 0 pseudo-gap is not modified. The change of the macroscopic susceptibility is only associated with modifications of magnetic properties in the vicinity of the impurity. Kakurai [*et al.*]{} [@kakurai] initially suggested, on the basis of their neutron scattering experiments, that the pseudo-gap vanishes at [**q**]{} =$(\pi ,\pi )$ while the gap at other [**q**]{} values is unchanged. However, the neutron data of Sidis [*et al.*]{} [@sidis] in fact suggest that the pseudo-gap at [**q**]{} =$(\pi ,\pi )$ does not vanish but that some states appear in the pseudo-gap. Those could also be associated with the local magnetic modifications induced around the Zn. In a scenario in which the pseudo-gaps would be associated with the formation of local pairs at high-$T$, these results indicate that impurities do not prevent the formation of local pairs except possibly in their vicinity. What are the actual magnetic properties [*in the vicinity of the Zn impurity*]{}? Although our early experiments had given strong proofs of the occurrence of a local moment behaviour induced by non-magnetic Zn impurities, the validity of this observation has been periodically put into question. The significance of the $nn$ $^{89}$Y NMR results has been, for instance, questioned because of the absence of detectable $nn$ resonances of Zn in the ESR experiments on Gd/Y substituted underdoped samples. We have clearly shown here that the large expected relaxation rate induces a broadening of the Gd ESR $nn$ lines which prohibits their detection. The authors have also concluded from those Gd ESR experiments that the full density of states corresponding to pure YBCO$_{7}$ is restored near the Zn impurity. The fact that the $^{89}$Y 1$^{st}$ $nn$ resonance is found to display an NMR shift much larger than that of the optimally doped compound at low-$T$ is clear evidence against this idea. On the contrary, the susceptibility of the Cu $nn$ to Zn is found to present a Curie-like $T$-dependence, hence the “local moment” denomination, which we have been using throughout. This local moment behaviour is confirmed in YBCO$_{6.6}$ by macroscopic susceptibility SQUID data [@mendels1; @mendels]. It is clear that the observed local moment behaviour is original inasmuch as it is the [*magnetic response of the correlated electron system to the presence of a spinless site.*]{} The perturbation induced by Zn extends at least to the four $nn$ copper sites, but we have shown that, in underdoped YBCO$_{6.6}$, our data are compatible with a local dynamic AF state which extends over more Cu sites. Although the present NMR data are not sufficient to allow us to determine the actual extension of this state, the width of the neutron scattering peak at ($\pi$, $\pi$), which is found to develop at low-$T$ within the pseudo-gap in the presence of Zn [@sidis], corresponds to a real-space extension of at least 7 Å. Various theoretical arguments in favor of the [*occurrence of a local moment in the presence of a spinless site in a correlated electronic system*]{} have been advanced [@fink; @nagaosa; @poilblanc; @khaliullin].
As a complete understanding of the magnetic properties of pure cuprates is far from being achieved, it is no surprise that the present theoretical descriptions of the impurity induced magnetism are rather crude, and for example, do not address its microscopic extent. Our results might, however, be put in parallel with recent theoretical work on undoped quantum spin systems. For instance, Martins [@dagotto] predicts static local moments induced by doping $S$ = 1/2 Heisenberg AF chains or ladders with non-magnetic impurities. NMR experiments on the $S$ = 1/2 Heisenberg chain system Sr$_{2}$CuO$_{3}$ are consistent with the prediction of an induced local moment with a large spatial extent along the chain [@takigawa2]. In this undoped insulating quantum liquid, the response is then purely magnetic. Since the parent compound to YBCO superconductors is a 2D Heisenberg AF and dynamic AF correlations appear to persist even in the metallic compositions, the appearance of local moments on many Cu sites near the doped Zn might well be anticipated. In the slightly overdoped YBCO$_{7}$, the local moment could initially only be detected through the induced long distance spin polarisation [@alloul2]. A local moment induced by non-magnetic Al substituted on Cu is also detected in optimally doped LASCO from $^{27}$Al NMR. The fact that we could not resolve the $nn$ signal in YBCO$_{7}$ is consistent with the weak magnitude found for the Curie-like contribution to the local susceptibility. The [*decreasing magnitude of the moment with increasing hole doping*]{} could be carefully monitored by direct SQUID measurements [@mendels]. This decrease could be linked experimentally with a [*decreasing screening radius*]{} by the conduction band. However, the magnetic states which are detected within the spin-gap at low-$T$ by neutron scattering [@sidis] exhibit a short magnetic correlation length, so that the spatial extent of the local moment also decreases with increasing hole doping, as does the AF correlation length in the pure system. Altogether, our experiments cannot, at present, distinguish the [*respective roles of the screening radius and the AF correlation length in defining the local moment magnitude and spatial extent*]{}. Another important question which arises then concerns the [*coupling of the defect local moment to the host*]{}. For magnetic impurities in simple metals, an exchange coupling $J_{ex}$ between the local moment and the conduction electron spins usually occurs, and determines some of the thermodynamic properties of the local moment. For instance, the [*fluctuation rate of the local moment*]{} ($1/\tau$) is directly determined by $J_{ex}$, and follows a Korringa relation in classical metals. This fluctuation time can be estimated from nuclear spin lattice relaxation data. In the YBCO system we could only obtain such measurements in the underdoped regime on the $^{89}$Y $nn$ of Zn. From these we could show that only weak contributions to $1/T_{1}$ are expected on the $^{89}$Y $nn$ in the optimally doped case, and that these could not be sensed within experimental accuracy. Direct measurements of the $^{27}$Al $T_{1}$ are on the contrary sensitive enough in the optimally doped case in LASCO, as seen by Ishida [*et al.*]{}. Their results, although they establish the occurrence of a local moment induced by the spin-less Al$^{3+}$ substituent, differ markedly from those obtained in YBCO.
A large temperature-independent contribution to the shift and local moment fluctuation time is detected, contrary to our observations. The origin of the difference might be (i) linked with the larger valence of Al$^{3+}$, i.e. the charge difference with respect to the host planar Cu$^{2+}$, (ii) a peculiarity of the LASCO system, as indeed the physical properties of this system do not appear to fit in a universal picture with the other cuprates (see Ref. [@bobroff1]), or (iii) merely a difference between underdoped and optimally doped systems, as the experiments could not be performed on the two systems under similar conditions. Further experiments should permit a decision between these possibilities. Currently, experiments do not give a clear indication of the applicability of an exchange model. In conventional metallic systems, the local moment couples through $J_{ex}$ to the electron bath and an oscillatory RKKY polarization occurs in the band. Therefore $J_{ex}$ can usually be estimated from the broadening of the host NMR [@alloul3; @walkerwalstedt]. Applying the standard RKKY theory yields values of the exchange coupling which are very large [@mahajan]. But, we have recently shown in Orsay [@bobroff] that, at least in the underdoped regime, the behaviour of the $^{17}$O linewidth does not follow the expected RKKY $T$ dependence at all, i.e. the NMR width does not scale solely with the impurity magnetization. Let us note here that whatever the method used [@mahajan], [@ishida2], the estimates of the coupling constant are presently such that if one applies a simple exchange model, one would expect a [*large Kondo temperature*]{} $T_{K}$ and, correspondingly, a spin susceptibility which would deviate from the Curie dependence at $T \sim T_{K}$ and saturate below. From SQUID data, Mendels [*et al.*]{} concluded that this is not the case, and that $T_{K}$ does not exceed a few K in the underdoped YBCO compounds. Such a Kondo-like effect was a candidate mechanism for the reduction of the magnitude of the local moment in YBCO$_{7}$:Zn (see for instance Ref. [@nagaosalee]). But obviously the Kondo model needs to be revised in the context of a strongly correlated electron system. Such difficulties had already been pointed out by Hirschfeld [@hirschfeld] in view of our preliminary experimental results. In conclusion, we have detailed here the experimental evidence for the occurrence of a local moment behaviour induced by spinless substitutions on the Cu site in CuO$_{2}$ planes of cuprates. The existence of original magnetic behaviour induced by non-magnetic substitutions can be anticipated from current theoretical treatments of [*undoped*]{} low-dimensional spin systems. However, the detailed experimental observations reported here on [*doped*]{} cuprates do not have a thorough interpretation from the theoretical standpoint. We suggest that further experimental and theoretical efforts regarding these properties are essential to lead us towards a comprehensive description of the magnetic and superconducting properties of the cuprates. We would like to thank P. Mendels, J. Bobroff, and A. Macfarlane for useful discussions and comments about the manuscript. Laboratoire de Physique des Solides is a “Unité Mixte de Recherches du Centre National de la Recherche Scientifique et de l’Université Paris-Sud”. Permanent address: Indian Institute of Technology, Powai, Bombay 400 076, India. H. Alloul, T. Ohno, and P. Mendels, Phys. Rev. Lett. [**63**]{}, 1700 (1989). W. W. 
Warren [*et al.*]{} , Phys. Rev. Letters 62, 1193 (1989); M. Horvatic [*et al.*]{} , Phys. Rev. B [**39**]{}, 7322 (1989). C. Berthier [*et al.*]{}, Appl. Magn. Reson. [**3**]{}, 449 (1992). J.  Rossat-Mignod [*et al.*]{}, Physica C [**185-189**]{}, 86 (1991). R. Birgeneau in Physical Properties of High $T_{c}$ Superconductors vol. I, ed. D. M. Ginsberg (World Scientific, Singapore, 1989). A. M. Finkelstein, V. E. Kataev, E. F.  Kukovitskii, and G. B. Teitelbaum, Physica (Amsterdam) [** 168C**]{}, 370 (1990). H. Alloul, P. Mendels, H. Casalta, J. F. Marucco, and J. Arabski, Phys. Rev. Lett.  [**67**]{}, 3140 (1991). A. V. Mahajan, H. Alloul, G. Collin, J. F. Marucco, Phys. Rev. Lett. [** 72**]{}, 3100 (1994). P. Mendels, J. Bobroff, G. Collin, H. Alloul, M. Gabay, J. F. Marucco,N. Blanchard, and B. Grenier, submitted to Europhysics Letters. R. Dupree, A. Gencten, and D. McK. Paul, Physica C [**193**]{}, 193 (1992) and references therein. R. E. Walstedt, R. F. Bell, L. F. Schneemeyer, J. V. Waszczak, W. W. Warren, R. Dupree, and A. Gencten, Phys. Rev. B [**48**]{}, 10646 (1993). G. V. M. Williams, J. L. Tallon, R. Meinhold, A. Janossy, Phys. Rev. B [**51**]{}, 16503 (1995). K. Ishida, Y. Kitaoka, K. Yamazoe, K. Asayama, and Y. Yamada, Phys. Rev. Lett. [**76**]{}, 531 (1996). A. Janossy, J. R. Cooper, L. C. Brunel, and A. Carrington, Phys. Rev. B [**50**]{}, 3442 (1994). J. R. Cooper, Supercond. Sci. Technol. [**4**]{}, S181 (1991). P. Mendels, H. Alloul, J. H. Brewer, G. D. Morris, T. L. Duty, S. Johnston, E. J. Ansaldo, G. Collin, J. F. Marucco, C. Niedermayer, D. R. Noakes, and C E. Stronach, Phys. Rev. B [**49**]{}, 10035 (1994). S. Shamoto, T. Kiyukora, H. Harashina, and M. Sato, J. Phys. Soc. Japan [**63**]{}, 2324 (1994). K. Kakurai, S. Shamoto, T. Kiyokura, M. Sato, J. M. Tranquada, and G. Shirane, Phys. Rev. B [**48**]{}, 3485 (1993). P. Bourges, Y. Sidis, B. Hennion, R. Villeneuve, J. F. Marucco, and G. Collin, Czechoslovak Journal of Physics [**46**]{}, 1155 (1996); P.  Bourges, Y. Sidis, L. P. Regnault, B. Hennion, R. Villeneuve, G. Collin, C. Vettier, J. Y. Henri, and J. F. Marucco, J. Phys. Chem. Solids [**56**]{}, 1937 (1995); Y.  Sidis [*et al.*]{}, International Journal of Modern Physics B, to be published. T. R. Chien, Z. Z. Wang, and N. P. Ong, Phys. Rev. Lett. [**67**]{}, 2088 (1991). K. Mizuhashi, K. Takenaka, Y. Fukuzumi, and S. Uchida, Phys. Rev. B [**52**]{}, R3884 (1995). J. W. Loram, K. A. Mirza, and P. F. Freeman, Physica (Amsterdam) [**171C**]{}, 243 (1990). K. Ishida, Y. Kitaoka, N. Ogata, T. Kamino, K. Asayama, J. R. Cooper, and N. Athanassopoulou, J. Phys. Soc. Japan [**62**]{}, 2803 (1993). H. Alloul, T. Ohno, H. Casalta, J. F. Marucco, P. Mendels, J. Arabski, G. Collin, and M. Mehbod, Physica C [**171**]{}, 419 (1990). L. Guerrin, H. Alloul, and G. Collin, Physica C [**251**]{}, 219 (1995). A. Lanckbeen, C. Legros, J. F. Marucco and R. Deltour Physica C [**221**]{}, 53 (1994); G. Collin , Private communication. J. Bobroff, H. Alloul, Y. Yoshinari, P. Mendels, A. Keren, N. Blanchard, G. Collin, and J.  F. Marucco, Phys. Rev. Lett. [**79**]{}, 2117 (1997). G. V. M. Williams, J. L. Tallon, and R. Meinhold, Phys. Rev. B [**52**]{}, R7034 (1995). M. Takigawa, P. C. Hammel, R. H. Heffner, Z. Fisk, K. C. Ott, and J. D. Thompson, Phys. Rev. Lett. [**63**]{}, 1865 (1989). T. Moriya, J. Phys. Soc. Japan , 2324 (1969). S. Zagoulaev, P. Monod, and J. Jegoudez, Phys. Rev. B [**52**]{}, 10474 (1995). H. Alloul, J. Bobroff, and P. Mendels, Phys. Rev. Lett. 
[**78**]{}, 2494 (1997). Y. Yoshinari [**et al.**]{}, Unpublished. G. Zheng, T. Odaguchi, T. Mito, Y. Kitaoka, K. Asayama, and Y. Kodama, J. Phys. Soc. Japan [**62**]{}, 2591 (1993). J. A. Hodges, P. Bonville, P. Imbert, and A. Pinatel-Philippot, Physica C [**323**]{}, 246 (1995). C. Dekker, A. F. M. Arts, and H. W. de Wijn, Phys. Rev. Lett. [**38**]{} 8985 (1988); C. Dekker, A. F. M. Arts, H. W. de Wijn, A. J. van Duyneveldt, and J. A. Mydosh, Phys. Rev. B [**61**]{}, 1780 (1988). M. Takigawa, A. P. Reyes, P. C. Hammel, J. D. Thompson, R. H. Heffner, Z. Fisk, and K. C. Ott, Phys. Rev. B [**43**]{}, 247 (1991). J. B. Boyce and C. P. Slichter, Phys. Rev. Lett.  [**32**]{}, 61 (1974); H. Alloul, F. Hippert, and H. Ishii, J. Phys. F [**4**]{}, 725 (1979) and references therein. N. Nagaosa and T. -K. Ng, Phys. Rev. B [**51**]{}, 15588 (1995). D. Poilblanc, D. J. Scalapino, and Hanke, Phys. Rev. Lett. [**72**]{}, 884 (1994); see also Phys. Rev. B [**50**]{},13020 (1994). G. Khaliullin, R. Killian, S. Krivenko, and P. Fulde, Physica C [**282-287**]{}, 1749 (1997). G. B. Martins, M. Laukamp, J.  Riera, and E. Dagotto, Phys. Rev. Lett. [**78**]{}, 3563 (1997). M. Takigawa, N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. B [**55**]{}, 14129 (1997). J. Bobroff, H. Alloul, P. Mendels, V. Viallet, J. F. Marucco, and D. Colson, Phys. Rev. Letters, [**78**]{}, 3757 (1997). R. E. Walstedt and L. R. Walker, Phys. Rev. B [**9**]{}, 4857 (1974). A. Nagaosa and P. Lee, Phys. Rev. Letters, [**79**]{}, 3755 (1997). L. S. Borkowski and P. J. Hirschfeld, Phys. Rev.B [**49**]{}, 15404 (1994). [**Figure Captions**]{} FIG. 1 $^{89}$Y NMR lineshape at 130 K in YBCO$_{6.64}$:Zn$_{1\%}$ when the duty cycle of the pulse sequence, $t_{rep}$, is 20 sec, with the sample $c$ axis aligned parallel or perpendicular to the applied field. The relative intensity of the $nn$ lines is enhanced here, since they have a $T_1$ comparable to $t_{rep}$ while the mainline $T_1$ is much longer. FIG. 2 $^{89}$Y NMR lineshapes at 100 K in YBCO$_{6.64}$:Zn$_{y\%}$, for the sample $c$ axis aligned parallel to the applied field $H$. The relative intensities of the outer and middle lines are seen to qualitatively increase with increasing $y$. FIG. 3 (a) $^{89}$Y NMR lineshape at 90 K in YBCO$_{7}$:Zn$_{1\%}$ showing absence of resolved $nn$ lines indicating that the induced local moment magnitude is weak in YBCO$_7$:Zn as compared to that in YBCO$_{6.64}$:Zn. Also shown is the decomposition of the lineshape for YBCO$_{6.64}$:Zn$_{1\%}$ into three gaussians. FIG. 3 (b) $^{89}$Y NMR lineshape at 80 K in YBCO$_{7}$:Zn$_{2\%}$ for different repetition rates. The arrow indicates the position of the middle line in YBCO$_{6.64}$:Zn. The lineshape stays nearly unchanged indicating the absence of any components relaxing faster than the mainline. FIG. 4 Fully relaxed $^{89}$Y spectra in YBCO$_{6.64}$:Zn$_{y\%}$. The solid lines are fits to three gaussians as explained in the text. (a)1% Zn (b)2% Zn (c)4% Zn. FIG. 5 Variation of the fractional $nn$ line intensity (integrated) as a function of Zn content $y$ for YBCO$_{6.64}$:Zn$_{y\%}$. The solid lines correspond to variations as expected from statistical models as explained in text. The intensity of the outermost line is seen to be in near agreement with that expected from a $^{89}$Y nuclei nearest to the doped Zn. The middle line intensity might agree with that expected from a combination of 2$% ^{nd}$ and 3$^{rd}$ $nn$ $^{89}$Y nuclei. FIG. 
6 $^{89}$Y $nn$ line shifts $K$ versus temperature $T$ for YBCO$_{6.64}$:Zn$_{y\%}$. The outer line shift is seen to have a strong upturn with decreasing $T$ indicating a coupling to a Curie-like susceptibility. The solid lines are drawn as guides to the eye. FIG. 7 Linewidths normalized to the applied magnetic field $\Delta H/H$ of the $nn$ lines for YBCO$_{6.64}$:Zn$_{y\%}$ are seen to increase with decreasing $T$. The solid lines are drawn as guides to the eye. FIG. 8 $^{89}$Y mainline shift $K$ versus temperature $T$ for YBCO$_{6.64}$:Zn$_{y\%}$ is nearly unchanged from that of YBCO$_{6.64}$ indicating little change in the hole content with Zn doping. FIG. 9 Normalized linewidth $\Delta H/H$ of the mainline versus the temperature $T$ for YBCO$_{6.64}$:Zn$_{y\%}$. Unlike YBCO$_7$ which has a nearly $T$ independent linewidth, YBCO$_{6.64}$ linewidth decreases below 120 K and increases again at much lower temperatures. Qualitatively, the linewidths for Zn doped YBCO$_{6.64}$ are higher than the undoped compound. The solid lines are drawn as guides to the eye. FIG. 10 The $T$ variation of $^{89}$Y mainline shift $K$ for YBCO$_{7}$:Zn$_{y\%}$ is unchanged from that in YBCO$_7$ again indicating that the hole content is nearly unchanged with Zn doping. Note that the chemical shift reference is about 150 ppm, hence the change in susceptibility on Zn addition is at most 4 % of the susceptibility. FIG. 11 Normalized linewidth $\Delta H/H$ of the mainline versus the temperature $T$ for YBCO$_{7}$:Zn$_{y\%}$. The linewidth increases in a Curie-like manner with decreasing $T$. Also, the linewidth is larger for larger Zn contents. Solid lines are drawn as guides to the eye. FIG. 12(a) $^{89}$Y mainline shift $K$ versus temperature $T$ for YBCO$_{6+x}$:Zn$_{4\%}$. The $T$-dependence is similar to that of YBCO$_{6+x}$. FIG. 12(b) Variation of lineshape as a function of $x$ for YBCO$_{6+x}$:Zn$_{4\%}$. The low frequency tails which correspond to the shifted satellite lines are seen to appear already for oxygen content $x$ = 0.84. FIG. 13 Spectra for YBCO$_{6.64}$:Zn$_{1\%}$ for various $t_{rep}$ values. It is clear that the outer line recovers its full intensity for much smaller $t_{rep}$ values than the mainline and hence has a much shorter $T_1$ than the mainline. FIG. 14 Analysis of the relaxation rate data of Fig. 13 for YBCO$_{6.64}$:Zn$_{1\%}$. The data have been fitted to a sum of three gaussians and the deduced intensities corresponding to the main and the $nn$ lines are plotted versus the repetition time of the pulse sequence $t_{rep}$ on a semi-log scale. The solid lines are fits to a single exponential recovery. FIG. 15 (a) $^{89}$Y nuclear spin-lattice relaxation rate divided by temperature 1/$T_1T$ versus temperature $T$ for the mainline and the middle satellite in YBCO$_{6.64}$:Zn$_{y\%}$. (b) These data are for the outermost satellite line. The $nn$ lines are seen to have a shorter $T_1$ than the mainline. FIG. 16 Nuclear magnetization corresponding to the mainline versus the repetition time of the pulse sequence $t_{rep}$ for YBCO$_{7}$:Zn$_{y\%}$. The fact that the data can be fit to a single exponential (solid line) indicates the absence of any other components to the relaxation. FIG. 17 $^{89}$Y nuclear spin-lattice relaxation rate divided by temperature 1/$T_1T$ versus temperature $T$ for YBCO$_{7}$:Zn$_{y\%}$. Note the magnified scale of the $y$ axis. As for YBCO$_{6.64}$:Zn, the $T_1$ of the mainline is not affected. FIG. 
18(a) Curie component (in addition to a constant) of the $^{89}$Y shift $K$ versus temperature $T$ for the outermost line in YBCO$_{6.64}$:Zn$_{1\%}$. The solid line is a fit (see Eq. (2)) assuming that the local moment is on the oxygen atoms. (b) The solid line is a fit (see Eq. (3)) assuming moments on the copper atoms. FIG. 19 A schematic of the location and the orientation of the local magnetization around a Zn impurity. FIG. 20 The ratio of the relaxation rate of the local moment spin $1/\tau$ to temperature $T$ for the 1$^{st}$ $nn$ $^{89}$Y nuclei in YBCO$_{6.64}$:Zn. $\tau$ has been determined from Eq. (10). FIG. 21 $^{89}$Y NMR spectrum for YBCO$_{7}$:Zn$_{y\%}$. Expected position of outermost line, based on the macroscopic susceptibility data (see text) is indicated by an arrow. The actual linewidth even in the pure compound is such that any feature at this position cannot be resolved. FIG. 22 Variation of the product of the square of temperature $T^2$ and the $nn$ linewidth $\Delta H_{corr}$ (linewidth of the pure compound has been subtracted from the measured linewidth) with the temperature $T$ for YBCO$_{6.64}$:Zn.

  Parameter             $^{89}$Y NMR         Gd$^{3+}$ ESR
  ------------------- -------------------- ----------------------
  $\delta \nu$ (Hz)     4 $\times$ 10$^3$    6.4 $\times$ 10$^8$
  $1/T_1$ (Hz)          0.1                  2.56 $\times$ 10$^9$
  $\Delta \nu$ (Hz)     2 $\times$ 10$^3$    2.56 $\times$ 10$^9$

  : $^{89}$Y NMR measured values (at about 100 K) of the separation of the 1$^{st}$ $nn$ line from the mainline $\protect\delta \protect\nu$, the spin-lattice relaxation rate of the 1$^{st}$ $nn$ $1/T_1$, and the linewidth of the 1$^{st}$ $nn$ $\Delta \protect\nu$ are listed along with calculated values (see text) of the same for Gd ESR in YBCO$_{6.64}$:Zn.
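The Gd$^{3+}$ ESR entries of Table 1 follow from the measured $^{89}$Y values through the scaling relations quoted earlier (shift $\propto \nu H_{hf}$, relaxation rate $\propto (\gamma H_{hf})^2$, ESR width dominated by the $T_1$ process). The short script below is only an illustrative arithmetic check of that scaling, using the numbers given in the text:

```python
# Scaling of the measured 89Y NMR nn values (~100 K) to the expected Gd ESR nn values (Table 1).
nu_Y, nu_Gd = 15.64e6, 245e9        # operating frequencies (Hz)
Hhf_ratio = 10.0                    # Gd hyperfine coupling / Y hyperfine coupling
gamma_ratio = 1.6e4                 # gamma_e / gamma(89Y)

dnu_Y, invT1_Y = 4e3, 0.1           # measured 89Y values (Hz): satellite separation and 1/T1

dnu_Gd = dnu_Y * (nu_Gd / nu_Y) * Hhf_ratio          # shift scales as nu*H_hf  -> ~6.3e8 Hz
invT1_Gd = invT1_Y * (gamma_ratio * Hhf_ratio) ** 2  # 1/T1 scales as (gamma*H_hf)^2 -> 2.56e9 Hz
width_Gd = invT1_Gd                                  # ESR linewidth dominated by the T1 process

print(f"Gd ESR nn: shift ~ {dnu_Gd:.2e} Hz, width ~ {width_Gd:.2e} Hz")
print(f"width/shift ~ {width_Gd / dnu_Gd:.1f}")      # ~4: the nn ESR satellites are washed out
```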
--- abstract: 'The supersymmetric SU($N_C$) Yang-Mills theory coupled to $N_F$ matter fields in the fundamental representation has meta-stable vacua with broken supersymmetry when $N_C < N_F < {3\over 2} N_C$. By gauging the flavor symmetry, this model can be coupled directly to the standard model. We show that it is possible to make a slight deformation to the model so that gaugino masses are generated and the Landau pole problem can be avoided. The deformed model has simple realizations on intersecting branes in string theory, where various features of the meta-stable vacua are encoded geometrically as brane configurations.' --- SLAC-PUB-12252\ CALT-68-2621\ hep-ph/0612139\ [**Direct Mediation of Meta-Stable Supersymmetry Breaking**]{}\ \ $^1$[*Stanford Linear Accelerator Center, Stanford University, Stanford, CA 94309*]{}\ $^2$[*Physics Department, Stanford University, Stanford, CA 94305*]{}\ $^3$[*California Institute of Technology, Pasadena, CA 91125*]{} Introduction ============ Although there is no clear evidence yet, it is plausible that softly broken ${\cal N}=1$ supersymmetry is realized in nature. Not only is it a symmetry possessed by string theory, but supersymmetric models also have many phenomenologically attractive features, such as the cancellation of quadratic divergences and the unification of the gauge coupling constants [@Dimopoulos:1981zb; @Dimopoulos:1981yj; @Sakai:1981gr]. The question is then how supersymmetry is broken and how we feel it. There have been many studies on this subject, but, as is often the case, one of the earliest proposals [@Dine:1981za; @Dimopoulos:1981au] among them seems to be the most elegant and simple idea. The idea is that there is a QCD-like strong interaction which breaks supersymmetry dynamically, and the standard model gauge group is identified with a subgroup of the flavor symmetry in this sector. The standard model gauge sector can, therefore, feel the supersymmetry breaking directly via one-loop diagrams. This idea was discarded for a long time because of the difficulty of realistic model building. First, Witten has shown that there is a supersymmetric vacuum in supersymmetric QCD by using an index argument [@Witten:1982df]. Therefore, we are forced to think of the possibility of chiral gauge theories for supersymmetry breaking, which is already a bit complicated. (See [@Affleck:1983rr; @Affleck:1983mk; @Affleck:1984xz] for dynamical supersymmetry breaking in chiral gauge theories, and [@Poppitz:1996fw; @Arkani-Hamed:1997jv] for models of direct gauge mediation in that context.) There is also the problem of Landau poles of the standard model gauge interactions. Once we embed the gauge group of the standard model into a flavor group of the dynamical sector (this itself is not a trivial task), there appear many particles which transform under the standard model gauge group. These fields contribute to the beta functions of the gauge coupling constants and drive them to a Landau pole below the unification scale. Finally, even though the gauge sector of the standard model directly couples to the supersymmetry breaking dynamics, it is non-trivial whether we can obtain the gaugino masses. It is often the case that the leading contribution to gaugino masses cancels out. Very recently, there was a breakthrough on this subject. Intriligator, Seiberg and Shih (ISS) have shown that there [*is*]{} a meta-stable supersymmetry breaking vacuum in some supersymmetric QCD theories [@Intriligator:2006dd].
The model is simply an SU($N_C$) gauge theory with massive (but light) $N_F$ quarks. Within the range $N_C < N_F < {3\over 2}N_C$, supersymmetry is broken in the meta-stable vacuum. The possibility of direct gauge mediation in this model is also discussed in Ref. [@Intriligator:2006dd].[^1] Because of the simplicity of the model, it is straightforward to embed the standard model gauge group into the SU($N_F$) flavor symmetry. However, it was concluded that there are still problems regarding the Landau pole and the gaugino masses. In the ISS model, there is an unbroken approximate U(1)$_R$ symmetry which prevents us from obtaining the gaugino masses. The U(1)$_R$ problem is a common feature in models of gauge mediation. As discussed recently in Ref. [@Dine:2006xt], if the low energy effective theory of the dynamical supersymmetry breaking model is of the O’Raifeartaigh type, there is an unbroken $R$-symmetry at the minimum of the potential (the origin of the field space). It has been proposed that the inverted hierarchy mechanism [@Witten:1981kv] can shift the minimum away from the origin by the effect of gauge interactions [@Murayama:1997pb; @Dimopoulos:1997je; @Luty:1997ny; @Agashe:1998wm; @Dine:2006xt]. An alternative possibility, that the shift is induced by an $R$-symmetry breaking term in the supergravity Lagrangian (the constant term in the superpotential), has recently been discussed in Ref. [@Kitano:2006wz]. It is, however, still non-trivial whether we obtain the gaugino masses even with $R$-symmetry breaking vacuum expectation values in direct gauge mediation models. For example, the model in Ref. [@Izawa:1997gs] generates gaugino masses only at the $F^3$ order even though the $R$-symmetry is broken by assuming the presence of a local minimum away from the origin. Since the scalar masses squared are obtained at the $F^2$ order as usual, gaugino masses are much smaller than the scalar masses unless the messenger scale is $O(10~{\rm TeV})$, which is difficult to achieve in models of direct gauge mediation because of the Landau pole problem. In fact, as we will see later, the structure of the messenger particles in the ISS model is the same as that in this model. (The same structure can be found in many models, for example, in Ref. [@Kitano:2006wm] and also in very early proposals of gauge mediation models in Ref. [@Dine:1981gu; @Nappi:1982hm; @Alvarez-Gaume:1981wy].) Therefore, it is not sufficient to destabilize the origin of the field space for generating both gaugino and scalar masses. In this paper we propose a slight deformation of the ISS model with which we can obtain gaugino masses by identifying a flavor subgroup with the standard model gauge group. We add a superpotential term which breaks the $R$-symmetry explicitly so that non-vanishing gaugino masses are induced. The vacuum structure becomes richer in the presence of the new term. In addition to the vacuum that is obtained by a slight perturbation of the ISS meta-stable vacuum, which we will call the ISS vacuum, there appear new (but phenomenologically unacceptable) meta-stable vacua. We find that decays of the ISS vacuum into the other vacua are sufficiently slow, so that the scenario is phenomenologically viable. We also show that the Landau pole problem can be avoided by keeping the dynamical scale of the ISS sector sufficiently high in a way that is compatible with phenomenological requirements.
In addition, if meta-stable vacua exist in a model with the same number of colors and flavors, as suggested by ISS, we can also consider the case where the ISS sector is in the conformal window, ${3 \over 2} N_C \leq N_F < 3 N_C$. In this case, we can take the scales of the ISS sector as low as $O$(100–1000 TeV). The deformed ISS model can be realized on intersecting branes in string theory, where the rich vacuum structure and the meta-stability of the vacua can be understood geometrically. The ISS model ============= We first review the ISS model. The model is simply supersymmetric QCD with light flavors. Perturbative corrections to the scalar potential are calculable in the magnetic dual picture, and they have been found to stabilize a supersymmetry breaking vacuum. The model has an unbroken $R$-symmetry, which prevents it from generating gaugino masses. An explicit one-loop computation of the masses suggests a natural solution to this problem, which we will discuss in the next section. Supersymmetry breaking ---------------------- The model is an SU($N_C$) gauge theory with $N_F$ flavors. The quarks have mass terms: $$\begin{aligned} W = m_i Q_i \bar Q_i \ .\end{aligned}$$ The index $i$ runs over $i = 1, \cdots, N_F$. The masses $m_i$ are assumed to be much smaller than the dynamical scale $\Lambda$. There is a meta-stable supersymmetry breaking vacuum when $N_C < N_F < {3\over 2}N_C$, where there is a weakly coupled description of the theory below the dynamical scale $\Lambda$. The gauge group of the theory is SU($N_F - N_C$) and the degrees of freedom at low energy are meson fields $M_{ij} \sim Q_i \bar Q_j$ and dual quarks $q_i$ and $\bar q_i$. There are superpotential terms: $$\begin{aligned} W = m_i M_{ii} - \frac{1}{\hat \Lambda} q_i M_{ij} \bar q_j\ .\end{aligned}$$ A dimensionful parameter $\hat \Lambda$ is introduced so that the dimensionality of the superpotential is correct. A natural scale of $\hat \Lambda$ is $O(\Lambda)$. With this superpotential, the $F_M = 0$ condition for all components of $M_{ij}$ cannot be satisfied. The rank of the matrix $q_i \bar q_j$ is at most $N_F - N_C$ whereas the mass matrix $m_{i}$ has the maximum rank, $N_F$. The lowest energy vacuum is at $$\begin{aligned} M_{ij} = 0\ ,\ \ \ q_i = \bar q_i = \left( \begin{array}{c} \sqrt{m_I \hat \Lambda} \ \delta_{IJ} \\ 0 \end{array} \right)\ ,\end{aligned}$$ where $I$ and $J$ run from 1 to $N_F - N_C$, and $m_i$ is sorted in descending order. The $F$-components of $M_{ii}$ with $i = N_F-N_C+1, \cdots, N_F$ have the non-vanishing value $m_i$. At this vacuum, the gauge symmetry SU($N_F - N_C$) is completely broken. We parametrize fluctuations around this vacuum as: $$\begin{aligned} \frac{ \delta M_{ij} }{ \hat \Lambda } = h \left( \begin{array}{cc} Y_{IJ} & Z_{I a} \\ \tilde Z_{aI} & \hat \Phi_{ab} \\ \end{array} \right)\ ,\ \ \ \delta q_i = \left( \begin{array}{c} \chi_{IJ} \\ \rho_{Ia} \\ \end{array} \right) \ , \ \ \ \delta \bar q_i = \left( \begin{array}{c} \tilde \chi_{IJ} \\ \tilde \rho_{Ia} \\ \end{array} \right)\ .\end{aligned}$$ We introduce a dimensionless parameter $h$ of $O(1)$ so that the components have canonically normalized kinetic terms. Again, $I,J = 1, \cdots, N_F - N_C$ and $a,b = 1, \cdots, N_C$. Among these fields, $\hat \Phi_{ab}$ and the trace part of $\chi - \tilde \chi$, ${{\rm Tr}}[\chi - \tilde \chi] \equiv {{\rm Tr}}\delta \hat \chi$, remain massless at tree level. The other fields obtain masses of $O(\sqrt{m \Lambda})$.
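To spell out the rank argument above (this is simply a restatement of the reasoning of the preceding paragraph in the conventions used here, with $O(1)$ factors and the Kähler normalization suppressed), the $F$-term of the meson field reads $$\begin{aligned} -F_{M_{ij}}^{*} = \frac{\partial W}{\partial M_{ij}} = m_i \delta_{ij} - \frac{1}{\hat \Lambda}\, q_i \bar q_j\ .\end{aligned}$$ Since ${\rm rank}(q_i \bar q_j) \leq N_F - N_C$ while the mass term has rank $N_F$, at most $N_F - N_C$ of the diagonal entries can be cancelled by the dual quark expectation values. With the descending ordering of the $m_i$, the uncancelled entries are those with $i = N_F - N_C + 1, \cdots, N_F$, which is why the corresponding $F$-components remain non-zero and the vacuum energy is set by the smallest $N_C$ quark masses.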
One-loop corrections to the potential for the pseudo-moduli $\hat \Phi$ and Re$[{{\rm Tr}}\delta \hat \chi]$ have been shown to give positive masses squared, which ensures the stability of the vacuum.[^2] Once we take into account the non-perturbative effects, the true supersymmetric vacuum appears far away from the origin of the meson field $M$. The lifetime of the false vacuum can be arbitrarily long if $m_i \ll \Lambda$. Also, interestingly, the supersymmetry breaking vacuum is preferred in the thermal history of the universe [@reheatingone; @reheatingtwo; @reheatingthree]. Gaugino masses -------------- It is possible to embed the standard model gauge group into a flavor symmetry group of this model. When we take $m_1 = \cdots = m_{N_F - N_C} = m$ and $m_{N_F - N_C +1} = \cdots = m_{N_F} = \mu$, there is a global symmetry: SU($N_F - N_C$)$_F$ $\times$ SU($N_C$)$_F$ $\times$ U(1)$_B$. With $N_F - N_C \geq 5$ or $N_C \geq 5$, we can embed SU(3) $\times$ SU(2) $\times$ U(1) into SU($N_F - N_C$)$_F$ or SU($N_C$)$_F$, respectively. In the case where we embed SU(3) $\times$ SU(2) $\times$ U(1) into the SU($N_F - N_C$)$_F$ flavor symmetry, the standard model gauge group at low energy is a diagonal subgroup of the SU(3) $\times$ SU(2) $\times$ U(1) in the SU($N_F - N_C$) dual gauge interaction (under which $q$ and $\bar q$ transform and $M$ is neutral) and that in the SU($N_F - N_C$)$_F$ flavor group. As discussed in Ref. [@Intriligator:2006dd], there is an unbroken $R$-symmetry under which $M$ carries charge two and $q$ and $\bar q$ are neutral. Since the $R$-symmetry forbids the gaugino masses, there is no contribution to the gaugino masses of the standard model gauge group even though it is directly coupled to the supersymmetry breaking sector. It is instructive to see how the gaugino masses vanish at one loop. The fields $\rho$ and $\tilde \rho$ carry quantum numbers of both SU($N_F - N_C$) and SU($N_C$)$_F$ and couple to $\hat{\Phi}$, which has a non-vanishing vacuum expectation value in its $F$-component. Therefore $\rho$ and $\tilde \rho$ play the role of messenger fields in gauge mediation.[^3] The relevant superpotential for this discussion is $$\begin{aligned} W = - h \rho \hat \Phi \tilde \rho - h \bar m ( \rho \tilde Z + \tilde \rho Z )\ ,\end{aligned}$$ where we suppressed indices and defined $\bar m \equiv \sqrt{m \hat \Lambda}$. The $\rho$ and $Z$ fields have mixing terms. In matrix notation, $$\begin{aligned} W = h ( \rho , Z) {\cal M} \left( \begin{array}{c} \tilde \rho \\ \tilde Z\\ \end{array} \right)\ ,\end{aligned}$$ where ${\cal M}$ is the mass matrix for the messenger fields, $$\begin{aligned} {\cal M} = \left( \begin{array}{cc} \hat \Phi & \bar m \\ \bar m & 0 \\ \end{array} \right)\ .\end{aligned}$$ The formula for the gaugino masses can be generalized to this multi-messenger case as follows: $$\begin{aligned} m_\lambda = \frac{g^2 \bar N}{(4 \pi)^2} F_{\hat \Phi} \frac{\partial}{\partial \hat \Phi} \log \det {\cal M}\ , \label{eq:gaugino}\end{aligned}$$ where $\bar N$ is $N_C$ or $N_F - N_C$ depending on whether we embed the standard model gauge group into the SU($N_F - N_C$)$_F$ or the SU($N_C$)$_F$ flavor symmetry. This formula is valid when $F_{\hat \Phi} \ll \bar m^2$. Since there is no $\hat \Phi$ dependence in $\det {\cal M}$, we obtain $m_\lambda = 0$. We can now clearly see that the gaugino mass would vanish at the leading order in $F_\Phi /\bar m^2$ even if we could obtain a non-vanishing vacuum expectation value for $\hat \Phi$ which breaks the $R$-symmetry [@Izawa:1997gs].
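A short symbolic check makes the absence of $\hat \Phi$ dependence in $\det {\cal M}$ explicit, and also previews why a meson mass term of the type introduced in the next section changes the conclusion. This is only an illustrative sketch: the $2\times 2$ matrices below stand schematically for the messenger mass matrices, and the entry $m_z$ anticipates the deformation discussed below rather than being part of the ISS model itself.

```python
import sympy as sp

Phi, mbar, mz = sp.symbols('Phi mbar m_z')

# ISS case: det(M) = -mbar^2 has no Phi dependence, so the leading-order
# gaugino mass, proportional to d(log det M)/dPhi in the formula above, vanishes.
M_ISS = sp.Matrix([[Phi, mbar], [mbar, 0]])
print(sp.diff(sp.log(M_ISS.det()), Phi))                 # -> 0

# Deformed case (next section): a meson mass term m_z Z Ztilde gives
# det(M) = Phi*m_z - mbar^2, so the derivative, and hence the leading-order
# gaugino mass, no longer vanishes.
M_def = sp.Matrix([[Phi, mbar], [mbar, mz]])
print(sp.simplify(sp.diff(sp.log(M_def.det()), Phi)))    # -> m_z/(Phi*m_z - mbar**2)
```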
In the following section, we consider a model with explicit $R$-symmetry breaking which generates the gaugino masses at the leading order in $F_\Phi / \bar m^2$. Deformed ISS model ================== Motivated by the discussion in the previous section, we consider a modification of the ISS model which contains a mass term for the meson fields $Z$ and $\tilde Z$ so that $\det {\cal M}$ has $\hat \Phi$ dependence. In the electric description, this corresponds to adding the following superpotential term $$\begin{aligned} W \ni - \frac{1}{m_X} (Q_I \bar Q_a) (Q_a \bar Q_I)\ , \label{eq:higher-d}\end{aligned}$$ where the color SU($N_C$) indices are contracted in $(Q \bar Q)$. Though this is a non-renormalizable interaction, it can be generated by integrating out extra massive fields coupled to $(Q_a, Q_I)$ in a renormalizable theory. In section 4, we will show that such a theory can be realized on intersecting branes in string theory. This interaction preserves the global symmetry SU($N_F - N_C$)$_F$ $\times$ SU($N_C$)$_F$ $\times$ U(1)$_B$. We assume the same structure for the mass terms of $Q$ and $\bar Q$ as that in the model in the previous section, i.e., $$\begin{aligned} W_{\rm mass} = m (Q_I \bar Q_I) + \mu (Q_a \bar Q_a)\ ,\end{aligned}$$ so that the global symmetry is preserved. In the magnetic description, the mass terms correspond to $$\begin{aligned} W_{\rm mass}^{\rm mag.} = \bar m^2 {{\rm Tr}}Y + \bar \mu^2 {{\rm Tr}}\hat{\Phi}\ ,\end{aligned}$$ where $\bar m^2 \equiv m \hat \Lambda$ and $\bar \mu^2 \equiv \mu \hat \Lambda$. In terms of component fields, the full superpotential is given by $$\begin{aligned} W=h {{\rm Tr}}\left[ \bar m^2 Y + \bar \mu^2 \hat{\Phi} - \chi Y \tilde{\chi}-\chi Z \tilde{\rho}-\rho \tilde{Z} \tilde{\chi} -\rho \hat{\Phi} \tilde{\rho} -m_z Z \tilde{Z} \right].\end{aligned}$$ We could have added other terms compatible with the global symmetry. Although the theorem of [@NS] implies that a generic deformation of the superpotential generates a supersymmetry preserving vacuum at tree level, it may not cause a problem for our scenario as long as the new vacuum is far from the one we are interested in and the transition rate between the vacua is small. However, since there are tree-level flat directions in $\hat{\Phi}$, a deformation by ${{\rm Tr}}\hat{\Phi}^2$ destabilizes the ISS vacuum. Whether such a deformation is prohibited is a question of ultra-violet completions of the theory, but there is an interesting observation we can make from the point of view of the low energy effective theory. As we will see later, we need a certain level of hierarchy between $m$ and $\mu$ ($\mu \ll m$) to suppress the tunneling rate into unwanted vacua and also to avoid a Landau pole of the standard model gauge coupling. With this hierarchy, the model possesses an approximate (anomalous) $R$-symmetry which is softly broken by the small mass term $\mu$. The charge assignment is $R(Q_I) = R(\bar Q_I) = 1$ and $R(Q_a) = R(\bar Q_a) = 0$. This symmetry justifies the absence or suppression of other higher dimensional operators such as ${{\rm Tr}}\hat \Phi^2$ which destabilize the supersymmetry breaking vacua. (The supersymmetry breaking vacua remain stable as long as the coefficient of ${{\rm Tr}}\hat \Phi^2$ is smaller than $\mu$.) Vacuum structure ---------------- The introduction of the mass term for $Z$ and $\tilde Z$ makes the vacuum structure of this model quite rich.
In addition to the supersymmetric and supersymmetry breaking vacua in the ISS model, there are also several stable supersymmetry breaking vacua. The stability of these vacua and the decay probabilities between them are controlled by the parameters in the superpotential. As long as $m_z$ is smaller than $\bar m$, we can think of $m_z$ as a small perturbation to the ISS model, and thus there exists a similar meta-stable supersymmetry breaking vacuum. The effect of a finite value of $m_z$ is a small shift of $\hat \Phi$ of $O(m_z)$. The pseudo-moduli $\hat \Phi$ and Re$[{{\rm Tr}}\delta \hat \chi]$ obtain masses of $O(h^2 \bar \mu^2 / \bar{m})$ as in the ISS model. We show in Figure \[fig:potential\] the one-loop effective potential for the pseudo-moduli $\hat \Phi$. We see a small shift of the minimum. For $m_z > \bar m$, this vacuum is destabilized. Therefore, we assume in the following that $m_z$ is smaller than $\bar m$. The small $m_z$, in fact, modifies the vacuum structure drastically far away from the origin of the field space. We can find other supersymmetry breaking vacua with $$\begin{aligned} \label{mstable} \rho \tilde{\rho} =\frac{m_z^2}{ m^2}Z\tilde{Z}= {\rm diag}(\bar \mu^2, \dots \bar \mu^2,0 \dots 0), \quad \chi \tilde{\chi}= \bar m {\bf 1}_{N_F-N_C}, \quad\end{aligned}$$ $$\begin{aligned} \label{mstable2} Y = - \frac{\bar \mu^2}{m_z} {\bf 1}_{N_F - N_C}, \quad \hat \Phi = - \frac{\bar m^2}{m_z} {\rm diag} (1, \dots 1, 0, \dots 0)\ , \quad V_{\rm lower}=(N_C-n)|h \bar \mu^2|^2\ ,\end{aligned}$$ where the number of $\bar \mu^2$ entries in the first equation, denoted $n$, runs from $1$ to $N_F-N_C$. Since these vacua have energies lower than that of the ISS vacuum, $V_{\rm ISS}=N_C |h \bar \mu^2|^2$, the ISS vacuum has a non-zero transition probability into them. Below, we show that the decay rate can be made parametrically small by a mass hierarchy, $\bar \mu \ll \bar m$. Although the vacuum with $n=N_F-N_C$ is the global minimum of the classical potential, it is not phenomenologically viable since gauginos cannot acquire masses at the leading order in $F/\bar m^2$, for the same reason as in the original ISS model, when we embed the standard model into some of the unbroken global symmetry. As we have seen, our vacuum is not the global minimum of the potential. It can decay into lower energy vacua specified by (\[mstable\]) and (\[mstable2\]). We estimate the decay rate by evaluating the Euclidean action from our vacuum to others. The barrier generated by the one-loop potential is not high, of order $O(\bar \mu^4)$. Thus, the most efficient path is to climb up the potential of $\hat \Phi$ and then slide down to more stable supersymmetry breaking vacua. The distance between $\langle \hat \Phi \rangle |_{\rm lower}$ and $\langle \hat \Phi \rangle |_{\rm ISS}$ is of order $O(\bar m^2/m_z)$ and is wide compared to the height of the potential. Thus, we can estimate the bounce action with the triangle approximation [@Duncan], $$\begin{aligned} S\sim {\left(\frac{\bar m }{ \bar \mu }\right)^4\left(\frac{\bar m }{ m_z}\right)^4}. \nonumber \end{aligned}$$ Even if we choose $m_z\sim \bar m$, which will be required below, the Euclidean action can be made parametrically large by taking $\bar \mu \ll \bar m$. Thus, the decay rate is parametrically small. One might think that we can find a more efficient path through the tree-level potential barrier. However, such a path at least has to climb over $V_{\rm peak}\sim O(\bar \mu^2 \bar m^2)$, which is very high compared to the difference between the two supersymmetry breaking vacua, of order $O(\bar \mu^4)$.
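To get a quantitative feeling for this suppression, one can simply evaluate the triangle-approximation estimate above for a few illustrative choices of the ratios $\bar m/\bar \mu$ and $\bar m/m_z$. The short sketch below is our own numerical illustration; the ratios used are placeholders with no physical significance, chosen only to exhibit the parametric behaviour of $S$ and of the decay rate $\Gamma \propto e^{-S}$.

```python
# Rough numerical illustration (placeholder ratios) of the triangle-approximation
# bounce action S ~ (mbar/mubar)^4 (mbar/m_z)^4 and the tunneling suppression.
import math

def bounce_action(mbar_over_mubar, mbar_over_mz):
    return mbar_over_mubar**4 * mbar_over_mz**4

for r_mu in (10.0, 30.0, 100.0):     # mbar / mubar
    for r_mz in (1.0, 3.0):          # mbar / m_z (m_z ~ mbar is the case of interest)
        S = bounce_action(r_mu, r_mz)
        # exp(-S) underflows immediately, so report log10 of the suppression factor.
        print(f"mbar/mubar = {r_mu:5.0f}, mbar/m_z = {r_mz:3.1f}: "
              f"S ~ {S:.2e}, log10(exp(-S)) ~ {-S / math.log(10):.2e}")
```

Even the mildest hierarchy shown gives an enormous suppression, so the decay rate is controlled entirely by the hierarchy $\bar \mu \ll \bar m$. The same conclusion holds for the alternative path through the tree-level barrier, considered next.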
In this case, we can use the thin wall approximation [@Coleman] to estimate the bounce action and obtain $S \sim ( \bar m / \bar \mu)^8$. Again, we can make it parametrically large when $\bar \mu \ll \bar m$. [*Supersymmetry preserving vacua*]{} So far we studied supersymmetry breaking vacua. In addition to these, the model also has supersymmetric vacua. Here, we will show that these supersymmetry preserving vacua can also be identified in the free magnetic dual description. Following [@Intriligator:2006dd], we look for a supersymmetric vacuum where meson fields get large expectation values. By the vacuum expectation value of $Y$ and $\hat{\Phi}$, dual quarks $\chi,\tilde{\chi}$ and $\rho, \tilde{\rho}$ become massive and can be integrated out. Also in the energy scale $E< hm_z$, $Z$ and $\tilde{Z}$ should be integrated out. Thus, we are left with the superpotential, $$\begin{aligned} W=-h \bar m^2Y -h \bar \mu^2 \hat{\Phi}+(N_F-N_C)\Lambda_{\rm eff}^3. \nonumber \end{aligned}$$ where the last term is generated by non-perturbative dynamics of a pure SU$(N_F-N_C)$ gauge theory. The low energy scale $\Lambda_{\rm eff}$ after decoupling of dual quarks, is given by the matching conditions at the two mass scales $hY$ and $h\hat{\Phi}$, $$\begin{aligned} \Lambda_{\rm eff}^{3}=\langle hY \rangle \langle h\hat{\Phi} \rangle^\frac{N_C }{ N_F-N_C}\Lambda_m^\frac{2N_F-3N_C }{ N_F-N_C}\ . \nonumber \end{aligned}$$ Note that $Z$ and $\tilde{Z}$ are singlets for the gauge group and do not contribute to running of gauge coupling. With the non-perturbative superpotential, $F$-term conditions for light field $Y$ and $\hat{\Phi}$ have solutions of the form, $$\begin{aligned} \langle h \hat{\Phi} \rangle=\bar m^\frac{2(N_F-N_C)}{N_C}\Lambda_m^\frac{3N_C-2N_F }{ N_F-N_C} ,\qquad \langle hY \rangle =\frac{\bar \mu^2 }{ \bar m^\frac{2(2N_C-N_F)}{ N_C}}\Lambda_m^\frac{3N_C-2N_F}{N_C}.\nonumber \end{aligned}$$ Since $\langle h\hat{\Phi} \rangle \gg \langle hY \rangle$ and the difference of the vacuum expectation value $\hat{\Phi}$ between supersymmetric vacua and supersymmetry breaking vacua is very large, compared to the height of supersymmetry breaking vacua, we can estimate the Euclidean action for the decay process by triangle approximation [@Duncan], $$\begin{aligned} S\sim \frac{\langle h \hat{\Phi}\rangle^4 }{ \bar \mu^4}\sim \left( \frac{\bar m}{ \bar \mu} \right)^4 \left( \frac{\Lambda_m}{\bar m} \right)^{4(3 N_C - 2 N_F)/N_C}\ . $$ The factor $3N_C - 2 N_F$ is always positive. Therefore, with the mass hierarchy $\bar \mu \ll \bar m$ and $\bar m \ll \Lambda_m$, we can make the action arbitrarily large, and thus make the meta-stable vacua arbitrarily long-lived. These conditions also allow us to ignore higher order correction to the Kähler potential. Gaugino and scalar masses ------------------------- With the explicit $R$-symmetry breaking by $m_z$, direct mediation of supersymmetry breaking happens. The standard model gauge group can be embedded into either the SU($N_F - N_C$)$_F$ or the SU($N_C$)$_F$ flavor symmetry which is remained unbroken at low energy. The gaugino masses are, in this case, given by the same formula in Eq. 
(\[eq:gaugino\]) with mass matrix ${\cal M}$: $$\begin{aligned} {\cal M} = \left( \begin{array}{cc} \hat \Phi & \bar m \\ \bar m & m_z \\ \end{array} \right)\ .\end{aligned}$$ Therefore $$\begin{aligned} m_\lambda = \frac{ g^2 \bar N }{(4 \pi)^2} \frac{h \bar \mu^2}{\bar m} \frac{ m_z }{ \bar m} + O\left( \frac{m_z^2}{\bar m^2} \right)\ ,\end{aligned}$$ with $g^2$ the gauge coupling constant of the standard model gauge interaction. The factor $\bar N$ is again $\bar N = N_C$ $(\bar N=N_F - N_C)$ when we embed the standard model gauge group into SU($N_F - N_C$) (SU($N_C$)). Scalar masses are also obtained by two-loop diagrams. They are calculated to be $$\begin{aligned} m_i^2 = 2 \bar N C_2^i \left( \frac{g^2}{(4 \pi)^2} \right)^2 \left( \frac{h \bar \mu^2}{\bar m} \right)^2 + O\left( \frac{m_z^4}{\bar m^4} \right)\ .\end{aligned}$$ Here $C_2^i$ is the quadratic Casimir factor for a field labeled $i$. For the gaugino and scalar masses to be of similar size, $m_z \sim \bar m / \sqrt{ \bar N}$ is required. It is possible to have this relation as long as $m_z < \bar m$ without destabilizing the meta-stable vacuum. Mass spectrum and the Landau pole problem ----------------------------------------- We summarize the mass spectrum at the ISS vacuum here. The massless modes are the Goldstone boson, Im$[{{\rm Tr}}\delta \hat \chi]$, and the fermionic component of ${{\rm Tr}}\delta \hat \chi$. The pseudo-moduli $\hat \Phi$ and Re$[{{\rm Tr}}\delta \hat \chi]$ have masses of a size similar to the gaugino masses, i.e., $O(100~{\rm GeV})$. Other component fields in the chiral multiplets $Y$, $Z$, $\tilde Z$, $\rho$, $\tilde \rho$, $\chi$ and $\tilde \chi$ have masses of $O(h \bar m)$ or are eaten by the gauge/gaugino fields. The discussion of the Landau pole depends on the way the standard model gauge group is embedded into the flavor symmetries. We separately discuss two cases. We find that it is possible to avoid a Landau pole if we embed the standard model gauge group into the SU($N_F - N_C$)$_F$ flavor symmetry and take the dynamical scale and the mass parameter $\bar m$ to be large enough. We also comment on an alternative possibility that the SU($N_C$) gauge theory above the scale $m$ is a conformal field theory (CFT). This possibility allows us to take the mass parameter $m$ and the dynamical scale $\Lambda$ to be much lower than the unification scale without the Landau pole problem. ### Embedding SU(3) $\times$ SU(2) $\times$ U(1) into SU(${\mathbf{ N_F - N_C }}$)$_\mathbf{F}$ In this case, the pseudo-moduli $\hat \Phi$ is a singlet under the standard model gauge group, and thus it does not contribute to the beta function. The beta function coefficients of the SU(3) gauge coupling are $$\begin{aligned} b_3 (\mu_R < h \bar m ) = -3, \quad b_3 ( h \bar m < \mu_R < \Lambda ) = - 3 + 2 N_F - N_C, \quad b_3 ( \mu_R > \Lambda) = -3 + N_C,\end{aligned}$$ where $\mu_R$ is a renormalization scale. Above the mass scale $m_X (\gg \Lambda)$, which is defined in Eq. (\[eq:higher-d\]), the theory should be replaced by a renormalizable theory, which necessarily contains additional fields. Therefore, there are contributions from those fields above the scale $m_X$. The size of the contributions depends on the specific ultra-violet completion of the theory. In order for the embedding to be possible, $N_F - N_C \geq 5$, and from the condition $N_C < N_F < {3\over 2}N_C $, we obtain $$\begin{aligned} 2 N_F - N_C > 20, \quad N_C > 10\ .\end{aligned}$$ There is thus quite a large contribution to the beta function.
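The size of this contribution, and how close it pushes the running coupling to a Landau pole, is easy to quantify at one loop. The sketch below is our own illustration: it uses the piecewise coefficients quoted above in the convention $d(1/\alpha_3)/d\ln\mu_R = -b_3/2\pi$, with the smallest allowed choice $(N_C, N_F) = (11, 16)$; the input value of $\alpha_3(M_Z)$ and the threshold scales are placeholder assumptions rather than numbers fixed by the model (they happen to coincide with the example discussed in the next paragraph).

```python
# One-loop running sketch (our own, with assumed thresholds) for the SU(3)
# coupling, using the piecewise b_3 quoted above and the convention
#   d(1/alpha_3)/d ln(mu_R) = -b_3 / (2*pi).
import math

NC, NF = 11, 16                            # smallest values with NF - NC >= 5, NC < NF < 1.5*NC
b_low, b_mid = -3.0, -3.0 + 2 * NF - NC    # below h*mbar, and between h*mbar and Lambda
print("b_3 below h*mbar:", b_low, "  b_3 between h*mbar and Lambda:", b_mid)

alpha_inv = 1.0 / 0.118                    # alpha_3(M_Z), assumed input
mu = 91.0                                  # GeV
segments = [(1.0e13, b_low),               # run up to h*mbar ~ 1e13 GeV (assumption)
            (1.0e16, b_mid)]               # then up to Lambda ~ M_GUT   (assumption)

for mu_next, b3 in segments:
    alpha_inv -= b3 / (2 * math.pi) * math.log(mu_next / mu)
    mu = mu_next
    status = "Landau pole!" if alpha_inv <= 0 else f"1/alpha_3 ~ {alpha_inv:.1f}"
    print(f"mu_R = {mu:.0e} GeV: {status}")
```

With these illustrative thresholds the coupling remains (barely) perturbative up to $10^{16}$ GeV, which is why the mass scales have to be taken high, as quantified next.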
To avoid a Landau pole below the unification scale, $M_{\rm GUT} \sim 10^{16}$ GeV, the mass scales $h \bar m$ and $\Lambda$ should be high enough. For example, $\Lambda \sim M_{\rm GUT}$ and $h \bar m \gtrsim 10^{13}$ GeV can avoid the Landau pole. Although it is not conclusive, the authors of Ref. [@Intriligator:2006dd] suggested that there is a meta-stable supersymmetry breaking vacuum also when the numbers of colors and flavors are the same. If it is the case, there is an interesting possibility that we can go into the conformal window, ${3\over 2}N_C \leq N_F < 3 N_C$. If $N_F$ is in the conformal window, the gauge coupling of SU($N_C$) flows into the conformal fixed point at some scale $\Lambda_*$. The theory stays as a CFT until the mass term $m (Q_I \bar Q_I)$ becomes important, and eventually at a lower scale $\Lambda \sim m$, the theory exits from the CFT and becomes strongly coupled. The effective theory below the scale $\Lambda \sim m$ is described by an SU($N_C$) gauge theory with $N_C$ flavors with a mass term $\mu (Q_a \bar Q_a)$. This is exactly the ISS model with $N_C$ flavors. Once we assume the existence of the meta-stable supersymmetry breaking vacuum, direct gauge mediation should happen as we discussed in the previous section although we have lost the control of the perturbative calculation. (See [@Izawa:2005yf] for a similar model.) The beta function coefficient $b_3$ is in this case, $$\begin{aligned} b_3 (\mu_R < \Lambda ) = - 3, \quad b_3 (\Lambda < \mu_R < \Lambda_* ) = - 3 + \frac{3 N_C^2}{N_F} + \Delta, \quad b_3 (\mu_R > \Lambda_* ) = - 3 + N_C + \Delta^\prime \ , \label{eq:beta-cft}\end{aligned}$$ where we have included a contribution from anomalous dimensions of $Q$’s in CFTs [@Novikov:1982px; @Shifman:1986zi; @Seiberg:1994bz]. The factors $\Delta$ and $\Delta^\prime$ are unspecified contributions from the fields which generate the $m_z$ term. With ${3\over 2}N_C \leq N_F < 3 N_C$ and $N_F - N_C \geq 5$, we find $$\begin{aligned} N_C \geq 3, \quad N_F \geq 8, \quad {3 N_C^2 \over N_F} \geq {27\over 8}\ .\end{aligned}$$ Therefore, the dynamical scale $\Lambda \sim m$ can be much lower than the unification scale in this case. For example, if we take the ultra-violet completion to be simply adding a pair of massive fields $\eta_{Ia}$ and $\tilde \eta_{aI}$ which couple to $(Q_a \bar Q_I)$ and $(Q_I \bar Q_a)$, respectively, the additional contributions are $\Delta = \Delta^\prime = N_C$. In this case, we can take the dynamical scale $\Lambda \sim m$ to be as low as $O(100-1000~{\rm TeV})$ without a Landau pole problem for $N_C = 3$ and $N_F = 8$. We implicitly took the scale $m_X$, where the $m_z$ term is generated, to be $O(\Lambda)$ in Eq. (\[eq:beta-cft\]) because of the requirement $m_z \sim \bar m$ for the sizes of the gaugino and scalar masses to be similar. With $m_z \sim \Lambda^2 / m_X$ (see Eq. (\[eq:higher-d\])) and $m \sim \Lambda$, we need to take $m_X \sim \Lambda$. However, the actual scale at which new fields appear can be much higher than $\Lambda$ or even $\Lambda_*$ when the anomalous dimensions of $Q$ and $\bar Q$ are large in the CFT. For example, when $N_F \leq 2 N_C$, $(Q_I \bar Q_a) (Q_a \bar Q_I)$ is a marginal or a relevant operator. In this case, it is not required to have an ultra-violet completion of the theory up to $O(\Lambda_*)$ or higher, i.e., $\Delta = 0$, while satisfying $m_z \sim \bar m$. 
This can be understood by the running of the $1/m_X$ parameter in the CFT: $$\begin{aligned} \frac{1}{m_X (\mu_R)} = \frac{1}{ m_X (\Lambda) } \left( \mu_R \over \Lambda \right)^{(2N_F - 6 N_C)/N_F}\ .\end{aligned}$$ The unspecified contribution $\Delta^\prime$ is not important if $\Lambda_*$ is high enough. If $N_F - N_C > 5$, there are flavors with mass $m$ which are not charged under the standard model gauge group. If we reduce the masses of those fields to be slightly smaller than $m$, the low energy effective theory below $\Lambda$ has more flavors and we can perform a reliable perturbative calculation of the potential for pseudo-moduli. It is interesting to note that this CFT model may be regarded as a dual description of models with a warped extra-dimension in Refs. [@Gherghetta:2000qt; @Gherghetta:2000kr; @Goldberger:2002pc; @Nomura:2004zs], where supersymmetry is broken on an infrared brane, and standard model gauge fields are living in the bulk of the extra-dimension. ### Embedding SU(3) $\times$ SU(2) $\times$ U(1) into SU(${\mathbf{ N_C }}$)$_\mathbf{F}$ In this case, $b_3$ is given by $$\begin{aligned} b_3 (\mu_R < h \bar m ) = -3 + N_C, \quad b_3 ( h \bar m < \mu_R < \Lambda ) = - 3 + 2 N_F - N_C, \quad b_3 ( \mu_R > \Lambda) = -3 + N_C.\end{aligned}$$ The condition for the embedding to be possible is $N_C \geq 5$. Therefore $$\begin{aligned} 2 N_F - N_C > 5, \quad N_C \geq 5\ .\end{aligned}$$ With this constraint, there is always a Landau pole below the unification scale. The situation does not improve even if we consider the possibility of the CFT above the mass scale $m$. To summarize, by embedding the standard model gauge group in the SU($N_F - N_C$)$_F$ subgroup of the flavor symmetry, we can couple the ISS model to the standard model. The gaugino masses are generated at one-loop, and the Landau pole problem can be avoided if the gauge coupling scale of the ISS sector is sufficiently high or if the theory above the mass scale $m$ is a CFT. Ultra-violet completions ======================== The perturbation to the ISS model we considered in the previous section is non-renormalizable in the electric description. In this section we will show that the model can be regarded as a low energy effective theory of a renormalizable gauge theory at high energy. Moreover, this renormalizable theory itself can be realized as a low energy effective theory on intersecting branes and on branes on a local Calabi-Yau manifold in string theory. In order to decouple Kaluza-Klein and string excitations from the gauge theory, the length scale of these brane configurations as well as the string length must be smaller than that of the gauge theory. These brane configurations are so simple that it may be possible to incorporate them in the on-going effort to construct the minimal supersymmetric standard model from string theory compactifications. One way to generate the non-renormalizable interaction (\[eq:higher-d\]) is as follows. Consider an ${\cal N}=2$ quiver gauge theory with the gauge group U($N_1$) $\times$ U($N_2$) $\times$ U($N_3$) with $$\begin{aligned} N_1=N_F-N_C,~N_2=N_C,~N_3=N_C,\end{aligned}$$ and identify U($N_2$) with the gauge group U($N_C$) of the ISS model.[^4] We assume that the scales $\Lambda_1, \Lambda_3$ for the other gauge group factors are so low that we can treat U($N_1$) $\times$ U($N_3$) as a flavor group. 
We then deform the theory by turning on the superpotential $W_1(X_1)+W_2(X_2)+W_3(X_3)$ for the adjoint fields $X_1, X_2, X_3$ in the ${\cal N}=2$ vector multiplets given by $$\begin{aligned} W_1=\frac{M_X}{2}X_1^2+\alpha_1 X_1, \quad W_2=-\frac{M_X}{2}X_2^2, \quad W_3=\frac{M_X}{2}X_3^2+\alpha_3 X_3 .\nonumber\end{aligned}$$ This breaks ${\cal N}=2$ supersymmetry into ${\cal N}=1$, and the total tree level superpotential of the deformed theory is $$\begin{aligned} W_{tree}=&-Q_{21}X_1Q_{12}+Q_{12}X_2Q_{21}-Q_{32}X_2Q_{23}+Q_{23}X_3Q_{32}\nonumber \\ & +\, W_1(X_1)+W_{2}(X_2)+W_3(X_3) \nonumber \end{aligned}$$ After integrating out massive fields $X_i$, the superpotential can be written as$$\begin{aligned} &W_{tree} = {{{\rm Tr}}} \, m_Q Q\bar{Q} +{{{\rm Tr}}} \, K_1 Q\bar{Q} K_2 Q\bar{Q} \nonumber \\ m_Q&={\rm diag}\left({\alpha_1/M_X},{\alpha_3/M_X}\right),\quad K_1={\rm diag}\left( 0, {1/M_X} \right), \quad K_2={\rm diag}\left( 1, 0 \right). \nonumber\end{aligned}$$ This reproduces the interaction (\[eq:higher-d\]) and the mass terms for $(Q_I, Q_a)$ if we set $$\begin{aligned} \frac{\alpha_1 \Lambda_2}{M_X} = h \bar m^2 ,\qquad \frac{\alpha_3 \Lambda_2}{M_X} = h \bar \mu^2,\qquad \frac{\Lambda_2^2}{M_X}=hm_z . \label{set}\end{aligned}$$ Since we suppose $\Lambda_2 < M_X$, all the equations can be satisfied by appropriately choosing parameters $\alpha_{1,2}$ and $M_X$. Embedding in string theory -------------------------- In the perturbative string theory, the collective coordinates of D-branes are open strings ending on them [@Polchinski]. Since the lightest degrees of freedom of open strings include gauge fields, variety of gauge theories arise on intersecting branes in the low energy limit where the string length becomes small and the coupling of D-branes to the bulk gravitational degrees of freedom becomes negligible [@HW; @EGK; @GK]. We will present an intersecting brane configuration where the deformed ISS model is realized as a low energy effective theory. One should not be confused that our use of the intersecting brane model implies that the theory above the dynamical scale $\Lambda$ is replaced by string theory or a higher dimensional theory. The string length and the compactification scale are much shorter than the gauge theory scale. It is one of the string miracles that quantum moduli spaces of low energy gauge theories are often realized as actual physical spaces such as brane configurations or Calabi-Yau geometry, allowing us to discuss deep infrared physics in the ultra-violet descriptions of the theories. This phenomenon has been well-established for moduli spaces of supersymmetric vacua, and it has just begun to be explored for supersymmetry breaking vacua [@O2II; @FU; @IAS; @ABFK; @ABSV; @Verlinde]. (For earlier works in this direction, see for example [@dBHOO; @Vafa:2000; @Kachru2002].) Here, we will find that meta-stable supersymmetry breaking vacua of the deformed ISS model are realized as geometric configurations of branes. Consider Type IIA superstring theory in the flat 10-dimensional Minkowski spacetime with coordinates $x^{0,\cdots,9}$. Introduce four NS5 branes located at $x^{7,8,9}=0$ and at different points in the $x^6$ direction, and extended in the $x^{0,\cdots,3}$ and $x^{4,5}$ directions. Let us call these NS5 branes from the left to right along the $x^6$ direction as NS5$_1$, NS5$_2$, NS5$_3$, and NS5$_4$. 
We then suspend $(N_F-N_C)$ D4 branes between NS5$_1$ and NS5$_2$, $N_C$ D4 branes between NS5$_2$ and NS5$_3$, and $N_C$ D4 branes between NS5$_3$ and NS5$_4$. The brane dynamics in the common $x^{0,\cdots,3}$ directions is described by the ${\cal N}=2$ supersymmetric quiver gauge theory with the gauge group U($N_F-N_C$) $\times$ U($N_C$) $\times$ U($N_C$). Note that the gauge coupling constants $g_{{\rm YM}}^{(i)}$, $i=1,2,3$, for the three gauge group factors are given at the string scale by $$\begin{aligned} (g_{{\rm YM}}^{(i)})^2 = g_{\rm s} {\ell_{\rm s} \over L_i},\end{aligned}$$ where $g_{\rm s}$ and $\ell_{\rm s}$ are the string coupling constant and the string length, and $L_1, L_2, L_3$ are the lengths of the three types of D4 branes suspended between the NS5 branes. The gauge couplings $g_{{\rm YM}}^{(i=1,2,3)}$ set the initial conditions for the renormalization group equations in the ultra-violet. The ${\cal N}=2$ quiver gauge theory is realized in the low energy limit where $g_{\rm s}, \ell_{\rm s}, L_i \rightarrow 0$, keeping $g_{\rm YM}^{(i)}$ fixed. We choose $L_2 \ll L_1, L_3$ so that the gauge coupling constants for U($N_1$) $\times$ U($N_3$) are small. We can turn on the superpotentials $W_1+W_2+W_3$ by rotating NS5$_2$ and NS5$_4$ into the $x^{7,8}$ directions. More precisely, we use the complex coordinates $z=x^4+ix^5$ and $w=x^7+ix^8$ and rotate the two NS5 branes on the $z-w$ plane so that they are extended in the direction of $\cos \theta z + \sin\theta w$. The holomorphic rotation preserves ${\cal N}=1$ supersymmetry. In the field theory, this corresponds to turning on $W_1+W_2+W_3$ with $M_X = \tan\theta$ [@barbon]. We can also turn on the quark masses $m$ and $\mu$ by moving NS5$_1$ and NS5$_4$ in the $w$ direction. The resulting configuration is shown in Figure \[figelectric\]. We can also T-dualize the NS5 branes to turn the D4 branes suspended between the NS5 branes into D branes wrapping compact cycles in a local Calabi-Yau manifold [@CFIKV]. Realizations of meta-stable vacua on branes partially wrapping cycles in Calabi-Yau manifolds have been discussed, for example, in [@OO; @ABSV]. The brane configuration shown in Figure 2 is similar to the one that appeared recently in [@ABFK]. However, there are some important differences. In the model of [@ABFK], the quark masses $m$ and $\mu$ in the electric description are set equal to zero. Moreover, the strong coupling scales of the three gauge group factors are chosen as $\Lambda_1, \Lambda_2 \ll \Lambda_3$ in the model of [@ABFK], whereas $\Lambda_1, \Lambda_3 \ll \Lambda_2$ in our model. These differences have led to different ways of supersymmetry breaking in these models. Despite the differences, some of the results in [@ABFK] may be useful for further studies of our model. Meta-stable supersymmetry breaking vacua on the brane configuration ------------------------------------------------------------------- In [@O2II; @FU], the ISS model and its magnetic dual were studied by realizing them on intersecting branes, and brane configurations for the supersymmetry breaking vacua were identified. The brane configurations provide a geometric way to understand the vacuum structure of the model.[^5] Recently, it was used, for example, to study solitonic states on the meta-stable vacuum in the ISS model [@Eto:2006yv]. Here, we will present brane configurations that correspond to the meta-stable vacua in the deformed ISS model.
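Before turning to the magnetic description, it may help to see numerically how the two ingredients above fit together: the brane lengths fix the ultraviolet gauge couplings through $(g_{\rm YM}^{(i)})^2 = g_{\rm s}\ell_{\rm s}/L_i$, and the parameters $\alpha_{1,3}$ and $M_X$ of the adjoint superpotential are fixed by the matching conditions (\[set\]). The snippet below is a toy illustration with made-up input values (they carry no physical significance); it only shows that $L_2 \ll L_1, L_3$ makes the U($N_1$) $\times$ U($N_3$) couplings weak, and that Eq. (\[set\]) determines $\alpha_1$, $\alpha_3$ and $M_X$ once $\bar m$, $\bar \mu$, $m_z$ and $\Lambda_2$ are chosen.

```python
# Toy illustration (made-up numbers) of the brane-length formula for the
# ultraviolet gauge couplings and of the matching conditions (set).
g_s, ell_s = 0.1, 1.0                      # string coupling and string length (ell_s units)
L1, L2, L3 = 50.0, 1.0, 50.0               # suspended-brane lengths, with L2 << L1, L3

g_sq = {i: g_s * ell_s / L for i, L in enumerate((L1, L2, L3), start=1)}
print("g_YM^2 per factor:", {i: round(v, 4) for i, v in g_sq.items()})
# only the middle U(N_C) factor gets an appreciable coupling

# Matching conditions (set): alpha1*Lam2/M_X = h*mbar^2, alpha3*Lam2/M_X = h*mubar^2,
# Lam2^2/M_X = h*m_z.  Solve for (M_X, alpha1, alpha3) given the infrared parameters.
h, mbar, mubar, m_z, Lam2 = 1.0, 1.0e8, 1.0e5, 1.0e8, 1.0e9     # GeV, placeholders
M_X = Lam2**2 / (h * m_z)
alpha1 = h * mbar**2 * M_X / Lam2
alpha3 = h * mubar**2 * M_X / Lam2
print(f"M_X ~ {M_X:.1e} GeV, alpha1 ~ {alpha1:.1e} GeV^2, alpha3 ~ {alpha3:.1e} GeV^2")
assert Lam2 < M_X                          # consistency with the assumption Lambda_2 < M_X
```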
To identify the meta-stable vacua, we need to go to the magnetic description, which is realized on branes by exchanging NS5$_2$ and NS5$_3$. Since we assume $L_2 \ll L_1, L_3$, it is reasonable to expect that the first duality transformation involves only these two NS5 branes. To avoid confusion, let us call the resulting NS5 branes as NS5$_1$, NS5$_2'$, NS5$_3'$, and NS5$_4$ from the left to right in the $x^6$ direction. Note that NS5$_1$ and NS5$_2'$ are parallel to each other, and so are NS5$_3'$ and NS5$_4$. There are $(N_F-N_C)$ D4 branes between NS5$_1$ and NS5$_3'$, $N_C$ anti-D4 branes between NS5$_2'$ and NS5$_3'$, and $N_C$ D4 branes between NS5$_2'$ and NS5$_4$. The ISS vacuum is obtained by bending the $N_C$ D4 branes between NS5$_2'$ and NS5$_4$ toward NS5$_3'$, disconnect each of them at NS5$_3'$, and annihilate their segments between NS5$_2'$ and NS5$_3'$ with the $N_C$ anti-D4 branes by the tachyon condensation. The resulting brane configuration is shown in Figure \[figISS\]. Note that this configuration breaks supersymmetry since the D4 branes between NS5$_1$ and NS5$_3'$ and the D4 branes between NS5$_3'$ and NS5$_4$ are in angles. Since their end-point separation is of the order of $|m|$ whereas the supersymmetry breaking is of the order of their relative angles $\sim |\mu|$, an open string stretched between these D4 branes does not contain a tachyon mode provided $|m| \gg |\mu|$. Since NS5$_3'$ and NS5$_4$ are parallel to each other, the $N_C$ D4 branes between them can move along them. This freedom corresponds to pseudo-moduli $\hat \Phi$. These D4 branes are stabilized by a potential induced by closed string exchange between them and the D4 branes between NS5$_1$ and NS5$_2'$, which is the closed string dual of the Coleman-Weinberg potential. We can also identify the other meta-stable vacua of the deformed ISS model. Let us take $n$ of the $N_C$ D4 branes between NS5$_3'$ and NS5$_4$ and move them toward the $(N_F-N_C)$ D4 branes between NS5$_1$ and NS5$_3'$. Doing this costs energy since these D4 branes have to climb up the Coleman-Weinberg potential. Eventually, as they approach the D4 branes between NS5$_1$ and NS5$_3'$, open strings between the two kinds of D4 branes start developing tachyonic modes. The tachyon condensation then reconnects $n$ pairs of D4 branes, leading to the brane configuration as shown in Figure 4. This process lowers the vacuum energy since the length of the single D4 brane between NS5$_1$ and NS5$_4$ is shorter than the sum of the two D4 branes before the reconnection. One can show that these brane configurations reproduce various features of the corresponding meta-stable vacua, such as their vacuum energies, expectation values of various fields (such as $\rho\tilde\rho$, $Y$, and $\hat\Phi$), and their decay processes. This can be done by a straightforward application of the brane configuration analysis in [@O2II; @FU; @IAS], and we leave it as an exercise for the readers. Meta-stability at finite temperature? ===================================== It has been shown that the meta-stable supersymmetry breaking vacuum in the ISS model is favored in the thermal history of the universe [@reheatingone; @reheatingtwo; @reheatingthree]. The essential observation is that there are more light degrees of freedom in the supersymmetry breaking vacuum compared to the supersymmetric one. Finite temperature effects make the meta-stable vacuum more attractive in this circumstance. 
In the deformed ISS model we discussed in this paper, there are many other meta-stable vacua. However, interestingly, the desired vacuum (the ISS vacuum) possesses the largest symmetry group among those vacua. In other vacua, number of degrees of freedom of the pseudo-moduli is reduced because some components of $\hat \Phi$ have masses at tree level. Therefore, the desired vacuum is the most attractive in the thermal history of the universe. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank K. Intriligator, H. Murayama, E. Silverstein, T. Watari, and T. Yanagida for discussions. RK thanks the hospitality of the high energy theory group at Rutgers University. The work of RK was supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515. The work of HO and YO is supported in part by the U.S. Department of Energy under contract number DE-FG03-92-ER40701. HO is also supported in part by the U.S. National Science Foundation under contract number OISE-0403366. YO is also supported in part by the JSPS Fellowship for Research Abroad. [0]{} S. Dimopoulos and H. Georgi, “Softly broken supersymmetry and SU(5),” Nucl. Phys. B [**193**]{}, 150 (1981). S. Dimopoulos, S. Raby and F. Wilczek, “Supersymmetry and the scale of unification,” Phys. Rev. D [**24**]{}, 1681 (1981). N. Sakai, “Naturalness in supersymmetric GUTS,” Z. Phys. C [**11**]{}, 153 (1981). M. Dine, W. Fischler and M. Srednicki, “Supersymmetric technicolor,” Nucl. Phys. B [**189**]{}, 575 (1981). S. Dimopoulos and S. Raby, “Supercolor,” Nucl. Phys. B [**192**]{}, 353 (1981); E. Witten, “Constraints on supersymmetry breaking,” Nucl. Phys. B [**202**]{}, 253 (1982). I. Affleck, M. Dine and N. Seiberg, “Supersymmetry breaking by instantons,” Phys. Rev. Lett.  [**51**]{}, 1026 (1983). I. Affleck, M. Dine and N. Seiberg, “Dynamical supersymmetry breaking in supersymmetric QCD,” Nucl. Phys. B [**241**]{}, 493 (1984). I. Affleck, M. Dine and N. Seiberg, “Dynamical supersymmetry breaking in four-dimensions and its phenomenological implications,” Nucl. Phys. B [**256**]{}, 557 (1985). E. Poppitz and S. P. Trivedi, “New models of gauge and gravity mediated supersymmetry breaking,” Phys. Rev. D [**55**]{}, 5508 (1997) \[arXiv:hep-ph/9609529\]. N. Arkani-Hamed, J. March-Russell and H. Murayama, “Building models of gauge-mediated supersymmetry breaking without a messenger sector,” Nucl. Phys. B [**509**]{}, 3 (1998) \[arXiv:hep-ph/9701286\]. K. Intriligator, N. Seiberg and D. Shih, “Dynamical SUSY breaking in meta-stable vacua,” JHEP [**0604**]{}, 021 (2006) \[arXiv:hep-th/0602239\]. T. Banks, “Remodeling the pentagon after the events of 2/23/06,” arXiv:hep-ph/0606313. M. Dine and J. Mason, “Gauge mediation in metastable vacua,” arXiv:hep-ph/0611312. E. Witten, “Mass Hierarchies In Supersymmetric Theories,” Phys. Lett. B [**105**]{}, 267 (1981). H. Murayama, “A model of direct gauge mediation,” Phys. Rev. Lett.  [**79**]{}, 18 (1997) \[arXiv:hep-ph/9705271\]. S. Dimopoulos, G. R. Dvali and R. Rattazzi, “A simple complete model of gauge-mediated SUSY-breaking and dynamical relaxation mechanism for solving the mu problem,” Phys. Lett. B [**413**]{}, 336 (1997) \[arXiv:hep-ph/9707537\]. M. A. Luty, “Simple gauge-mediated models with local minima,” Phys. Lett. B [**414**]{}, 71 (1997) \[arXiv:hep-ph/9706554\]. K. Agashe, “An improved model of direct gauge mediation,” Phys. Lett. B [**435**]{}, 83 (1998) \[arXiv:hep-ph/9804450\]. R. Kitano, “Gravitational gauge mediation,” Phys. Lett. 
B [**641**]{}, 203 (2006) \[arXiv:hep-ph/0607090\]. K. I. Izawa, Y. Nomura, K. Tobe and T. Yanagida, “Direct-transmission models of dynamical supersymmetry breaking,” Phys. Rev. D [**56**]{}, 2886 (1997) \[arXiv:hep-ph/9705228\]. R. Kitano, “Dynamical GUT breaking and mu-term driven supersymmetry breaking,” arXiv:hep-ph/0606129. M. Dine and W. Fischler, “A Phenomenological Model Of Particle Physics Based On Supersymmetry,” Phys. Lett. B [**110**]{}, 227 (1982). C. R. Nappi and B. A. Ovrut, “Supersymmetric extension of the SU(3) $\times$ SU(2) $\times$ U(1) model,” Phys. Lett. B [**113**]{}, 175 (1982). L. Alvarez-Gaume, M. Claudson and M. B. Wise, “Low-energy supersymmetry,” Nucl. Phys. B [**207**]{}, 96 (1982). S. A. Abel, C. S. Chu, J. Jaeckel and V. V. Khoze, “SUSY breaking by a metastable ground state: Why the early universe preferred the non-supersymmetric vacuum,” arXiv:hep-th/0610334. N. J. Craig, P. J. Fox and J. G. Wacker, “Reheating metastable O’Raifeartaigh models,” arXiv:hep-th/0611006. W. Fischler, V. Kaplunovsky, C. Krishnan, L. Mannelli and M. Torres, “Meta-stable supersymmetry breaking in a cooling universe,” arXiv:hep-th/0611018. A. E. Nelson and N. Seiberg, “R symmetry breaking versus supersymmetry breaking,” Nucl. Phys. B [**416**]{}, 46 (1994) \[arXiv:hep-ph/9309299\]. M. J. Duncan and L. G. Jensen, “Exact tunneling solutions in scalar field theory,” Phys. Lett. B [**291**]{}, 109 (1992). S. R. Coleman, “The fate of the false vacuum. 1. semiclassical theory,” Phys. Rev. D [**15**]{}, 2929 (1977) \[Erratum-ibid. D [**16**]{}, 1248 (1977)\]. K. I. Izawa and T. Yanagida, “Strongly coupled gauge mediation,” Prog. Theor. Phys.  [**114**]{}, 433 (2005) \[arXiv:hep-ph/0501254\]. V. A. Novikov, M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “Instantons in supersymmetric theories,” Nucl. Phys. B [**223**]{}, 445 (1983). M. A. Shifman and A. I. Vainshtein, “Solution of the anomaly puzzle in SUSY gauge theories and the Wilson operator expansion,” Nucl. Phys. B [**277**]{}, 456 (1986) \[Sov. Phys. JETP [**64**]{}, 428 (1986 ZETFA,91,723-744.1986)\]. N. Seiberg, “Exact results on the space of vacua of four-dimensional susy gauge theories,” Phys. Rev. D [**49**]{}, 6857 (1994) \[arXiv:hep-th/9402044\]. T. Gherghetta and A. Pomarol, “Bulk fields and supersymmetry in a slice of AdS,” Nucl. Phys. B [**586**]{}, 141 (2000) \[arXiv:hep-ph/0003129\]. T. Gherghetta and A. Pomarol, “A warped supersymmetric standard model,” Nucl. Phys. B [**602**]{}, 3 (2001) \[arXiv:hep-ph/0012378\]. W. D. Goldberger, Y. Nomura and D. R. Smith, “Warped supersymmetric grand unification,” Phys. Rev. D [**67**]{}, 075021 (2003) \[arXiv:hep-ph/0209158\]. Y. Nomura, “Supersymmetric unification in warped space,” arXiv:hep-ph/0410348. J. Polchinski, “Dirichlet-Branes and Ramond-Ramond Charges,” Phys. Rev. Lett.  [**75**]{}, 4724 (1995) \[arXiv:hep-th/9510017\]. A. Hanany and E. Witten, “Type IIB superstrings, BPS monopoles, and three-dimensional gauge dynamics,” Nucl. Phys. B [**492**]{}, 152 (1997) \[arXiv:hep-th/9611230\]. S. Elitzur, A. Giveon and D. Kutasov, “Branes and ${\cal N} = 1$ duality in string theory,” Phys. Lett. B [**400**]{}, 269 (1997) \[arXiv:hep-th/9702014\]. A. Giveon and D. Kutasov, “Brane dynamics and gauge theory,” Rev. Mod. Phys.  [**71**]{}, 983 (1999) \[arXiv:hep-th/9802067\]. H. Ooguri and Y. Ookouchi, “Meta-stable supersymmetry breaking vacua on intersecting branes,” Phys. Lett. B [**641**]{}, 323 (2006) \[arXiv:hep-th/0607183\]. S. Franco, I. Garcia-Etxebarria and A. M. 
Uranga, “Non-supersymmetric meta-stable vacua from brane configurations,” arXiv:hep-th/0607218. I. Bena, E. Gorbatov, S. Hellerman, N. Seiberg and D. Shih, “A note on (meta)stable brane configurations in MQCD,” arXiv:hep-th/0608157. R. Argurio, M. Bertolini, S. Franco and S. Kachru, “Gauge/gravity duality and meta-stable dynamical supersymmetry breaking,” arXiv:hep-th/0610212. M. Aganagic, C. Beem, J. Seo and C. Vafa, “Geometrically induced metastability and holography,” arXiv:hep-th/0610249. H. Verlinde, “On metastable branes and a new type of magnetic monopole,” arXiv:hep-th/0611069. J. de Boer, K. Hori, H. Ooguri and Y. Oz, “Branes and dynamical supersymmetry breaking,” Nucl. Phys. B [**522**]{}, 20 (1998) \[arXiv:hep-th/9801060\]. C. Vafa, “Superstrings and topological strings at large N,” J. Math. Phys.  [**42**]{}, 2798 (2001) \[arXiv:hep-th/0008142\]. S. Kachru, J. Pearson and H. L. Verlinde, “Brane/flux annihilation and the string dual of a non-supersymmetric field theory,” JHEP [**0206**]{}, 021 (2002) \[arXiv:hep-th/0112197\]. J. L. F. Barbon, “Rotated branes and ${\cal N} = 1$ duality,” Phys. Lett. B [**402**]{}, 59 (1997) \[arXiv:hep-th/9703051\]. F. Cachazo, B. Fiol, K. A. Intriligator, S. Katz and C. Vafa, “A geometric unification of dualities,” Nucl. Phys. B [**628**]{}, 3 (2002) \[arXiv:hep-th/0110028\]. H. Ooguri and Y. Ookouchi, “Landscape of supersymmetry breaking vacua in geometrically realized gauge theories,” Nucl. Phys. B [**755**]{}, 239 (2006) \[arXiv:hep-th/0606061\]. M. Eto, K. Hashimoto and S. Terashima, “Solitons in supersymmety breaking meta-stable vacua,” arXiv:hep-th/0610042. [^1]: See also [@Banks:2006ma] for a related work. [^2]: Imaginary part of ${{\rm Tr}}\delta \hat \chi$ is a Goldstone boson associated with a broken U(1)$_B$ symmetry. [^3]: The standard model gauge group at low energy partly comes from SU($N_F - N_C$) when we embed the SU(3) $\times$ SU(2) $\times$ U(1) into SU($N_F - N_C$)$_F$. One-loop diagrams with the $\rho$ and $\tilde \rho$ fields, therefore, contribute to the gaugino masses also in this case, although they are not charged under SU($N_F- N_C$)$_F$. [^4]: In the previous sections, we consider the case when the gauge group is SU($N_C$). When the gauge group is U($N_C$), the “baryon” symmetry is gauged and one of the pseudo-moduli ${\rm Tr} \delta \hat \chi$ becomes massive at tree-level due to the additional D-term condition. Otherwise, there is no major difference in properties of meta-stable vacua. [^5]: See [@IAS] on issues that arise when one turns on finite string coupling in these brane configurations. These issues are not relevant to our discussion below since we mostly deal with tree-level properties of Type IIA superstring theory.
--- abstract: 'Gravitons in a squeezed vacuum state, the natural result of quantum creation in the early universe or by black holes, will introduce metric fluctuations. These metric fluctuations will introduce fluctuations of the lightcone. It is shown that when the various two-point functions of a quantized field are averaged over the metric fluctuations, the lightcone singularity disappears for distinct points. The metric averaged functions remain singular in the limit of coincident points. The metric averaged retarded Green’s function for a massless field becomes a Gaussian which is nonzero both inside and outside of the classical lightcone. This implies some photons propagate faster than the classical light speed, whereas others propagate slower. The possible effects of metric fluctuations upon one-loop quantum processes are discussed and illustrated by the calculation of the one-loop electron self-energy.' --- gr-qc/9410047\ TUTP-94-15\ October 1994 GRAVITONS AND LIGHTCONE FLUCTUATIONS L.H. Ford\ Institute of Cosmology\ Department of Physics and Astronomy\ Tufts University\ Medford, Massachusetts 02155 Introduction ============ It was conjectured several years ago by Pauli[@Pauli] that the ultraviolet divergences of quantum field theory might be removed in a theory in which gravity is quantized. The basis of Pauli’s conjecture was the observation that these divergences arise from the lightcone singularities of two-point functions, and that quantum fluctuations of the spacetime metric ought to smear out the lightcone, possibly removing these singularities. This conjecture was discussed further by Deser[@Deser], in the context of a path integral approach to the quantization of gravity, and by Isham, Salam, and Strathdee[@ISS]. However, there seems to have been little progress on this question in the intervening years. Indeed, it is well known that perturbative quantum gravity, far from being a universal regulator, is afflicted with nonrenormalizable infinities of its own. In the present work, the issue of lightcone fluctuations will be examined in a context where they are produced by gravitons propagating on a flat background. We assume that the gravitons are in a squeezed vacuum state, which is the appropriate state for relic gravitons created by quantum particle creation processes in the early universe[@Grishchuk] or by black hole evaporation. More generally, a squeezed vacuum state is the quantum state which arises in any quantum particle creation process in which the state of the created particles is an in-vacuum state represented in an out-Fock space. It will be shown that averaging over the metric fluctuations associated with such gravitons has the effect of smearing out the lightcone. It should be noted that the metric fluctuations being considered in this paper are distinct from those due to fluctuations in the energy-momentum tensor of the source[@F82; @Kuo]. It is possible for the energy density, for example, to exhibit large fluctuations. This arises in the Casimir effect and in quantum states in which the expectation value of the energy density is negative. This means that the gravitational field of such a system is not described by a fixed classical metric, but rather by a fluctuating metric. However, these metric fluctuations are “passive” in the sense that they are driven by fluctuations in the degrees of freedom of the matter field.
In contrast, the metric fluctuations due to gravitons in a squeezed state are “active” fluctuations produced by quantized degrees of freedom of the gravitational field itself. In Section \[sec:aveGF\], the retarded, Hadamard, and Feynman functions will be averaged over metric fluctuations. The resulting smearing of the lightcone is also discussed. The average of the square of the Feynman propagator for a scalar field is also calculated. The results are given in terms of the mean square of the squared geodesic separation between points. In Section \[sec:form\], this quantity is calculated explicitly for various cases. In this section, gravitons in an expanding universe are also discussed, and some estimates for the present background of relic gravitons are given. The one-loop electron self-energy in the presence of metric fluctuations is calculated and discussed in Section \[sec:oneloop\]. The results of the paper are summarized and discussed in Section \[sec:summary\]. Averaging Two-Point Functions over Metric Fluctuations {#sec:aveGF} ====================================================== The Retarded Green’s Function ----------------------------- Let us consider a flat background spacetime with a linearized perturbation $h_{\mu\nu}$ propagating upon it. Thus the spacetime metric may be written as $$ds^2 = g_{\mu\nu}dx^\mu dx^\nu = (\eta_{\mu\nu} +h_{\mu\nu})dx^\mu dx^\nu = dt^2 -d{\bf x}^2 + h_{\mu\nu}dx^\mu dx^\nu \, . \label{eq:metric}$$ In the unperturbed spacetime, the square of the geodesic separation of points $x$ and $x'$ is $2\sigma_0 =(x-x')^2 = (t-t')^2 -({\bf x}-{\bf x}')^2$. In the presence of the perturbation, let this squared separation be $2\sigma$, and write $$\sigma= \sigma_0 + \sigma_1 + O(h_{\mu\nu}^2),$$ so $\sigma_1$ is the shift in $\sigma$ to first order in $h_{\mu\nu}$. Let us consider the retarded Green’s function for a massless scalar field. In flat spacetime, this function is $$G_{ret}^{(0)}(x-x') = {{\theta(t-t')}\over {4\pi}} \delta(\sigma_0)\, ,$$ which has a delta-function singularity on the future lightcone and is zero elsewhere. In the presence of a classical metric perturbation, the retarded Green’s function has its delta-function singularity on the perturbed lightcone, where $\sigma=0$. In general, it may also become nonzero on the interior of the lightcone due to backscattering off of the curvature. However, we are primarily interested in the behavior near the new lightcone, and so let us replace $G_{ret}^{(0)}(x-x')$ by $$G_{ret}(x,x') = {{\theta(t-t')}\over {4\pi}} \delta(\sigma)\, . \label{eq:gret0}$$ We are assuming that the curved space Green’s functions have the Hadamard form, in which case their leading asymptotic behavior near the lightcone is the same as in flat space [@Fulling]. One may regard this assumption as a restriction on the physically allowable quantum states. If we terminate the expansion of $\sigma$ at first order (higher orders will be discussed below), then Eq. (\[eq:gret0\]) may be expressed as $$G_{ret}(x,x') = {{\theta(t-t')}\over {8\pi^2}} \int_{-\infty}^{\infty} d\alpha\, e^{i\alpha \sigma_0}\, e^{i\alpha \sigma_1}\, . \label{eq:gretrep}$$ We now replace the classical metric perturbations by gravitons in a squeezed vacuum state $|\psi\rangle$. Then $\sigma_1$ becomes a quantum operator which is linear in the graviton field operator, $h_{\mu\nu}$. A squeezed vacuum state is a state such that $\sigma_1$ may be decomposed into positive and negative frequency parts.
Thus we may find $\sigma_1^{+}$ and $\sigma_1^{-}$ so that $$\sigma_1^{+} |\psi\rangle =0, \qquad \langle \psi| \sigma_1^{-}=0\, , \label{eq:posfreq}$$ where $\sigma_1 = \sigma_1^{+} + \sigma_1^{-}$. In terms of annihilation and creation operators, $\sigma_1^{+} = \sum_j a_j f_j$ and $\sigma_1^{-} = \sum_j a^\dagger_j f^*_j$, where the $f_j$ are mode functions. We now write $$e^{i\alpha\sigma_1} = e^{i\alpha(\sigma_1^{+} + \sigma_1^{-})} = e^{i\alpha\sigma_1^{-}} e^{-{1\over 2}\alpha^2 [\sigma_1^{+}, \sigma_1^{-}]} e^{i\alpha\sigma_1^{+}}\,. \label{eq:expop}$$ In the second step we used the Campbell-Baker-Hausdorff formula, that $e^{A+B} = e^A e^{\frac{1}{2}[A,B]} e^B$ for any pair of operators $A$ and $B$ that each commute with their commutator, $[A,B]$. We now take the expectation value of this expression and use the facts that $e^{i\alpha\sigma_1^{+}}|\psi\rangle = |\psi\rangle$ and $\langle \psi|e^{i\alpha\sigma_1^{-}} = \langle \psi|$, which follow immediately from Eq. ( \[eq:posfreq\]) if the exponentials are expanded in a power series. Finally, we use $[\sigma_1^{+}, \sigma_1^{-}] =\sum_j f_j f^*_j = \langle {\sigma_1}^2 \rangle$ to write $$\Bigl\langle e^{i\alpha \sigma_1} \Bigr\rangle = e^{-{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle} \, . \label{eq:expav}$$ Thus when we average over the metric fluctuations, the retarded Green’s function is replaced by its quantum expectation value: $$\Bigl\langle G_{ret}(x,x') \Bigr\rangle = {{\theta(t-t')}\over {8\pi^2}} \int_{-\infty}^{\infty} d\alpha \,e^{i\alpha \sigma_0} \, e^{-{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle} \, .$$ The expectation value of $\sigma_1^2$ is formally divergent. However, in flat spacetime this divergence may be removed by subtraction of the expectation value in the Minkowski vacuum state. Henceforth, we will take $\langle \sigma_1^2 \rangle$ to denote this renormalized expectation value. The above integral converges only if $\langle \sigma_1^2 \rangle > 0$, in which case it may be evaluated to yield $$\Bigl\langle G_{ret}(x,x') \Bigr\rangle = {{\theta(t-t')}\over {8\pi^2}} \sqrt{\pi \over {2\langle \sigma_1^2 \rangle}} \; \exp\Bigl(-{{\sigma_0^2}\over {2\langle \sigma_1^2 \rangle}}\Bigr)\, . \label{eq:retav}$$ Note that this averaged Green’s function is indeed finite at $\sigma_0 =0$ provided that $\langle \sigma_1^2 \rangle \not= 0$. Thus the lightcone singularity has been smeared out. Note that the smearing occurs in both the timelike and spacelike directions. This smearing may be interpreted as due to the fact that photons may be either slowed down or boosted by the metric fluctuations. Photon propagation now becomes a statistical phenomenon; some photons travel slower than light on the classical spacetime, whereas others travel faster. We have now the possibility of “faster than light” signals. This need not cause any causal paradoxes, however, because the system is no longer Lorentz invariant. The graviton state defines a preferred frame of reference. The usual argument linking superluminal signals with causality violation assumes Lorentz invariance [@Pirani]. The effects of lightcone fluctuations upon photon propagation are in principle observable. Consider a source which emits evenly spaced pulses. An observer at a distance $D$ from the source will detect pulses whose spacing varies by an amount of the order of $\Delta t$. 
For a pulse which is delayed by time $\Delta t$, $$\sigma = {1\over 2}[(D +\Delta t)^2 -D^2] \approx D \Delta t \, , \qquad \Delta t \ll D \, .$$ Thus the typical time delay or advance is of the order of $$\Delta t \approx {{\sqrt{\langle \sigma_1^2 \rangle}}\over D} \, . \label{eq:delt}$$ This effect leads to the broadening of spectral lines. The observer will detect a line which is broadened in wavelength by $\Delta \lambda =\Delta t$. Some observational aspects of this effect will be discussed in more detail in Section \[sec:form\]. Note that it is essential that the gravitons be in a nonclassical state, such as a squeezed vacuum, in order to obtain lightcone smearing. Gravitons in a coherent state will represent a classical gravity wave. In this case, the retarded Green’s function will still have a delta function singularity in the lightcone of the perturbed spacetime. In the above calculation of $\bigl\langle G_{ret}(x,x') \bigr\rangle$, the expansion of $\sigma$ was truncated after the first order. However, it is of interest to consider the effect of second order terms. This is particularly pertinent in view of the fact that the crucial corrections involve $\langle \sigma_1^2 \rangle$, which is itself second order in $h_{\mu\nu}$ [@Boulware]. We now write $$\sigma= \sigma_0 + \sigma_1 + \sigma_2 + O(h_{\mu\nu}^3),$$ so that $\sigma_2$ is the second order correction. We now wish to include this correction in the calculation of $\bigl\langle G_{ret}(x,x') \bigr\rangle$. Let us first write $$\sigma_2 = :\sigma_2: + \langle\sigma_2\rangle \, ,$$ where the colons denote normal ordering with respect to the state $|\psi\rangle$, and the expectation value is understood to be in this state. Equation (\[eq:expop\]) is now replaced by $$e^{i\alpha(\sigma_1 +\sigma_2)} = e^{i\alpha\sigma_1^{-}} e^{-{1\over 2}\alpha^2 [\sigma_1^{+}, \sigma_1^{-}]} e^{i\alpha\sigma_1^{+}} e^{i\alpha\langle\sigma_2\rangle}\,e^{i\alpha:\sigma_2:} \,. \label{eq:expop2}$$ Here we have ignored all terms which are of third order or higher, including those which arise when $:\sigma_2:$ is commuted past $\sigma_1^{\pm}$. We use the fact that $$e^{i\alpha:\sigma_2:} |\psi \rangle = |\psi\rangle \,,$$ to write the analog of Eq. (\[eq:expav\]): $$\Bigl\langle e^{i\alpha (\sigma_1 +\sigma_2)} \Bigr\rangle = e^{i\alpha\langle\sigma_2\rangle -{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle} \, . \label{eq:expav2}$$ As in the case of $\langle \sigma_1^2 \rangle$, we assume that $\langle\sigma_2\rangle$ is a renormalized expectation value. Now the metric averaged Green’s function becomes $$\Bigl\langle G_{ret}(x,x') \Bigr\rangle = {{\theta(t-t')}\over {8\pi^2}} \sqrt{\pi \over {2\langle \sigma_1^2 \rangle}} \; \exp\Bigl(-{{\sigma_0^2 +\langle\sigma_2\rangle }\over {2\langle \sigma_1^2 \rangle}}\Bigr)\, . \label{eq:retav2}$$ Comparison with Eq. (\[eq:retav\]) reveals that the effect of retaining the $\sigma_2$ term is simply to shift slightly the position of the peak of the Gaussian. Thus $\langle\sigma_2\rangle$ enters in a different way from $\langle \sigma_1^2 \rangle$, due to the different powers of $\alpha$ in Eq. (\[eq:expav2\]). The same phenomenon would occur for the other functions to be discussed below, so henceforth the $\sigma_2$ terms will be ignored. It should be noted that although we are expanding $\sigma$ in powers of the metric perturbation $h_{\mu\nu}$, the averaging procedure used to obtain $\bigl\langle G_{ret}(x,x') \bigr\rangle$ retains terms of all orders in $h_{\mu\nu}$. 
This is essential in order to obtain nontrivial results. We can think of this as an expansion of the [*argument*]{} of the exponential functions in Eqs. (\[eq:gretrep\]) or (\[eq:expop2\]) but not of the functions themselves. This seems to be self-consistent in that retaining successively higher terms in $\sigma$ leads to small changes in the form of the results, as we saw in going from Eq. (\[eq:retav\]) to Eq. (\[eq:retav2\]). The Hadamard Function --------------------- In addition to the retarded and advanced Green’s functions discussed in the previous subsection, there are several other singular functions in quantum field theory which can be expressed as vacuum expectation values of products of field operators. In particular, the [*Hadamard function*]{} for a scalar field $\phi$ is defined as $$G_1 (x,x') \equiv \langle 0|\phi(x) \phi(x')+ \phi(x') \phi(x)|0\rangle,$$ where $|0\rangle$ is the vacuum state. In the massless case in flat spacetime, it has the explicit form: $$G_1(x,x') = -{1 \over {4\pi^2 \sigma}}. \label{eq:Had}$$ Recall that $\sigma$ is one-half of the square of the geodesic distance between $x$ and $x'$, and in flat spacetime, $\sigma = {1\over 2} (x-x')^2$. Even in the massive case, and/or in curved spacetime, Eq. (\[eq:Had\]) gives the asymptotic behavior of $G_1(x,x')$ near the lightcone. As in the case of the retarded Green’s function, we now wish to replace $\sigma$ by $\sigma_0 +\sigma_1$ and take the quantum expectation value of the result. Let us use the identities $$\int_0^\infty d\alpha \, e^{i\alpha x} = {i \over x} +\pi \delta(x), \label{eq:delta}$$ and $$\int_0^\infty d\alpha \, e^{-i\alpha x} = -{i \over x} +\pi \delta(x),$$ to write $${1 \over {\sigma_0 +\sigma_1}} = -{i \over 2} \int_0^\infty d\alpha \, \bigl[e^{i(\sigma_0 +\sigma_1)\alpha} -e^{-i(\sigma_0 +\sigma_1)\alpha}\bigr].$$ Now use Eq. (\[eq:expav\]) to take the expectation value of the above expression and write $$\Bigl\langle G_1 (x,x') \Bigr\rangle = -{1 \over {4\pi^2}} \Biggl\langle{1 \over {(\sigma_0 +\sigma_1)}}\Biggr\rangle =-{1 \over {4\pi^2}} \int_0^\infty d\alpha \, \sin \sigma_0\alpha \,\, e^{-{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle}. \label{eq:Hadav1}$$ This expression gives us the Hadamard function averaged over metric fluctuations for the case that $\langle \sigma_1^2 \rangle > 0$. Let us examine the asymptotic forms of this result. Near the classical lightcone, $\sigma_0 \rightarrow 0$. If we expand the integrand of the above expression to lowest order in $\sigma_0$, and perform the integration, we find that $$\Bigl\langle G_1 (x,x') \Bigr\rangle \sim -{{\sigma_0}\over {4\pi^2 \langle \sigma_1^2 \rangle}}, \qquad \sigma_0 \rightarrow 0.$$ Thus the lightcone singularity is removed so long as $\langle \sigma_1^2 \rangle \not= 0$, which will generally be the case for non-coincident points. Equation (\[eq:Hadav1\]) may be rewritten as $$\Bigl\langle G_1 (x,x') \Bigr\rangle = -{1 \over {4\pi^2 \sigma_0}} \Bigl[ 1 -{{\langle \sigma_1^2 \rangle}\over {\sigma_0^2}} \int_0^\infty dt\, t \, \cos t \,\, \exp\Bigl(-{{\langle \sigma_1^2 \rangle t^2}\over {2 \sigma_0^2}}\Bigr)\Bigr].$$ In the limit that $\sigma_0^2 \gg \langle \sigma_1^2 \rangle$, the second term above is negligible and we recover the classical form of $G_{1}$ : $$\Bigl\langle G_{1}(x,x') \Bigr\rangle \sim -{1 \over{4\pi^2 \sigma_0}}. \label{eq:classlim}$$ The above expression for $\Bigl\langle G_1 (x,x') \Bigr\rangle$ is valid for $\langle \sigma_1^2 \rangle > 0$. 
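These limiting forms are easy to verify numerically. The sketch below is our own cross-check (not part of the original calculation): it evaluates the $\alpha$ integral in Eq. (\[eq:Hadav1\]) by direct quadrature for a representative $\langle \sigma_1^2 \rangle > 0$, in arbitrary units, and compares it with the near-lightcone and classical limits quoted above.

```python
# Numerical cross-check (ours, arbitrary units) of the limits quoted for the
# metric-averaged Hadamard function, Eq. (eq:Hadav1), valid for <sigma_1^2> > 0.
import numpy as np

def G1_avg(sigma0, s, alpha_max=40.0, n=200_001):
    """-1/(4 pi^2) * int_0^inf dalpha sin(sigma0*alpha) exp(-alpha^2 s / 2)."""
    alpha = np.linspace(0.0, alpha_max / np.sqrt(s), n)
    f = np.sin(sigma0 * alpha) * np.exp(-0.5 * s * alpha**2)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(alpha))   # trapezoid rule
    return -integral / (4 * np.pi**2)

s = 1.0                                     # stands for <sigma_1^2>
for sigma0 in (1e-3, 1e-2, 10.0, 30.0):
    print(f"sigma0 = {sigma0:7.3f}:  integral = {G1_avg(sigma0, s):+.3e},  "
          f"near-lightcone limit = {-sigma0 / (4 * np.pi**2 * s):+.3e},  "
          f"classical limit = {-1.0 / (4 * np.pi**2 * sigma0):+.3e}")
```

For $\sigma_0 \to 0$ the quadrature reproduces $-\sigma_0/(4\pi^2 \langle \sigma_1^2 \rangle)$, while for $\sigma_0^2 \gg \langle \sigma_1^2 \rangle$ it approaches the classical form $-1/(4\pi^2 \sigma_0)$, as stated above.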
We can, however, obtain an alternative form valid for the case that $\langle \sigma_1^2 \rangle < 0$. To do so, we use the representation $${1 \over {\sigma_0 +\sigma_1}} = \int_0^\infty d\alpha \, e^{-(\sigma_0 +\sigma_1)\alpha}.$$ Now we have $$\Bigl\langle G_1 (x,x') \Bigr\rangle = -{1 \over {4\pi^2}} \Biggl\langle{1 \over {(\sigma_0 +\sigma_1)}}\Biggr\rangle =-{1 \over {4\pi^2}} \int_0^\infty d\alpha \, e^{-\sigma_0\alpha}\, e^{{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle}. \label{eq:Hadav2}$$ Near the lightcone, this quantity is finite: $$\Bigl\langle G_1 (x,x') \Bigr\rangle \rightarrow -{1 \over {4\pi^2}} \sqrt{\pi \over {2|\langle \sigma_1^2 \rangle|}}, \qquad \sigma_0 \rightarrow 0.$$ We may rewrite Eq. (\[eq:Hadav2\]) as $$\Bigl\langle G_1 (x,x') \Bigr\rangle = -{1 \over {4\pi^2 \sigma_0}} \Bigl[ 1 +{{\langle \sigma_1^2 \rangle}\over {\sigma_0^2}} \int_0^\infty dt\, t \, e^{-t} \, \exp\Bigl({{\langle \sigma_1^2 \rangle t^2}\over {2 \sigma_0^2}}\Bigr)\Bigr]. \label{eq:Hadav2b}$$ From this form, we again obtain Eq. (\[eq:classlim\]) when $\sigma_0^2 \gg |\langle \sigma_1^2 \rangle|$. Alternatively, Eq. (\[eq:Hadav2b\]) may be derived by expanding $(\sigma_0 +\sigma_1)^{-1}$ in a power series in $\sigma_1$, using Wick’s theorem to replace $\langle \sigma_1^{2n} \rangle$ by $(2n-1)!! \,\langle \sigma_1^{2} \rangle^n$, and finally resuming the result by Borel summation. The Feynman Propagator ---------------------- The average of the Feynman propagator, $G_F$, over the metric fluctuations can readily be obtained by combining the results of the previous two subsections. We use the identity $$G_F(x,x') = -{1\over 2}\bigl[G_{ret}(x,x') + G_{adv}(x,x')\bigr] - {i\over 2}G_1 (x,x'), \label{eq:GF}$$ and the fact that the advanced Green’s function is related to the retarded Green’s function by $$G_{adv}(x,x') = G_{ret}(x',x).$$ We restrict our attention to the case that $\langle \sigma_1^2 \rangle > 0$, both because it is only here that we have a formula for $\Bigl\langle G_{ret}\Bigr\rangle$, and it is the case of greater physical interest. Combining Eqs. (\[eq:retav\]) and (\[eq:Hadav1\]), we obtain $$\Bigl\langle G_{F}(x,x') \Bigr\rangle = - { 1 \over {16\pi^2}} \sqrt{\pi \over {2\langle \sigma_1^2 \rangle}} \; \exp\Bigl(-{{\sigma_0^2}\over {2\langle \sigma_1^2 \rangle}}\Bigr) +{i \over {8\pi^2}} \int_0^\infty d\alpha \, \sin \sigma_0\alpha \,\, e^{-{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle}\, . \label{eq:Feyav}$$ Again, this quantity is finite except in the coincidence limit, $x' \rightarrow x$. Alternately, we can write $$G_F(x,x') = {1\over {8\pi^2}}\biggl[ {i \over \sigma} - \pi \delta(\sigma) \biggr] = -{1\over {8\pi^2}}\int_0^\infty d\alpha \, e^{-i\alpha \sigma}\, . \label{eq:GFrep}$$ Averaging this integral form for $G_F$ over metric fluctuations yields $$\Bigl\langle G_{F}(x,x') \Bigr\rangle = -{1 \over {8\pi^2}} \int_0^\infty d\alpha \, e^{-i\sigma_0\alpha} \,\, e^{-{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle}\, . \label{eq:Feyav2}$$ This form is equivalent to Eq. (\[eq:Feyav\]). Note that whereas the real part of the above integral may be expressed in terms of elementary functions, the imaginary part may not. The Square of the Feynman Propagator {#sec:GF2} ------------------------------------ Earlier in this section, we obtained expressions for the various singular functions averaged over metric fluctuations. However, the Feynman diagrams for one-loop processes often involve products of at least two Feynman propagators. 
Thus, if we wish to study the effect of metric fluctuations upon these processes, we need an expression for quantities such as $\Bigl\langle G_{F}^2 \Bigr\rangle$, the average of the square of the Feynman propagator. We will again assume that $\langle \sigma_1^2 \rangle >0$. We may use Eq. (\[eq:GFrep\]) to write $$G_{F}^2 = {1 \over {(8\pi^2)}^2}\int_0^\infty d\alpha \, d\beta \, e^{-i(\alpha +\beta)\sigma}.$$ If we set $\sigma= \sigma_0 + \sigma_1$, and average over the metric fluctuations, the result is $$\Bigl\langle G_{F}^2 \Bigr\rangle = {1 \over {(8\pi^2)}^2} \int_0^\infty d\alpha \, d\beta \, e^{-i(\alpha +\beta)\sigma_0} e^{-{1\over 2}(\alpha+\beta)^2 \langle \sigma_1^2 \rangle} \, .$$ We next change the integration variables, first to polar coordinates defined by $\alpha = \rho \cos \theta$ and $\beta = \rho \sin \theta$, and then to a rescaled radial coordinate defined by $t =(\cos \theta + \sin \theta)\rho$: $$\begin{aligned} \Bigl\langle G_{F}^2 \Bigr\rangle &=& {1 \over {(8\pi^2)}^2} \int_0^{\pi \over 2} d\theta \int_0^\infty d\rho\, \rho\, e^{-i(\cos \theta + \sin \theta)\sigma_0 \rho} \exp[{-{1\over 2}(\cos \theta + \sin \theta)^2 \langle \sigma_1^2 \rangle\rho^2}] \nonumber \\ &=& {1 \over {(8\pi^2)}^2} \int_0^{\pi \over 2} {{d\theta}\over (\cos \theta + \sin \theta)^2} \int_0^\infty dt\, t\, e^{-i\sigma_0 t} e^{-{1\over 2}\langle \sigma_1^2 \rangle t^2} \, .\end{aligned}$$ We now use the identity $$\int_0^{\pi \over 2} {{d\theta}\over (\cos \theta + \sin \theta)^2} =1\, ,$$ to write our result (for later use, it is convenient to relabel the integration variable): $$\Bigl\langle G_{F}^2 \Bigr\rangle = {1 \over {(8\pi^2)}^2} \int_0^\infty d\alpha\, \alpha\, e^{-i\sigma_0 \alpha} e^{-{1\over 2}\langle \sigma_1^2 \rangle \alpha^2} \, . \label{eq:GFsqav}$$ In this case, the imaginary part of the integral can be expressed in terms of elementary functions to write $$\Bigl\langle G_{F}^2 \Bigr\rangle = {1 \over {64\pi^4}}\int_0^\infty d\alpha \,\alpha\, \cos\,\sigma_0\alpha \,\, e^{-{1\over 2}\alpha^2 \langle \sigma_1^2 \rangle} -i{{\sqrt{2\pi}\sigma_0}\over {128\pi^4\langle \sigma_1^2 \rangle^{3\over 2}}}\: \exp\Bigl(-{{\sigma_0^2}\over {2\langle \sigma_1^2 \rangle}}\Bigr)\, . \label{eq:GFsqav2}$$ Again, these forms hold for the case that $\langle \sigma_1^2 \rangle >0$. As in the case of the averaged Green’s functions, this quantity is finite on the classical lightcone, $\sigma_0 =0$, so long as the points are not actually coincident, so that $\langle \sigma_1^2 \rangle \not= 0$. Gravitons and the Form of $\langle \sigma_1^2 \rangle$ {#sec:form} ======================================================= Gravitons in Flat Spacetime --------------------------- So far, we have only assumed that the quantum state $|\psi\rangle$ of the gravitons is such that $\sigma_1$ can be decomposed into positive and negative parts which satisfy Eq. (\[eq:posfreq\]). However, we must have more information about the state before we can determine the explicit form of $\langle \sigma_1^2 \rangle$. Even the calculation of $\sigma_1$ for a given classical metric perturbation can be a difficult task, involving the integration of the square root of Eq. (\[eq:metric\]) along a geodesic. However, as we are interested in gravitational wave perturbations, we may simplify the analysis by the adoption of the transverse-tracefree gauge, which is specified by the conditions $$h^j_j = \partial_j h^{ij} = h^{0\nu} = 0\, .
\label{eq:TT}$$ In particular, $h_{\mu\nu}$ has purely spatial components, $h_{ij}$, in a chosen coordinate system. Thus, in this gauge, a null geodesic is specified by $$dt^2 = d{\bf x}^2 - h_{ij}dx^i dx^j \, ,$$ and along a future-directed null geodesic, one has $$dt = \sqrt{1 - h_{ij}n^i n^j }\, dr \approx \left(1 - {1\over 2} h_{ij} n^i n^j \right)\,dr \, .$$ Here $dr = |d{\bf x}|$, and $n^i ={{dx^i}/{dr}}$ is the unit three-vector defining the spatial direction of the geodesic. Thus the time interval $\Delta t$ and spatial interval $\Delta r = r_1 -r_0$ traversed by a null ray are related by $$\Delta t = \Delta r - {1\over 2}\int_{r_0}^{r_1} h_{ij} n^i n^j \,dr\,.$$ Denote the right-hand side of the above expression by $\Delta \ell$, the proper spatial distance interval between the endpoints. Now consider an arbitrary pair of points (not necessarily null separated). The square of the geodesic separation between these points is $$2\sigma = (\Delta t)^2 - (\Delta \ell)^2 \approx (\Delta t)^2 - (\Delta r)^2 + \Delta r \int_{r_0}^{r_1} h_{ij} n^i n^j \,dr\,,$$ so $$\sigma_1 = {1\over 2}\Delta r \int_{r_0}^{r_1} h_{ij} n^i n^j\,dr\,.$$ If we now treat $h_{ij}$ as a quantized metric perturbation, we obtain a formula for $\langle \sigma_1^2 \rangle$ : $$\langle \sigma_1^2 \rangle = {1\over 4}(\Delta r)^2 \int_{r_0}^{r_1} dr \int_{r_0}^{r_1} dr' \:\, n^i n^j n^k n^m \:\, \langle h_{ij}(x) h_{km}(x') \rangle \,.$$ Here the graviton two-point function, $\langle h_{ij}(x) h_{km}(x') \rangle$, is understood to be renormalized, so that it is finite when $x=x'$ and vanishes when the quantum state of the gravitons is the vacuum state. Of particular interest is the case where only modes with wavelengths long compared to $\Delta r$ are excited, so the two-point function is approximately constant in both variables. Then $$\langle \sigma_1^2 \rangle \approx {1\over 4}\langle h_{ij} h_{km}\, \rangle \Delta x^i\Delta x^j \Delta x^k\Delta x^m \, ,$$ where $\Delta x^i = ({{dx^i}/{dr}})\, \Delta r$ is the spatial coordinate separation of the endpoints. In this frame of reference, $\langle \sigma_1^2 \rangle$ will depend only upon $\Delta x^i$. We may illustrate the calculation of $\langle \sigma_1^2 \rangle$ more explicitly. The field operator $h_{\mu\nu}$ may be expanded in terms of plane waves as $$h_{\mu\nu} = \sum_{{\bf k},\lambda}\, [a_{{\bf k}, \lambda} e_{\mu\nu} ({{\bf k}, \lambda}) f_{\bf k} + H.c. ],$$ where H.c. denotes the Hermitian conjugate, $\lambda$ labels the polarization states, $f_{\bf k} = (2\omega V)^{-{1\over 2}} e^{i({\bf k \cdot x} -\omega t)}$ is a box normalized mode function, and the $e_{\mu\nu} ({{\bf k}, \lambda})$ are polarization tensors. (Here units in which $32\pi G =1$, where $G$ is Newton’s constant are used.) Let us consider the particular case of gravitons in a squeezed vacuum state of a single linearly polarized plane wave mode. Let the mode have frequency $\omega$ and be propagating in the $+z$ direction. Take the polarization tensor to have the nonzero components $e_{xx}= -e_{yy}= {1/{\sqrt{2}}}$. This is the “$+$” polarization in the notation of Ref. [@MTW]. 
Then we have that $$\langle \sigma_1^2 \rangle = {{[(\Delta x)^2 -(\Delta y)^2]^2}\over {16\omega V}}\,\, {\rm Re} \bigl[\langle a^\dagger a \rangle + \langle a^2 \rangle e^{2i\omega(z-t)}\bigr].$$ A squeezed vacuum state for a single mode can be defined by [@Caves81] $$|\zeta\rangle=S(\zeta)\,|0\rangle,$$ where $S(\zeta)$ is the squeeze operator defined by $$S(\zeta) = \exp[{1\over 2}\zeta^\ast a^2 -{1\over 2}\zeta ({a^\dagger})^2].$$ Here $$\zeta = re^{i\delta}$$ is an arbitrary complex number. The squeeze operator has the properties that $$S^{\dagger}(\zeta)\,a\,S(\zeta)= a\,\cosh r-a^{\dagger}e^{i\delta}\sinh r,$$ and $$S^{\dagger}(\zeta)\,a^{\dagger}\,S(\zeta)= a^{\dagger}\,\cosh r-ae^{-i\delta}\sinh r.$$ From these properties, we may show that $$\langle a^\dagger a \rangle = \sinh^2 r \, ,$$ and $$\langle a^2 \rangle = -e^{i\delta} \sinh r \cosh r \, .$$ Hence in this example $$\langle \sigma_1^2 \rangle = {{\left[(\Delta x)^2 -(\Delta y)^2\right]^2}\over {16\omega V}} \sinh r \Bigl\{\sinh r - \cosh r \cos\left[2\omega(z-t) +\delta\right]\Bigr\}.$$ Here $\langle \sigma_1^2 \rangle$ will be positive in some regions and negative in others. Of particular interest to us will be the case of an isotropic bath of gravitons. Here rotational symmetry and the tracelessness condition imply that $$\langle h_{ij}h_{kl} \rangle = A\bigl(\delta_{ij}\delta_{kl} - {3\over 2}\delta_{ik}\delta_{jl} -{3\over 2}\delta_{il}\delta_{jk} \bigr),$$ where $A = -{1\over {15}}\langle h_{ij}h^{ij} \rangle$. In this case, we have that $$\langle \sigma_1^2 \rangle = h^2 r^4, \label{eq:r4}$$ where $r=|\Delta {\bf x}|$ is the magnitude of the spatial separation, and $$h^2 = -{1\over 2}A = {1\over {30}}\langle h_{ij}h^{ij} \rangle$$ is a measure of the mean squared metric fluctuations. In some cases, the gravitons may be regarded as being in a thermal state. Although a thermal state is a mixed state rather than a pure quantum state, quantum particle creation processes often give rise to a thermal spectrum of particles. In the case of gravitons created by the Hawking effect, this correspondence is exact. In the case of cosmological particle production, it is possible to obtain an approximately thermal spectrum in some cases [@Parker]. We may find $h^2$ for a thermal bath of gravitons by noting that here, due to the two polarization states for gravitons, $\langle h_{ij}h^{ij} \rangle = 2\langle \varphi^2 \rangle$, where $\varphi$ is a massless scalar field. In a thermal state at temperature $T$, it is well known that $\langle \varphi^2 \rangle = {{T^2} \over {12}}$. Thus, for a thermal bath of gravitons at temperature $T$, $$\langle \sigma_1^2 \rangle = {1\over {180}}T^2 r^4 \,. \label{eq:thermal}$$ Note that in this case, $\langle \sigma_1^2 \rangle >0$, whereas more generally it may have either sign. Recall that the forms of the averaged Green’s functions obtained in Section \[sec:aveGF\] depend upon the sign of $\langle \sigma_1^2 \rangle$, and it is only for the case $\langle \sigma_1^2 \rangle >0$ that expressions were found for $\Bigl\langle G_{ret}\Bigr\rangle$ and $\Bigl\langle G_{F}\Bigr\rangle$. Gravitons in an Expanding Universe {#sec:Grav} ---------------------------------- For the most part in this paper, we are concerned with gravitons and lightcone fluctuations on a background of flat spacetime. However, relict gravitons from the early universe are one of the more likely sources of metric fluctuations.
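The squeezed-state expectation values quoted above are easy to confirm numerically. The following sketch (ours, not part of the paper; the truncation $N$ and the values of $r$ and $\delta$ are arbitrary) builds the squeeze operator in a truncated Fock basis and reproduces $\langle a^\dagger a\rangle=\sinh^2 r$ and $\langle a^2\rangle=-e^{i\delta}\sinh r\cosh r$.

```python
# Illustrative check (not from the paper): squeezed-vacuum expectation values
# |zeta> = S(zeta)|0>, with S(zeta) = exp[(1/2) zeta* a^2 - (1/2) zeta a^dag^2].
import numpy as np
from scipy.linalg import expm

N = 60                                    # Fock-space truncation (demo value)
r, delta = 0.5, 0.7
zeta = r * np.exp(1j * delta)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator
adag = a.conj().T
S = expm(0.5 * np.conj(zeta) * a @ a - 0.5 * zeta * adag @ adag)

vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
psi = S @ vac                                   # squeezed vacuum state

n_expect = psi.conj() @ (adag @ a) @ psi
a2_expect = psi.conj() @ (a @ a) @ psi
print(n_expect.real, np.sinh(r)**2)                               # should agree
print(a2_expect, -np.exp(1j * delta) * np.sinh(r) * np.cosh(r))   # should agree
```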
Thus we need to discuss gravitons on a cosmological background, which we will take to be a spatially flat Robertson-Walker universe. The metric can be written as $$ds^2 = dt^2 -a^2(t) d{\bf x}^2 \, ,$$ where $a(t)$ is the scale factor. Linearized perturbations of this metric were investigated by Lifshitz [@Lifshitz], who showed that it is still possible to impose the transverse-tracefree gauge conditions, Eq. (\[eq:TT\]). The non-zero components of the perturbation satisfy $$a^{-3} {\partial \over {\partial t}} \Bigl(a^{3} {{\partial h^i_j} \over {\partial t}} \Bigl) - a^{-2} \nabla^2 h^i_j =0 \, . \label{eq:perteq}$$ However, this is just the equation satisfied by a minimally coupled scalar field in this background, $$\Box \varphi = 0 \,.$$ Thus the graviton field may be treated as a pair (one for each polarization) of massless, minimally coupled scalar fields. The quantization of cosmological metric perturbations in this framework was discussed in Ref. [@FP]. Consider a power law expansion, for which $$a(t) = c t^\alpha \, .$$ In this case, the solutions of Eq. (\[eq:perteq\]) are of the form $\psi_k \, e^{i{\bf k \cdot x}}$, where $$\psi_k = \eta^{1\over {2b}} \bigl[c_1 H^{(1)}_\nu (k\eta) + c_2 H^{(2)}_\nu (k\eta) \bigr] \, .$$ Here $b= (\alpha -1)(3\alpha -1)^{-1}$ and $\nu = (2|b|)^{-1}$. Furthermore, $c_1$ and $c_2$ are arbitrary constants, and $\eta$ is the conformal time given by $$\eta = \int a^{-1} dt = [c(1-\alpha)]^{-1} t^{1-\alpha} \, .$$ We are interested in the late time behavior of these solutions, which will indicate how quantities such as $\langle \sigma_1^2 \rangle$ or $h^2$ scale with the expansion of the universe. As $t \rightarrow \infty$, $\eta \rightarrow \infty$ if $\alpha < 1$, and $\eta \rightarrow 0$ if $\alpha > 1$. In the former case, we use the large argument limit of the Hankel functions: $$|H^{(1)}_\nu (k\eta)| \sim |H^{(2)}_\nu (k\eta)| \sim \sqrt{{2 \over {\pi|k\eta|}}} \,,$$ as $|\eta| \rightarrow \infty$ for fixed $k$. In the latter case, we use the small argument limit: $$|H^{(1)}_\nu (k\eta)| \sim |H^{(2)}_\nu (k\eta)| \sim {{\Gamma(\nu)}\over \pi} ({1\over 2}|k\eta|)^{-\nu} \,, \quad |k\eta| \rightarrow 0\,.$$ From these forms, we find that $|\psi_k| \sim a^{-1}$ if $\alpha <1$, and $|\psi_k| \sim {\rm const.}$ if $\alpha >1$. Thus as $t \rightarrow \infty$, $$\begin{aligned} h^2 \sim {1 \over {a^2}}\, , \qquad \alpha <1 \, , \label{eq:h1} \\ h^2 \rightarrow {\rm constant} \, , \qquad \alpha >1 \, . \label{eq:h2}\end{aligned}$$ Now let us make some estimates of the magnitude of $h^2$ due to a background of relict cosmological gravitons. The creation of gravitons in an expanding universe is a topic upon which there is a vast literature [@gravrefs]. Let us consider a model in which gravitons are created at the end of an inflationary epoch. This type of model was discussed in Ref. [@F87], where it was argued that the typical energy density of gravitons present just after inflation will be of the order of the energy density associated with the Gibbons-Hawking temperature of the deSitter phase. Let $\rho_V$ be the vacuum energy density during inflation and $\rho_P$ be the Planck density. 
Then the energy density of the created gravitons at the end of inflation will be of the order of $$\rho_i \approx {{\rho_V^2}\over {\rho_P}}.$$ This energy density will subsequently be redshifted by the expansion of the universe to an energy density at the present time of the order of $$\rho \approx \rho_i \Bigl({{3K}\over {T_R}}\Bigr)^4 \approx {{\rho_V^2}\over {\rho_P}}\Bigl({{3K}\over {T_R}}\Bigr)^4 \, ,$$ where $T_R$ is the temperature of reheating after inflation. Here we are assuming that the subsequent expansion rate of the universe corresponds to $\alpha <1$, so that the gravitons redshift as ordinary massless particles. The typical wavelength of the gravitons at the time of creation is $$\lambda_i \approx (\rho_i)^{-{1\over 4}}\, ,$$ and will be redshifted at the present time to a wavelength of the order of $$\lambda \approx \lambda_i \Bigl({{T_R}\over {3K}}\Bigr)\,.$$ The corresponding mean squared metric fluctuation will be of the order of $$h^2 \approx \rho \lambda^2 \, .$$ If, for example, inflation were to occur at an energy scale of $10^{15} {\rm GeV}$, and the reheating occurs at the same energy scale, this model would predict a present-day mean graviton wavelength of the order of $\lambda \approx 10^4 {\rm cm}$ and $h \approx 10^{-36}$. For most purposes, the effects of these gravitons will be completely negligible. For example, the lightcone fluctuations will produce a spread in arrival times of pulses, from Eq. (\[eq:delt\]), of the order of $\Delta t \approx 10^{-36} D$, where $D \leq 10^4 {\rm cm}$. This is a time spread of no more than one Planck time and is hence unobservably small. The best hope for observing the effects of the lightcone fluctuations seems to be through their indirect influence upon virtual processes, which will be the topic of the next section. One-Loop Processes: The Electron Self-Energy {#sec:oneloop} ============================================ In this section, we wish to explore the extent to which quantum metric fluctuations can act as a regulator of the ultraviolet divergences of quantum field theory. These divergences typically appear in one-loop processes, which represent the lowest order quantum corrections to the classical theory. We will focus our attention upon the one-loop electron self-energy. The self-energy function, $\Sigma(p)$, is formally given by the divergent momentum space integral: $$\Sigma(p) = ie^2 \int {{d^4k}\over {(2\pi)^4}} D^{\mu\nu}_F (k) \gamma_\mu S_F(p-k) \gamma_\nu.$$ Here $D^{\mu\nu}_F (k)$ and $S_F(p-k)$ are the momentum space photon and electron propagators, respectively, and the $\gamma_\mu$ are Dirac matrices. This integral is logarithmically divergent for large $k$. In the conventional approach to field theory, this divergence is absorbed by mass renormalization. Here we wish to investigate the effects of introducing metric fluctuations. First, let us rewrite the expression for $\Sigma$ as a coordinate space integral by use of the following relations between momentum space and coordinate space propagators: $$D^{\mu\nu}_F (k) = -\int d^4x\, e^{ikx}\,D^{\mu\nu}_F (x)\, ,$$ and $$S_F(k) = \int d^4x\, e^{ikx}\,S_F(x) \, .$$ The electron propagator, $S_F(x)$, is expressible in terms of the scalar propagator by the relation $$S_F(x) = -(i\gamma^\mu \nabla_\mu + m_0) G_F(x). \label{eq:SFrep}$$ Here $m_0$ might be interpreted as a bare mass. If we adopt the Feynman gauge, the photon propagator becomes $$D^{\mu\nu}_F (x) = -g^{\mu\nu} G_F(x). \label{eq:DFrep}$$ Note that the scalar propagator, $G_F(x)$, in Eq.
(\[eq:SFrep\]) is that for a massive field, whereas Eq. (\[eq:DFrep\]) is that for a massless field. However, we are interested in the behavior near the classical lightcone, and so ignore the mass-dependence of the former. Recall that $\Sigma$ is a $4\times 4$ matrix. The mass shift can be expressed as $$\delta m = {1\over 4} {\rm Re} \bigl[Tr \Sigma(0)\bigr].$$ If we combine the above relations and use the fact that $Tr(\gamma^\mu)=0$, this may be written as $$\delta m = m_0 e^2 \,{\rm Im} \int d^4x\, G^2_F(x).$$ This relation has been obtained assuming a fixed, flat background metric. However, we will assume that it also holds to leading order when we introduce small metric perturbations. Now we wish to average over metric fluctuations and write $$\Delta m = \langle \delta m \rangle = m_0 e^2 \,{\rm Im} \int d^4x\, \langle G^2_F(x) \rangle. \label{eq:Delm}$$ Use Eqs. (\[eq:GFsqav\]) and (\[eq:r4\]) to write $$\int d^4x\, \langle G^2_F(x) \rangle = {1 \over {(8\pi^2)}^2} \int_0^\infty d\alpha\, \alpha\, \int d^4x\, e^{-{1\over 2}i(t^2-r^2)\alpha} e^{-{1\over 2} h^2 r^4 \alpha^2} \, . \label{eq:int1}$$ If we ignore any space or time dependence in $h$, then this integral may be explicitly evaluated. This should be an excellent approximation, as $h$ is expected to vary on a cosmological time scale, whereas the dominant contributions to $\Delta m$ should come from scales of the order of or less than the electron Compton wavelength. If we deform the contour for the $\alpha$-integration into the lower half plane, then the $t$-integration becomes absolutely convergent, and we can write $$\int_{-\infty}^\infty dt \, e^{-{1\over 2}i t^2 \alpha} = \sqrt{\pi \over \alpha} e^{-{1\over 4}i \pi} \, .$$ If we perform the $t$ and angular integrations in Eq. (\[eq:int1\]), and then replace $\alpha$ by the variable $u = \alpha r^2$ , we find $$\int d^4x\, \langle G^2_F(x) \rangle = {\sqrt{\pi} \over {16 \pi^3}} e^{-{i\over 4}\pi} \int_0^\infty {{dr}\over r} \, \int_0^\infty du \,\sqrt{u}\, e^{{i\over 2}u}\, e^{-{1\over 2} h^2 u^2 } \, . \label{eq:int2}$$ The $r$-integration is logarithmically divergent at both limits. The infrared divergence at large $r$ is an artifact of our having neglected the electron mass in the electron propagator. The ultraviolet divergence at small $r$ is more serious, and reflects the failure of metric fluctuations to render quantum field theory fully finite. The basic problem is that although the lightcone singularity has been removed, quantities such as $\langle G_{F}^2 \rangle$ are still singular at coincident points. Nonetheless, it is still of some interest to determine the $h$-dependence of our expressions. The $u$-integration may be performed explicitly [@GR1] to yield $$\int d^4x\, \langle G^2_F(x) \rangle = {{e^{-{1\over 4}i \pi}} \over {32 \pi^2}}\, h^{-{3\over 2}} \, e^{-(16 h^2)^{-1}} \,D_{-{3\over 2}} \Bigl(-{i\over{2h}}\Bigr) \, \int_0^\infty {{dr}\over r} \,, \label{eq:int3}$$ where $D_p (z)$ is the parabolic cylinder function. If $h \ll 1$, we may use the large argument expansion [@GR2] of $D_p (z)$: $$D_p (z) \sim e^{-{1\over 4}{z^2}} \, z^p \, \biggl[ 1 - \frac{p(p-1)}{z^2} +\cdots \biggr]\,, \qquad |arg(z)| < \frac{3}{4} \, ,$$ to write $$D_{-{3\over 2}} \Bigl(-{i\over{2h}}\Bigl) \sim e^{{3\over 4}i \pi}\,\, h^{3\over 2}\, \, e^{16 h^2}\,\, (1+ 15h^2 +\cdots)\,, \qquad h \ll 1 \,.$$ If we now combine this result with Eqs. 
(\[eq:Delm\]) and (\[eq:int3\]), we finally obtain the formal expression for the mass shift to be $$\Delta m = \frac{m_0 e^2}{8\pi^2}\, (1+ 15h^2 +\cdots)\, \int_0^\infty {{dr}\over r} \, . \label{eq:Delm2}$$ This expression is divergent, and hence still needs to be carefully regularized and renormalized. Here we will simply observe that the dependence of $\Delta m$ upon $h$ seems to be rather weak. If one were to absorb the divergent integral into a redefinition of $m_0$, then the self-energy would seem to be time-dependent if $h$ decreases as the universe expands. However, this time-dependence would be extremely small at the present time. Even if one were to identify the renormalized one-loop self energy with the observed mass of the electron (There could be a piece of non-electromagnetic origin.), one would have a time-dependent electron mass with ${\dot m}/m = 30 h {\dot h}$. If $|{\dot h}/h| \approx 10^{-10}/{\rm yr}$, and $h$ is of the order of the estimate given in the last paragraph of Sec. \[sec:Grav\], then $|{\dot m}/m| \approx 10^{-80}/{\rm yr}$. This is well within the observational limits on the time-variation of the electron mass, which are of the order of [@SV] $$\biggl|{{\dot m} \over m}\biggr| \leq 10^{-13}/{\rm yr}.$$ Summary and Discussion {#sec:summary} ====================== We have seen that the introduction of metric fluctuations, such as those due to gravitons in a squeezed vacuum state, can modify the behavior of Green’s functions near the lightcone. For distinct but lightlike separated points, the usual singularity is removed. However, the singularity for coincident points remains. The smearing of the lightcone leads to the possibility of “faster-than-light light”, in the sense that some photons will traverse the interval between a source and a detector in less than the classical propagation time. The smearing of the lightcone is expected to modify virtual processes. This was explored through the calculation of the one-loop electron self-energy in the presence of metric fluctuations. The results were somewhat ambiguous, due to the presence of the remaining ultraviolet divergences. They can, however, be interpreted as supporting a very small time-dependent contribution to the mass of the electron in an expanding universe. Of course, the dominant source of metric fluctuations need not be relict gravitons. Any stochastic bath of gravitons will also contribute to $h$. It is possible that the majority of gravitons at the present time are those due to local sources (thermal processes, etc) rather than those of cosmological origin. It is also possible that passive metric fluctuations due to quantum fluctuations of the energy-momentum tensor of matter produce the dominant effect in smearing the lightcone. It would be of particular interest to find a one-loop process which is rendered finite by the effects of the metric fluctuations. Such a process would presumably lead to observable quantities whose values depend upon the graviton background. Thus theories in which gravitons regulate ultraviolet divergences can have the property that local observable quantities may be determined by the large scale structure or history of the universe. [**Acknowledgement:**]{} This work was supported in part by the National Science Foundation under Grant PHY-9208805. [–]{} W. Pauli, Helv. Phys. Acta. Suppl. [**4**]{}, 69 (1956). This reference consists of some remarks made by Pauli during the discussion of a talk by O. 
Klein at the 1955 conference in Bern, on the 50th anniversary of relativity theory. S. Deser, Rev. Mod Phys. [**29**]{}, 417 (1957). C.J. Isham, A. Salam, and J. Strathdee, Phys. Rev. D [**3**]{}, 1805 (1971); [**5**]{}, 2548 (1972). L.P. Grishchuk and Y.V. Sidorov, Phys. Rev. D [**42**]{}, 3413 (1990). L.H. Ford, Ann. Phys (NY) [**144**]{}, 238 (1982). C.-I Kuo and L.H. Ford, Phys. Rev. D [**47**]{}, 4510 (1993). See, for example, S.A. Fulling, [*Aspects of Quantum Field Theory in Curved Space-Time*]{}, Cambridge University Press, Cambridge, 1989, Chap. 9. F.A.E. Pirani, Phys. Rev. D [**1**]{}, 3224 (1970). The author would like to thank David Boulware for emphasizing this point to him. C.W. Misner, K. Thorne, and J.A. Wheeler, [*Gravitation*]{}, (W.H. Freeman, San Francisco, 1973), Sect. 35.6. C. M. Caves, Phys. Rev. D [**23**]{}, 1693 (1981). L. Parker, Nature [**261**]{}, 20 (1976). E.M. Lifshitz, Zh. Eksp. Teor. Phys. [**16**]{}, 587 (1946) \[English translation in J. Phys. USSR [**10**]{}, 116 (1946)\]. L.H. Ford and L. Parker, Phys. Rev. D [**16**]{}, 1601 (1977). A few of the references on this topic are the following: L. Grishchuk, Zh. Eksp. Teor. Fiz. [**67**]{}, 825 (1974) \[Sov. Phys. JETP [**40**]{}, 409 (1975)\]; L.F. Abbott and M.B. Wise, Nucl. Phys. [**B244**]{}, 541 (1984); B. Allen, Phys. Rev. D [**37**]{}, 2078 (1988); V. Sahni, Phys. Rev. D [**42**]{}, 453 (1990); M.R. de Garcia Maia, Phys. Rev. D [**48**]{}, 647 (1993); A.B. Henriques, Phys. Rev. D [**49**]{}, 1771 (1994). L.H. Ford, Phys. Rev. D [**35**]{}, 2955 (1987). I.S. Gradshteyn and I.M. Ryzhik, [*Table of Integrals, Series, and Products*]{}, (Academic Press, New York, 1980), p 337. , p 1065. P. Sisterna and H. Vucetich, Phys. Rev. D [**41**]{}, 1034 (1990); [**44**]{}, 3096 (1991).
--- abstract: '*A simple scheme of three rules supplemented by five steps is proposed to produce Kochen-Specker (KS) sets with 30 rank-2 projectors that occur twice each. The KS sets provide a state-independent proof of the KS theorem based on a system of three qubits. A small adjustment of the scheme enables us to manually generate a large number of KS sets with a mixture of rank-1 and rank-2 projectors.*' author: - | S.P. Toh [[^1]]{}\ *Faculty of Engineering, The University of Nottingham Malaysia Campus*\ *Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia* title: 'Kochen-Specker Sets with Thirty Rank-Two Projectors in Three-Qubit System' --- *Keywords*: Kochen-Specker theorem; Contextuality; Hidden variable; Three-qubit. Introduction {#Section1} ============ The Kochen-Specker (KS) theorem demonstrates the inconsistency between predictions of quantum mechanics (QM) and noncontextual hidden-variable (NCHV) theories. Contextuality is one of the classically unattainable features of QM. A context is a maximal set of mutually compatible observables. The results of measurements in QM do not reveal preexisting values; they depend on the context, that is, on the choice of other compatible measurements that are carried out previously or simultaneously. The simplest system that can be used to prove the KS theorem is a single qutrit. Since a qutrit does not involve nonlocality, this shows that the KS theorem is more general than the Bell theorem, which rules out local hidden-variable models of QM. The possibility of testing the KS theorem experimentally was once doubted due to the finiteness of measurement times and precision [@R1; @R2]. Cabello [@R3] and others [@R4] suggested how the KS theorem might be experimentally tested by deriving a set of noncontextual inequalities that are violated by QM for any quantum state but are satisfied by any NCHV theory. Recently, many experiments have demonstrated the violation of noncontextual inequalities, for example the experiments on a pair of trapped ions [@R5], neutrons [@R6], single photons [@R7], two photonic qubits [@R8] and nuclear spins [@R9]. The original proof of the KS theorem involves 117 directions in three-dimensional real Hilbert space [@R10]. Peres [@R11] found a simpler proof with 33 and 24 rays for three- and four-dimensional systems, respectively. Mermin [@R12] used an array of nine observables for two spin-$\frac{1}{2}$ particles to show quantum contextuality. Similar mathematical simplicity is also shown in the KS theorem proof for the three-qubit eight-dimensional system using ten observables [@R12]. Up to now the smallest numbers of rays required in the proof of the KS theorem are 31 [@R13], 18 [@R14] and 36 [@R15] in three-, four- and eight-dimensional systems, respectively. The KS sets used to prove the KS theorem were previously difficult to obtain. For example, there is only one KS set reported in [@R16] and [@R15] with 20 and 36 rays in four- and eight-dimensional real Hilbert spaces, respectively. Recently, with the aid of computers, the number of available KS sets has increased tremendously. For instance, the number of KS sets with 36 rays in the three-qubit system is 320 according to [@R17]. In this Letter, we adopt a set of simple rules supplemented by a few steps to construct KS sets that consist of 30 rank-2 projectors without relying on computer computation. In Sec. \[Sec2\] a brief introduction to the 25 bases formed by 40 rays of Kernaghan and Peres is given [@R15].
An example is given in Sec. \[Sec3\] to explicitly show the steps to obtain KS sets involving 30 rank-2 projectors from KS sets formed by 40 rank-1 projectors provided in [@R17]. We generalize the steps in Sec. \[Sec4\] and conclude in Sec. \[Sec5\]. Kochen-Specker sets with 15 bases formed by 40 rays {#Sec2} =================================================== For the sake of completeness, we furnish in this section some necessary basic facts prior to a detail discussion on the procedure of constructing rank-2 projectors (or plane) KS sets. Based on the Mermin pentagram that consists of five sets of four mutually commuting operators, Kernaghan and Peres [@R15] derived 40 rank-1 projectors (or rays) to form 25 bases, where each of the bases is a set of mutually orthogonal projectors that spans an eight-dimensional real Hilbert space. Table \[T1\] lists the 40 rank-1 projectors, $R_i$ with , and Table \[T2\] which is taken from [@R17] lists the 25 bases. The first five bases in Table \[T2\] are called pure bases ($PB_i$, ) [@R17] and their mixture give rise to remaining hybrid bases ($HB_i$, ). Each of the rank-1 projectors occurs once in $PB$ and four times in $HB$. --- ---------- ---- -------------------------- ---- -------------------------- ---- -------------------------- ---- ---------------------------------- 1 10000000 9 11110000 17 11001100 25 10101010 33 100101$\bar{1}$ 0 2 01000000 10 11$\bar{1}$$\bar{1}$0000 18 1100$\bar{1}$$\bar{1}$00 26 1010$\bar{1}$0$\bar{1}$0 34 100$\bar{1}$0110 3 00100000 11 1$\bar{1}$$1\bar{1}$0000 19 1$\bar{1}$001$\bar{1}$00 27 10$\bar{1}$010$\bar{1}$0 35 10010$\bar{1}$10 4 00010000 12 1$\bar{1}$$\bar{1}$10000 20 1$\bar{1}$00$\bar{1}$100 28 10$\bar{1}$0$\bar{1}$010 36 100$\bar{1}$0$\bar{1}$$\bar{1}$0 5 00001000 13 00001111 21 00110011 29 01010101 37 0110$\bar{1}$001 6 00000100 14 000011$\bar{1}$$\bar{1}$ 22 001100$\bar{1}$$\bar{1}$ 30 01010$\bar{1}$$0\bar{1}$ 38 01$\bar{1}$01001 7 00000010 15 00001$\bar{1}$1$\bar{1}$ 23 001$\bar{1}$001$\bar{1}$ 31 010$\bar{1}$010$\bar{1}$ 39 0$\bar{1}$101001 8 00000001 16 00001$\bar{1}$$\bar{1}$1 24 001$\bar{1}$00$\bar{1}$1 32 010$\bar{1}$0$\bar{1}$01 40 0$\bar{1}$$\bar{1}$0$\bar{1}$001 --- ---------- ---- -------------------------- ---- -------------------------- ---- -------------------------- ---- ---------------------------------- : The 40 rays derived by Kernaghan and Peres for KS proof in three-qubit system. The symbol $\bar{1}$ is used to denote $-1$. \[T1\] ---- ---- ---- ---- ---- ---- ---- ---- ---- 1 1 2 3 4 5 6 7 8 2 9 10 11 12 13 14 15 16 3 17 18 19 20 21 22 23 24 4 25 26 27 28 29 30 31 32 5 33 34 35 36 37 38 39 40 6 1 2 3 4 13 14 15 16 7 1 2 5 6 21 22 23 24 8 1 3 5 7 29 30 31 32 9 1 4 6 7 37 38 39 40 10 2 3 5 8 33 34 35 36 11 2 4 6 8 25 26 27 28 12 3 4 7 8 17 18 19 20 13 5 6 7 8 9 10 11 12 14 9 10 13 14 19 20 23 24 15 9 11 13 15 27 28 31 32 16 9 12 14 15 34 36 38 39 17 10 11 13 16 33 35 37 40 18 10 12 14 16 25 26 29 30 19 11 12 15 16 17 18 21 22 20 17 19 21 23 26 28 30 32 21 17 20 22 23 35 36 37 39 22 18 19 21 24 33 34 38 40 23 18 20 22 24 25 27 29 31 24 25 28 30 31 33 36 37 38 25 26 27 29 32 34 35 39 40 ---- ---- ---- ---- ---- ---- ---- ---- ---- : Bases formed by eight-dimensional rays listed in Table \[T1\]. \[T2\] As a result of computer search, Waegell and Aravind [@R17] found 64 KS sets that are composed of 40 rays and 15 bases. A manual construction of these 64 KS sets can be found in [@R18]. 
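The consistency of Tables \[T1\] and \[T2\] is easy to check by machine. The following short script is ours (written for this note rather than taken from the Letter): it encodes the 40 rays with '-' standing for $-1$, verifies that each of the 25 octads of Table \[T2\] is an orthogonal basis of $\mathbb{R}^8$, and confirms that every ray occurs once among the pure bases 1-5 and four times among the hybrid bases 6-25.

```python
# Verification script (ours) for Tables T1 and T2 of the Kernaghan-Peres rays.
import itertools
import numpy as np

RAYS = [
    "10000000", "01000000", "00100000", "00010000",
    "00001000", "00000100", "00000010", "00000001",
    "11110000", "11--0000", "1-1-0000", "1--10000",
    "00001111", "000011--", "00001-1-", "00001--1",
    "11001100", "1100--00", "1-001-00", "1-00-100",
    "00110011", "001100--", "001-001-", "001-00-1",
    "10101010", "1010-0-0", "10-010-0", "10-0-010",
    "01010101", "01010-0-", "010-010-", "010-0-01",
    "100101-0", "100-0110", "10010-10", "100-0--0",
    "0110-001", "01-01001", "0-101001", "0--0-001",
]                              # '-' stands for -1, as in Table T1
rays = {i + 1: np.array([{"0": 0, "1": 1, "-": -1}[c] for c in s])
        for i, s in enumerate(RAYS)}

BASES = [
    [1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16],
    [17, 18, 19, 20, 21, 22, 23, 24], [25, 26, 27, 28, 29, 30, 31, 32],
    [33, 34, 35, 36, 37, 38, 39, 40],
    [1, 2, 3, 4, 13, 14, 15, 16], [1, 2, 5, 6, 21, 22, 23, 24],
    [1, 3, 5, 7, 29, 30, 31, 32], [1, 4, 6, 7, 37, 38, 39, 40],
    [2, 3, 5, 8, 33, 34, 35, 36], [2, 4, 6, 8, 25, 26, 27, 28],
    [3, 4, 7, 8, 17, 18, 19, 20], [5, 6, 7, 8, 9, 10, 11, 12],
    [9, 10, 13, 14, 19, 20, 23, 24], [9, 11, 13, 15, 27, 28, 31, 32],
    [9, 12, 14, 15, 34, 36, 38, 39], [10, 11, 13, 16, 33, 35, 37, 40],
    [10, 12, 14, 16, 25, 26, 29, 30], [11, 12, 15, 16, 17, 18, 21, 22],
    [17, 19, 21, 23, 26, 28, 30, 32], [17, 20, 22, 23, 35, 36, 37, 39],
    [18, 19, 21, 24, 33, 34, 38, 40], [18, 20, 22, 24, 25, 27, 29, 31],
    [25, 28, 30, 31, 33, 36, 37, 38], [26, 27, 29, 32, 34, 35, 39, 40],
]

# every octad must be pairwise orthogonal, hence a basis of R^8
for b, base in enumerate(BASES, start=1):
    for i, j in itertools.combinations(base, 2):
        assert rays[i] @ rays[j] == 0, (b, i, j)

# every ray occurs once in the pure bases 1-5 and four times in bases 6-25
for r in range(1, 41):
    assert sum(r in base for base in BASES[:5]) == 1
    assert sum(r in base for base in BASES[5:]) == 4
print("Tables T1 and T2 are consistent.")
```

The same data structures can be reused to check the even-occurrence property of the 15-base KS sets discussed below.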
Since these KS sets have 20 rays that occur twice each, 20 rays that occur four times each among its 15 bases, and each base contains 8 rays, they are labeled as $20_220_4 \textendash 15_8$ [@R17]. The 15 bases are contributed by 5 $PB$s and 10 $HB$s. An example of $20_220_4 \textendash 15_8$ KS sets is given in Table \[T3\]. ---- ------ ------ ------ ------ ------ ------ ------ ------ 1 *1* *2* *3* 4 *5* 6 7 8 2 *9* 10 11 12 *13* *14* *15* 16 3 17 18 *19* 20 *21* 22 *23* *24* 4 25 26 27 *28* 29 *30* *31* *32* 5 *33* *34* 35 *36* 37 *38* 39 40 6 *1* *2* *3* 4 *13* *14* *15* 16 7 *1* *2* *5* 6 *21* 22 *23* *24* 8 *1* *3* *5* 7 29 *30* *31* *32* 10 *2* *3* *5* 8 *33* *34* 35 *36* 14 *9* 10 *13* *14* *19* 20 *23* *24* 15 *9* 11 *13* *15* 27 *28* *31* *32* 16 *9* 12 *14* *15* *34* *36* *38* 39 20 17 *19* *21* *23* 26 *28* *30* *32* 22 18 *19* *21* *24* *33* *34* *38* 40 24 25 *28* *30* *31* *33* *36* 37 *38* ---- ------ ------ ------ ------ ------ ------ ------ ------ : KS set that consists of 40 rays and 15 bases. The 20 rays that occur four times each are typed in italic and the 20 rays that occur twice each are in plain type.\[T3\] The KS sets in the form of $20_220_4 \textendash 15_8$ is constructed completely by rank-1 projectors. However, they can easily be transformed to KS sets that composed merely of rank-2 projectors, see Section \[Sec3\]. A Concrete Example: Steps of Construction {#Sec3} ========================================= Example given in Table \[T3\] is a KS set that involves 40 rank-1 projectors. We propose in this section steps to transform it to a KS set that involves 30 rank-2 projectors, where each of the projectors occurs twice among the 15 bases, as is shown in Table \[T4\]. 1 (*1*, 7) (*2*, 8) (*3*, 4) (*5*, 6) ---- ------------ -------------- ------------ -------------- 2 (*9*, 12) (*13*, 16) (*14*, 10) (*15*, 11) 3 (*19*, 20) (*21*, 22) (*23*, 17) (*24*, 18) 4 (*28*, 27) (*30*, 29) (*31*, 25) (*32*, 26) 5 (*33*, 35) (*34*, 40) (*36*, 37) (*38*, 39) 6 (*1*, *2*) (*3*, 4) (*13*, 16) (*14*, *15*) 7 (*1*, *2*) (*5*, 6) (*21*, 22) (*23*, *24*) 8 (*3*, *5*) (*1*, 7) (*30*, 29) (*31*, *32*) 10 (*3*, *5*) (*2*, 8) (*33*, 35) (*34*, *36*) 14 (*14*, 10) (*9*, *13*) (*19*, 20) (*23*, *24*) 15 (*15*, 11) (*9*, *13*) (*28*, 27) (*31*, *32*) 16 (*9*, 12) (*14*, *15*) (*38*, 39) (*34*, *36*) 20 (*23*, 17) (*19*, *21*) (*32*, 26) (*28*, *30*) 22 (*24*, 18) (*19*, *21*) (*34*, 40) (*33*, *38*) 24 (*31*, 25) (*28*, *30*) (*36*, 37) (*33*, *38*) : KS set consists of 30 rank-2 projectors obtained from the KS set given in Table \[T3\].[]{data-label="T4"} The rank-1 projectors in italic for a specific $PB_i$ form the set $\Gamma^i$, and the remaining rank-1 projectors form the set $\neg \Gamma^i$. Our steps of construction are guided by the following three rules: 1. Rule 1 ($\Re1$):\ For $\Gamma^i= \{ \alpha, \beta, \gamma, \delta \}$, we can extract 4 $HB$s that contain subsets labeled by $\Gamma^i_j$, i.e., $\Gamma^i_1=\{ \alpha, \beta, \gamma \}$, $\Gamma^i_2=\{ \alpha, \beta, \delta \}$, $\Gamma^i_3=\{ \alpha, \gamma, \delta \}$ and $\Gamma^i_4=\{ \beta, \gamma, \delta \}$. 2. Rule 2 ($\Re2$):\ Rank-1 projectors from $\Gamma^i$ must be coupled with rank-1 projectors from $\neg \Gamma^i$ to form 4 rank-2 projectors in $PB$ and each of these rank-2 projectors repeats itself once in $HB$. 3. Rule 3 ($\Re3$):\ Rays from $\Gamma^i$ must form 2 rank-2 projectors in $HB$. Note that the sequence of the above rules must be taken care of. 
It is important to apply the rules in the given order, i.e., $\Re1$ first, followed by $\Re2$ and lastly $\Re3$. Now, let us apply them to our example. Step 1 ($S1$) : Take $\Gamma^1 = \{R1, R2, R3, R5\}$. Apply $\Re1$, $\Re2$ and $\Re3$. The results obtained after the execution of $S1$ are shown in Table \[T5\]. Note that $\alpha = R1$, $\beta = R2$, $\gamma = R3$ and $\delta = R5$. By applying $\Re1$, we obtained bases 6, 7, 8 and 10. These bases contain $\Gamma^1_1 = \{R1, R2, R3\}$, $\Gamma^2_2 = \{R1, R2, R5\}$, $\Gamma^2_3 = \{R1, R3, R5\}$ and $\Gamma^2_4 = \{R2, R3, R5\}$, respectively. By applying $\Re2$, namely coupling the rays from $\Gamma^1$ to the rays from $\neg \Gamma^1=\{ R4, R6, R7, R8 \} $, we obtain 4 rank-2 projectors in base 1. Note that the 4 rank-2 projectors in base 1 repeat themselves in the other 4 bases, as shown in Table \[T5\]. By applying $\Re3$, we obtain rank-2 projectors (*1*, *2*) and (*3*, *5*). All the rank-2 projectors formed are written in parentheses. 1 (*1*, 7) (*2*, 8) (*3*, 4) (*5*, 6) ---- ------------ ---------- ---------- ---------- 6 (*1*, *2*) (*3*, 4) 7 (*1*, *2*) (*5*, 6) 8 (*3*, *5*) (*1*, 7) 10 (*3*, *5*) (*2*, 8) : Rank-2 projectors obtained after the execution of $S1$. \[T5\] Step 2 ($S2$) : Take $\Gamma^2 = \{R9, R13, R14, R15\}$. Apply $\Re1$, $\Re2$ and $\Re3$. The results obtained after the execution of $S2$ are shown in Table \[T6\]. Applying $\Re1$ produces bases 6, 14, 15 and 16. Applying $\Re2$ produces (*9*, 12), (*13*, 16), (*14*, 10) and (*15*, 11). Applying $\Re3$ produces (*9*, *13*) and (*14*, *15*). As for the results of $S1$, carrying out the three rules in $S2$ produces six pairs of rank-2 projectors. Note that (*1*, *2*) and (*3*, 4) in base 6 have been produced prior to the execution of $S2$. 2 (*9*, 12) (*13*, 16) (*14*, 10) (*15*, 11) ---- ------------ -------------- ------------ -------------- 6 (*1*, *2*) (*3*, 4) (*13*, 16) (*14*, *15*) 14 (*14*, 10) (*9*, *13*) 15 (*15*, 11) (*9*, *13*) 16 (*9*, 12) (*14*, *15*) : Rank-2 projectors obtained after the execution of $S2$. \[T6\] Step 3 ($S3$) : Take $\Gamma^3 = \{R19, R21, R23, R24\}$. Apply $\Re1$, $\Re2$ and $\Re3$. The results obtained after the execution of $S3$ are shown in Table \[T7\]. Applying $\Re1$ produces bases 7, 14, 20 and 22. Applying $\Re2$ produces (*19*, 20), (*21*, 22), (*23*, 17) and (*24*, 18). Applying $\Re3$ produces (*19*, *21*) and (*23*, *24*). Note that (*1*, *2*) and (*5*, 6) in base 7 and (*14*, 10) and (*9*, *13*) in base 14 have been produced prior to the execution of $S3$. 3 (*19*, 20) (*21*, 22) (*23*, 17) (*24*, 18) ---- ------------ -------------- ------------ -------------- 7 (*1*, *2*) (*5*, 6) (*21*, 22) (*23*, *24*) 14 (*14*, 10) (*9*, *13*) (*19*, 20) (*23*, *24*) 20 (*23*, 17) (*19*, *21*) 22 (*24*, 18) (*19*, *21*) : Rank-2 projectors obtained after the execution of $S3$. \[T7\] Step 4 ($S4$) : Take $\Gamma^4 = \{R28, R30, R31, R32\}$. Apply $\Re1$, $\Re2$ and $\Re3$. The results obtained after the execution of $S4$ are shown in Table \[T8\]. Applying $\Re1$ produces bases 8, 15, 20 and 24. Applying $\Re2$ produces (*28*, 27), (*30*, 29), (*31*, 25) and (*32*, 26). Applying $\Re3$ produces (*28*, *30*) and (*31*, *32*). Note that (*3*, *5*) and (*1*, 7) in base 8, (*15*, 11) and (*9*, *13*) in base 15 and (*23*, 17) and (*19*, *21*) in base 20 have been produced prior to the execution of $S4$. 
4 (*28*, 27) (*30*, 29) (*31*, 25) (*32*, 26) ---- ------------ -------------- ------------ -------------- 8 (*3*, *5*) (*1*, 7) (*30*, 29) (*31*, *32*) 15 (*15*, 11) (*9*, *13*) (*28*, 27) (*31*, *32*) 20 (*23*, 17) (*19*, *21*) (*32*, 26) (*28*, *30*) 24 (*31*, 25) (*28*, *30*) : Rank-2 projectors obtained after the execution of $S4$. \[T8\] Step 5 ($S5$) : Take $\Gamma^5 = \{R33, R34, R36, R38\}$. Apply $\Re1$, $\Re2$ and $\Re3$. The results obtained after the execution of $S5$ are shown in Table \[T9\]. Applying $\Re1$ produces bases 10, 16, 22 and 24. Applying $\Re2$ produces (*33*, 35), (*34*, 40), (*36*, 37) and (*38*, 39). Applying $\Re3$ produces (*33*, *38*) and (*34*, *36*). Note that (*3*, *5*) and (*2*, 8) in base 10, (*9*, 12) and (*14*, *15*) in base 16, (*24*, 18) and (*19*, *21*) in base 22 and (*31*, 25) and (*28*, *30*) in base 24 have been produced prior to the execution of $S5$. 5 (*33*, 35) (*34*, 40) (*36*, 37) (*38*, 39) ---- ------------ -------------- ------------ -------------- 10 (*3*, *5*) (*2*, 8) (*33*, 35) (*34*, *36*) 16 (*9*, 12) (*14*, *15*) (*38*, 39) (*34*, *36*) 22 (*24*, 18) (*19*, *21*) (*34*, 40) (*33*, *38*) 24 (*31*, 25) (*28*, *30*) (*36*, 37) (*33*, *38*) : Rank-2 projectors obtained after the execution of S5. \[T9\] Table \[T5\] to Table \[T9\] list in parentheses the rank-2 projectors formed after the execution of $S1$ to $S5$, respectively, and it is conspicuous that there are overlapping bases. After the completion of the five steps, we extract every different bases once, and for those that occur more than once, we pick the one that is maximally filled. The result obtained would be a KS set shown in Table \[T4\]. As there are 30 rank-2 projectors and each of them occurs twice among the 15 bases, the KS set obtained can be used to provide state independent parity proof of the KS theorem. Discussion {#Sec4} ========== The scheme proposed in Sec. \[Sec3\] is conceived based on the properties shared by all KS sets in the type of $20_220_4 \textendash 15_8$. Apart from the features reflected by the symbol $20_220_4 \textendash 15_8$, we would like to stress that these 15 bases must be composed of 5 $PB$s an 10 $HB$s. Most importantly, the 20 rays that repeat four times each provide us clues to form the rank-2 projectors. Due to the common features shared, $S1$ to $S5$ used to construct KS set of 30 rank-2 projectors in Sec. \[Sec3\] can be generalized and apply to all 64 KS sets with $20_220_4 \textendash 15_8$, as follows, Step 1 ($S1^\prime$) : Apply $\Re1$, $\Re2$ and $\Re3$ to $\Gamma^1$.\ Step 2 ($S2^\prime$) : Apply $\Re1$, $\Re2$ and $\Re3$ to $\Gamma^2$.\ Step 3 ($S3^\prime$) : Apply $\Re1$, $\Re2$ and $\Re3$ to $\Gamma^3$.\ Step 4 ($S4^\prime$) : Apply $\Re1$, $\Re2$ and $\Re3$ to $\Gamma^4$.\ Step 5 ($S5^\prime$) : Apply $\Re1$, $\Re2$ and $\Re3$ to $\Gamma^5$.\ In $S1$ of the example in Sec. \[Sec3\], there are in fact three ways to form rank-2 projectors while applying $\Re2$ to base 1. Specifically, R1 can couple either with R4, R6 or R7 to form (*1*, 4), (*1*, 6) or (*1*, 7), respectively. On the other hand, (*1*, 8) is disallowed as it doest not appear the second time in any bases of 6, 7, 8 or 10. The application of $\Re3$ in $S1$ corresponding to the options of (*1*, 4), (*1*, 6) or (*1*, 7) produces three pair of rank-2 projectors, i.e., (*1*, *5*) and (*2*, *3*), (*1*, *3*) and (*2*, *5*) or (*1*, *2*) and (*3*, *5*), respectively. Similar situation happens during $S2$ to $S4$ as well. 
Therefore, based on the generalization, we know that there are three ways in each step, from $S1^\prime$ to $S5^\prime$, to form 6 rank-2 projectors, and the total number of KS sets of 30 rank-2 projectors that can be obtained from each of the KS sets of the type $20_220_4 \textendash 15_8$ is $3^5=243$. Each application of $\Re2$ and $\Re3$ produces 4 and 2 rank-2 projectors, respectively. This clearly explains why there are in total 30 rank-2 projectors formed upon the completion of $S1^\prime$ to $S5^\prime$. However, various combinations of invalidating or omitting $\Re2$ or $\Re3$ throughout the process of construction yield various numbers of rank-2 projectors, ranging from two to thirty. Let us now consider one of the scenarios and investigate how, without $\Re3$, the number of rank-2 projectors is affected. The aforementioned scheme needs to be further generalized as follows, Step 1 ($S1^{\prime \prime}$) : Apply $\Re1$ and $\Re2$ to $\Gamma^1$. Check if $\Re3$ is applicable.\ Step 2 ($S2^{\prime \prime}$) : Apply $\Re1$ and $\Re2$ to $\Gamma^2$. Check if $\Re3$ is applicable.\ Step 3 ($S3^{\prime \prime}$) : Apply $\Re1$ and $\Re2$ to $\Gamma^3$. Check if $\Re3$ is applicable.\ Step 4 ($S4^{\prime \prime}$) : Apply $\Re1$ and $\Re2$ to $\Gamma^4$. Check if $\Re3$ is applicable.\ Step 5 ($S5^{\prime \prime}$) : Apply $\Re1$ and $\Re2$ to $\Gamma^5$. Check if $\Re3$ is applicable.\ Note that if $\Re3$ is applicable, it increases the number of rank-2 projectors formed by two every time we apply it. In $S2$ of our example (cf. Sec. \[Sec3\]), the choice of rank-2 projectors for base 2 shown in Table \[T6\] guarantees the applicability of $\Re3$. There are two more ways that make $\Re3$ applicable in $S2$. However, we can, for example, choose (*9*, 10), (*13*, 16), (*14*, 12) and (*15*, 11) for base 2 instead, but this will then make $\Re3$ inapplicable. There are in total six ways of forming rank-2 projectors for base 2 that make $\Re3$ inapplicable. Table \[T10\] lists all the nine ways of forming rank-2 projectors for base 2. The same situation occurs in $S3$ to $S5$ as well. (*9*, 12) (*13*, 16) (*14*, 10) (*15*, 11) ----------- ------------ ------------ ------------ (*9*, 11) (*13*, 10) (*14*, 16) (*15*, 12) (*9*, 10) (*13*, 11) (*14*, 12) (*15*, 16) (*9*, 10) (*13*, 16) (*14*, 12) (*15*, 11) (*9*, 11) (*13*, 16) (*14*, 10) (*15*, 12) (*9*, 10) (*13*, 11) (*14*, 16) (*15*, 12) (*9*, 12) (*13*, 10) (*14*, 16) (*15*, 11) (*9*, 11) (*13*, 10) (*14*, 12) (*15*, 16) (*9*, 12) (*13*, 11) (*14*, 10) (*15*, 16) : Each of the nine rows shows a different way of forming rank-2 projectors for base 2 as a result of applying $\Re2$. The first three ways make $\Re3$ applicable, while the other six ways render $\Re3$ inapplicable. The way shown in the first row is the one adopted in Table \[T6\]. \[T10\] In the scenarios where $\Re2$ and $\Re3$ are both applicable, we always have the freedom to choose not to apply $\Re3$ after the execution of $\Re2$, depending on how many rank-2 projectors we aim to obtain in the transformed KS sets. However, in $S1$, as mentioned before, there are three ways of applying $\Re2$ on base 1 that guarantee the applicability of $\Re3$, and there is no case in which $\Re2$ is satisfied while $\Re3$ fails. Again, our analysis of the example in Sec. \[Sec3\] can be generalized to $S1^{\prime \prime}$ to $S5^{\prime \prime}$. A short enumeration that reproduces the nine pairings of Table \[T10\] is sketched below.
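The following enumeration is ours (not part of the Letter): it pairs each ray of $\Gamma^2=\{R9,R13,R14,R15\}$ with a ray of $\{R10,R11,R12,R16\}$ subject to the requirement of $\Re2$ that both members of a pair occur together in one of the hybrid bases 6, 14, 15 and 16.

```python
# Enumerate the admissible R2 pairings for base 2 (reproduces Table T10).
from itertools import permutations

HB = {6: {1, 2, 3, 4, 13, 14, 15, 16}, 14: {9, 10, 13, 14, 19, 20, 23, 24},
      15: {9, 11, 13, 15, 27, 28, 31, 32}, 16: {9, 12, 14, 15, 34, 36, 38, 39}}
gamma, rest = (9, 13, 14, 15), (10, 11, 12, 16)

def allowed(a, b):
    # rule R2: the pair must reappear in one of the hybrid bases above
    return any({a, b} <= base for base in HB.values())

ways = [tuple(zip(gamma, p)) for p in permutations(rest)
        if all(allowed(a, b) for a, b in zip(gamma, p))]
for w in ways:
    print(w)
print(len(ways))          # 9, in agreement with Table T10
```

Exactly nine of the $4!=24$ candidate pairings survive, in agreement with Table \[T10\].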
In short, there are three (six) ways of forming 4 rank-2 projectors in $S1^{\prime \prime}$ (each of $S2^{\prime \prime}$ to $S5^{\prime \prime}$) by applying $\Re2$ and not to execute $\Re3$ although it is applicable, three ways of forming 6 (4+2) rank-2 projectors in each of $S1^{\prime \prime}$ to $S5^{\prime \prime}$ by applying both $\Re2$ and $\Re3$ and six ways of forming 4 rank-2 projectors in each of $S2^{\prime \prime}$ to $S5^{\prime \prime}$ by applying only $\Re2$ due to the inapplicability of $\Re3$. Table \[T11\] shows the numbers of KS sets with various numbers of rank-1 and rank-2 projectors that can be generated via the adjustment on the number of times $\Re3$ is applied throughout $S2^{\prime \prime}$ to $S5^{\prime \prime}$ (we always apply $\Re3$ on $S1^{\prime \prime}$ for the ease of computation in Table \[T11\]). Note that as $N_{\Re3}$ does not reflect specifically at which step the $\Re3$ is inapplicable or not to be executed (in the case of $\Re3$ is applicable), the result of $N_{KS}$ shown is for only one case. So far we consider only one of the examples of KS sets in the form of $20_220_4 \textendash 15_8$, it is obvious that the number of KS sets with the mixture of rank-1 and rank-2 projectors that can be generated from our scheme is indeed huge. Finally, note that when $N_{\Re3}=0$, $S1^{\prime \prime}$ to $S5^{\prime \prime}$ reduced to $S1$ to $S5$ , and $N_{KS}=243$ is the same as the number of KS sets we deduced before in our example. $N_{\Re3}$ $N_{KS}$ $N_{2}$ $N_{1}$ ------------ ------------------------- --------- --------- 0 $3^5 \times 6^0 = 243$ 30 0 1 $3^4 \times 6 = 486$ 28 4 2 $3^3 \times 6^2 = 972$ 26 8 3 $3^2 \times 6^3 = 1944$ 24 12 4 $3 \times 6^4 = 3888$ 22 16 : The number of KS sets generated by applying $\Re1$ and $\Re2$ while invalidating or not executing $\Re3$ throughout $S2^{\prime \prime}$ to $S5^{\prime \prime}$. Note that $\Re3$ is always executed on $S1^{\prime \prime}$ here. The symbols $N_{\Re3}$, N$_{KS}$, $N_{2}$ and $N_{1}$ denote the number of times $\Re3$ is invalidated or not executed, the number of KS sets generated, the number of rank-2 projectors formed and the number of the remaining rank-1 projectors, respectively. \[T11\] Conclusion {#Sec5} ========== We proposed a simple scheme of three rules supplemented by five steps to transform the $20_220_4 \textendash 15_8$ Kochen-Specker (KS) sets into KS sets that involve a mixture of rank-1 and rank-2 projectors. A concrete example is provided as illustration. By manipulating the rules throughout the five steps, we can determine the number of rank-2 projectors formed in the resultant KS sets. The simplest result obtained is the KS sets with 30 rank-2 projectors that occur twice each among 15 bases. To our knowledge, this is the first rank-2 projectors KS sets produced for three-qubit system based on the Mermin’s pentagram. It can be cast in the form of testable inequality proposed by Cabello (see first inequality in [@R3]) . It is also noteworthy that a considerable number of KS sets can be generated by our scheme without resorting to any computer calculation. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks B. A. Tay for improving the English in the manuscript. This work is supported by the Ministry of Higher Education of Malaysia (MOHE) under the FRGS grant FRGS/1/2011/ST/UNIM/03/1. [99]{} D. A. Meyer, *Phys. Rev. Lett. * **83** (1999) 3751. A. Kent, *Phys. Rev. Lett. * **83** (1999) 3755. A. Cabello, *Phys. Rev. Lett. 
* **101** (2008) 210401. P. Badzikag *et al*. *Phys. Rev. Lett. * **103** (2009) 050401. G. Kirchmair, *et al*. *Nature* **460** (2009) 494. H. Bartosik, *et al*. *Phys. Rev. Lett. * **103** (2010) 040403. E. Amselem, M. Radmark, M. Bourennane and A. Cabello, *Phys. Rev. Lett. * **103** (2009) 160405. G. Borges *et al*. arXiv: 1304.4512v1 \[quant-ph\]. O. Moussa, *Phys. Rev. Lett. * **104** (2010) 160501. K. Kochen and E. P. Specker, *J. Math. Mech. * **17** (1967) 59. A. Peres, *J. Phys. A: Math. Gen. * **24** (1991) L175. D. Mermin, *Rev. Mod. Phys. * **65** (1993) 803. J. H. Conway and S. Kochen, reported by A. Peres, in *Quantum Theory: Concepts and Method*, Dordrecht: Kluwer, 1993, p.114. A. Cabello, J. M. Estebaranz and G. García-Alcaine, *Phys. Lett. A* **212** (1996) 183. M. Kernaghan and A. Peres, *Phys. Lett. A* **198** (1995) 1. M. Kernaghan, *J. Phys. A: Math. Gen. * **27** (1994) L829. M. Waegell and P. K. Aravind, *J. Phys. A: Math. Theor. * **45** (2012) 405301. S. P. Toh, arXiv: 1207.5982v3 \[quant-ph\]. [^1]: Email address: [email protected]; [email protected] =0.5cm Tel: +6(03)8924 8628 Fax: +6(03)8924 8017
--- abstract: 'Low rank approximation ([*LRA*]{}) of a matrix is a hot subject of modern computations. In application to Big Data mining and analysis the input matrices are usually so immense that one must apply [*superfast*]{} algorithms, which only access a tiny fraction of the input entries and involve much fewer memory cells and flops[^1] than an input matrix has entries. Recently we devised and analyzed some superfast LRA algorithms; in this paper we extend a classical algorithm to superfast refinement of a crude but reasonably close LRA. We support our superfast refinement algorithm with some superfast heuristic recipes for a posteriori error estimation of LRA and with superfast back and forth transition between any LRA of a matrix and its SVD.' author: - 'Victor Y. Pan$^{[1, 2],[a]}$ and Qi Luan$^{[2],[b]}$\' - | \ $^{[1]}$ Department of Computer Science\ Lehman College of the City University of New York\ Bronx, NY 10468 USA\ $^{[2]}$ Ph.D. Programs in Computer Science and Mathematics\ The Graduate Center of the City University of New York\ New York, NY 10036 USA\ $^{[a]}$ [email protected]\ http://comet.lehman.cuny.edu/vpan/\ $^{[b]}$ qi\[email protected]\ title: Superfast Refinement of Low Rank Approximation of a Matrix --- #### **Key Words:** Low-rank approximation, Superfast algorithms, A posteriori error estimation, Iterative refinement #### **2000 Math. Subject Classification:** 65Y20, 65F30, 68Q25, 68W20 Introduction {#sintr} ============ [**(a) Superfast accurate LRA: the problem and our recent progress.**]{} Low rank approximation (LRA) of a matrix is a hot subject of Numerical Linear and Multilinear Algebra and Data Mining and Analysis, with applications ranging from machine learning theory and neural networks to term document data and DNA SNP data (see surveys [@HMT11], [@M11], and [@KS17]). Matrices representing Big Data are usually so immense that realistically one can only access a tiny fraction of their entries, but quite typically these matrices admit LRA, that is, are close to low rank matrices,[^2] with which one operates superfast, that is, by using much fewer memory cells and flops than an input matrix has entries. Every superfast LRA algorithm fails on a worst case input matrix and even on the small families of matrices of our Appendix \[shrdin\]. In worldwide computational practice and in our extensive tests, however, some superfast algorithms consistently compute reasonably accurate LRA of matrices admitting LRA, and the papers [@PLSZ16], [@PLSZ17], [@Pa], [@PLSZa], [@LPSa], [@LP20], and [@PLSZ20] provide some formal support for this empirical observation. Presently we complement that work by extending to LRA the popular methods for iterative refinement of the solution of a linear system of equations as well as of its least squares solution (see [@S98 Sections 3.3.4 and 4.2.5], [@H02 Chapter 12 and Section 20.5], [@DHK06], [@GL13 Sections 3.5.3 and 5.3.8], [@B15 Sections 1.4.6, 2.1.4, and 2.3.7]). Such refinement algorithms are also known for other matrix computations, including nonlinear ones (see [@S98 page 223 and the references on page 225]), but to the best of our knowledge they have not been applied to LRA so far, possibly because superfast computation of LRA has not been studied as much as it deserves. Our work seems to be the first attempt of this kind; our tests provide its empirical support.
We support our superfast refinement algorithm with some superfast heuristic recipes for a posteriori error estimation of LRA and with superfast back and forth transition between any LRA of a matrix and its SVD. [**Organization of our paper.**]{} In the next section we recall some background material. In Section \[slrasvd\] we transform any LRA into its SVD superfast. We describe a superfast algorithm for iterative refinement of an LRA in Sections \[sitref\] and study the problem of superfast a posteriori error estimation for LRA in Section \[serrest\]. We devote Section \[ststs\], the contribution of the second author, to numerical tests. In Appendix \[shrdin\] we specify some small families of matrices on which every LRA algorithm fails if it runs superfast, even though every matrix of these families admits LRA. In Appendix \[spstr\] we recall the known estimates for the output errors of superfast LRA in a very special but important case where an input matrix is filled with independent identically distributed [*(i.i.d.)*]{} values of a single random variable. In Appendix \[ssvdcur\] we transform SVD into LRA superfast. LRA background {#sbckgr} ============== Hereafter $|\cdot|$ unifies definitions of the spectral norm $||\cdot||$ and the Frobenius norm $||\cdot||_F$. \[defrnk\] (i) An $m\times n$ matrix $\tilde M$ has rank at most $\rho$, $\operatorname{rank}(\tilde M)\le \rho$, if $$\label{eqrnkr} \tilde M=AB,~A\in \mathbb C^{m\times \rho},~{\rm and}~B\in \mathbb C^{\rho\times n}.$$ (ii) An $m\times n$ matrix $M$ has $\epsilon$-rank at most $\rho$ for a fixed tolerance $\epsilon$ if $M$ admits its approximation within an error norm $\epsilon$ by a matrix $\tilde M$ of rank at most $\rho$ or equivalently if there exist three matrices $A$, $B$ and $E$ such that $$\label{eqlra} M=\tilde M+E~{\rm where}~|E|\le \epsilon |M|,~\tilde M=AB,~A\in \mathbb C^{m\times \rho},~{\rm and}~B\in \mathbb C^{\rho\times n}.$$ $\epsilon$-rank is numerically unstable if $\epsilon$ lies in or near a cluster of singular values of $M$, but otherwise it is convenient to define [*numerical rank*]{} of a matrix $M$ as its $\epsilon$-rank for a small tolerance $\epsilon$ and to say that a matrix admits its close approximation by a matrix of rank at most $\rho$ if and only if it has numerical rank at most $\rho$. We adopt these common definitions and let $\operatorname{nrank}(M)$ denote numerical rank of $M$. A 2-factor LRA $AB$ of $M$ of (\[eqlra\]) can be generalized to a 3-factor LRA $$\label{eqrnkrho} M=\tilde M+E,~|E|\le \epsilon,~\tilde M=XTY,~X\in \mathbb C^{m\times k},~T\in \mathbb C^{k\times l},~Y\in \mathbb C^{l\times n},$$ $$\label{eqklmn} \rho=\operatorname{rank}(\tilde M)\le k\le m,~\rho\le l\le n.$$ \[re3to2\] The pairs of the maps $XT\rightarrow A$ and $Y\rightarrow B$ as well as $X\rightarrow A$ and $TY\rightarrow B$ turn a 3-factor LRA $XTY$ of (\[eqrnkrho\]) into a 2-factor LRA $AB$ of (\[eqlra\]). An important 3-factor LRA of a matrix $M$ of rank at least $\rho$ is its $\rho$-[*top SVD*]{} $M^{(\rho)}=U^{(\rho)}\Sigma^{(\rho)} V^{(\rho)*}$ where $\Sigma^{(\rho)}$ is the diagonal matrix of the $\rho$ top (largest) singular values of $M$ and where $U^{(\rho)}$ and $V^{(\rho)}$ are the unitary (orthogonal) matrices of the associated top singular spaces of $M$. $M^{(\rho)}$ is said to be the $\rho$-[*truncation*]{} of $M$, obtained from $M$ by setting to zero all its singular values but the $\rho$ largest ones. 
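A minimal numpy sketch of these notions (ours, not from the paper; the matrix sizes, the noise level and the tolerance are arbitrary demo values) computes the $\rho$-truncation and the numerical rank via the SVD:

```python
# Illustrative sketch of the rho-truncation M^(rho) and of numerical rank.
import numpy as np

def rho_truncation(M, rho):
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rho] * s[:rho]) @ Vh[:rho, :]

def numerical_rank(M, eps):
    s = np.linalg.svd(M, compute_uv=False)
    # smallest rho with ||M - M^(rho)|| <= eps * ||M|| under the spectral norm
    return int(np.sum(s > eps * s[0]))

rng = np.random.default_rng(0)
A, B = rng.standard_normal((500, 5)), rng.standard_normal((5, 300))
M = A @ B + 1e-8 * rng.standard_normal((500, 300))   # numerically rank 5
print(numerical_rank(M, 1e-6))                       # expect 5
print(np.linalg.norm(M - rho_truncation(M, 5), 2))   # roughly the noise norm
```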
$M^{(\rho)}=M$ for a matrix $M$ of rank $\rho$, and then its $\rho$-top SVD is just its [*compact SVD*]{} $$M=U_M\Sigma_MV^*_M,~{\rm for}~U_M=U^{(\rho)},~\Sigma_M=\Sigma^{(\rho)},~{\rm and}~V_M=V^{(\rho)}.$$ The $\rho$-top SVD of a matrix defines its optimal 3-factor LRA under both spectral and Frobenius norms: \[thtrnc\] [[@GL13 Theorem 2.4.8].]{} Write $\tau_{\rho+1}(M):=|M^{(\rho)}-M|= \min_{N:~\operatorname{rank}(N)=\rho} |M-N|.$ Then $\tau_{\rho+1}(M)=\sigma_{\rho+1}(M)$ under the spectral norm $|\cdot|=||\cdot||$ and $\tau_{\rho+1}(M)=\sigma_{F,\rho+1}(M):=\Big(\sum_{j\ge \rho+1}\sigma_j^2(M)\Big)^{1/2}$ under the Frobenius norm $|\cdot|=||\cdot||_F$. \[lesngr\] [[@GL13 Corollary 8.6.2].]{} For $m\ge n$ and a pair of ${m\times n}$ matrices $M$ and $M+E$ it holds that $$|\sigma_j(M+E)-\sigma_j(M)|\le||E||~{\rm for}~j=1,\dots,n.$$ \[lehg\] [\[The norm of the pseudo inverse of a matrix product.\]]{} Suppose that $A\in\mathbb C^{k\times r}$, $B\in\mathbb C^{r\times l}$ and the matrices $A$ and $B$ have full rank $r\le \min\{k,l\}$. Then $|(AB)^+| \le |A^+|~|B^+|$. Computation of a $\rho$-top SVD of an LRA {#slrasvd} ========================================= The following simple algorithm computes the $\rho$-top SVD of an LRA. \[alglratpsvd\] [\[Computation of a $\rho$-top SVD of an LRA.\]]{} [Input:]{} : Four integers $\rho$, $k$, $m$, and $n$ such that $0<\rho\le k\le \min\{m,n\}$ and two matrices $A\in \mathbb C^{m\times k}$ and $B\in \mathbb C^{k\times n}$. [Output:]{} : Three matrices $U\in \mathbb C^{m\times \rho }$ (unitary), $\Sigma\in \mathbb R^{\rho \times \rho}$ (diagonal), and $V\in \mathbb C^{n\times \rho}$ (unitary) such that $(AB)^{(\rho)}=U\Sigma V^*$ is a $\rho$-top SVD of $AB$. [Computations:]{} : 1. Compute SVDs $A=U_A\Sigma_AV^*_A$ and $B=U_B\Sigma_BV^*_B$ where $U_A\in \mathbb C^{m\times k}$, $V_B^*\in \mathbb C^{k\times n}$, and $\Sigma_A,V^*_A,U_B,\Sigma_B\in \mathbb C^{k\times k}$. 2. Compute $k\times k$ matrices $W=\Sigma_AV^*_AU_B\Sigma_B$, $U_W$, $\Sigma_W$, and $V^*_W$ such that $W=U_W\Sigma_WV_W^*$ is SVD, $\Sigma_W=\operatorname{diag}(\Sigma,\Sigma')$, and $\Sigma=\operatorname{diag}(\sigma_j)_{j=1}^{\rho}$ and $\Sigma'=\operatorname{diag}(\sigma_j)_{j=\rho+1}^{k}$ are the matrices of the $\rho$ top (largest) and the $k-\rho$ trailing (smallest) singular values of the matrix $W$, respectively. Output the matrix $\Sigma$. 3. Compute and output the matrices $U$ and $V$ made up of the first $\rho$ columns of the matrices $U_AU_W$ and $V_BV_W$, respectively. The algorithm involves $O((m+n)k^2)$ flops; it is superfast if $k^2\ll \min\{m,n\}$. Its correctness follows from equations $AB=U_AWV^*_B$, $W=U_W\Sigma_W V^*_W$, and $\Sigma_W=\operatorname{diag}(\Sigma,\Sigma')$. For every matrix $M$ the triangle inequality implies that $$\label{eqpr6.1} |M-(AB)^{(\rho)}|\le |M-AB|+|AB-(AB)^{(\rho)}|=|M-AB|+\tau_{\rho+1}(AB).$$ Furthermore $\tau_{\rho+1}(AB)\le |AB-M^{(\rho)}|\le |AB-M|+\tau_{\rho+1}(M)$, and so (cf. [@TYUC17 Proposition 6.1]) $$\label{eqtrsd} |M-(AB)^{(\rho)}|\le 2|M-AB|+\tau_{\rho+1}(M).$$ We can transform any 3-factor LRA $M=XTY$ at first into a 2-factor LRA (cf. Remark \[re3to2\]) and then into the $\rho$-top SVD by applying Algorithm \[alglratpsvd\]. Superfast iterative refinement of an LRA {#sitref} ======================================== Next we describe iterative refinement of a sufficiently close LRA of $M$. At the $i$th step we try to improve a current LRA $\tilde M^{(i)}$ by applying a fixed LRA algorithm to the current error matrix $E^{(i)}=M-\tilde M^{(i)}$.
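Before turning to the refinement scheme, we note that Algorithm \[alglratpsvd\] above admits a direct NumPy transcription; the following sketch (an illustration with arbitrarily chosen sizes, not the authors' code) follows the three computational stages, and its cost is dominated by the SVDs of the two thin factors, in agreement with the $O((m+n)k^2)$ flop bound.

```python
import numpy as np

def rho_top_svd_of_lra(A, B, rho):
    """rho-top SVD of the product A B (A is m x k, B is k x n) computed
    from the factors only, without forming and decomposing A B densely."""
    # Stage 1: SVDs of the two thin factors.
    UA, sA, VAh = np.linalg.svd(A, full_matrices=False)   # A = UA diag(sA) VAh
    UB, sB, VBh = np.linalg.svd(B, full_matrices=False)   # B = UB diag(sB) VBh
    # Stage 2: k x k core W = Sigma_A V_A^* U_B Sigma_B and its SVD.
    W = (sA[:, None] * (VAh @ UB)) * sB[None, :]
    UW, sW, VWh = np.linalg.svd(W)
    # Stage 3: first rho columns of U_A U_W and V_B V_W, and the rho top singular values.
    U = UA @ UW[:, :rho]
    V = VBh.conj().T @ VWh.conj().T[:, :rho]
    return U, sW[:rho], V

# usage sketch
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)); B = rng.standard_normal((20, 400))
U, S, V = rho_top_svd_of_lra(A, B, rho=10)
print(np.linalg.norm(A @ B - U @ np.diag(S) @ V.conj().T, 2))  # equals sigma_{rho+1}(AB)
```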
At every iteration the rank of the new tentative LRA is at least doubled, but we periodically cut it back to the value $\operatorname{nrank}(M)$ (see Remark \[reref\]). \[algitrrf\] [(Superfast iterative refinement of a CUR LRA. See Remarks \[reref\] and \[reprec\].)]{} [Input:]{} : Three integers $m$, $n$, and $\rho$, $\rho\le \min\{m, n\}$, an $m\times n$ matrix $M$ of numerical rank $\rho$, a Subalgorithm APPROX(r), which for a fixed positive integer $r$ computes a rank-$r$ approximation of its input matrix (this can be a superfast algorithm such as MAXVOL of [@GOSTZ10] or the algorithm of [@TYUC17] using a pair of sparse multipliers $\Omega$ and $\Psi$), and a Stopping Criterion, which signals when the current candidate LRA is accepted as satisfactory (see the next section). [Initialization:]{} : $\tilde M^{(0)}=O^{m\times n}$. [Computations:]{} : Recursively for $i =0, 1, 2,\dots$ do: 1. Apply Subalgorithm APPROX($r_{i}$) to the matrix $E^{(i)}=M-\tilde M^{(i)}$ for $r_{i}=\operatorname{rank}(\tilde M^{(i)})+\rho$. 2. Let $\Delta^{(i)}$ denote its output matrix of rank at most $r_{i}$. Compute a new approximation $\tilde M^{(i+1)}=\tilde M^{(i)}+ \Delta^{(i)}$ of $M$ and the matrix $E^{(i+1)}=M-\tilde M^{(i+1)}$ of numerical rank at most $2^i \rho$. 3. Replace $i$ by $i + 1$ and repeat stages 1 and 2 until either $i$ exceeds the allowed iteration bound, and then stop and output FAILURE, or until the Stopping Criterion is satisfied for some integer $i$, and then stop and output the matrix $\tilde M=\tilde M^{(i+1)}$. [[Progress in refinement.]{} ]{}\[reprgrss\] Write $e_{i}=|E^{(i)}|$ for all $i$ and observe that $E^{(i)}=E^{(i-1)}-\Delta^{(i-1)}$, and so $e_{i}<e_{i-1}$ if $\Delta^{(i-1)}$ approximates $E^{(i-1)}$ more closely than the matrix $O$ filled with 0s does. Furthermore equation $r_{i}=\operatorname{rank}(\tilde M^{(i)})+\rho$ at stage 1 implies that $\tau_{r_i+1}(M-\tilde M^{(i)})\le \tau_{\rho+1}(M)$. \[reref\] [Management of the rank growth.]{} The bound on the rank of the matrix $E^{(i)}$ is doubled at every iteration of Algorithm \[algitrrf\]; by allowing its increase we obtain a more accurate LRA, but increase the complexity of an iteration. In order to bound the complexity we periodically compress the computed LRA $\tilde M^{(i)}$ into its $\rho$-truncation $(\tilde M^{(i)})^{(\rho)}$ by applying Algorithm \[alglratpsvd\]. The compression increases the errors of LRA only within the bounds (\[eqpr6.1\]) and (\[eqtrsd\]) applied for $AB=\tilde M^{(i)}$ and $\rho=r_i$; the increase is less significant if the LRA $\tilde M^{(i)}$ is close to the input matrix $M$. \[reprec\] [Management of the precision of computing.]{} As is customary in iterative refinement, we apply a mixed precision technique, that is, we perform the subtraction at stage 2 with a higher precision than the computations at stage 1. Superfast heuristic a posteriori error estimation for LRA {#serrest} ========================================================= Superfast accurate a posteriori error estimation is impossible even for the small input families of Appendix \[shrdin\], but next we list some simple heuristic recipes for this task. (i) Clearly the value $|e_{i,j}|$ for every entry $e_{i,j}$ of the error matrix $E=(e_{i,j})_{i,j}=\tilde M-M$ of an LRA of (\[eqlra\]) is a lower bound on the norm $|E|$ of this matrix. \(ii) The above deterministic lower bound on the LRA error norm $|E|$ also implies its a posteriori randomized upper bound if the error matrix $E$ is filled with i.i.d.
values of a single random variable and has sufficiently many entries, e.g., 100 entries or more (see Appendix \[spstr\]). \(iii) By generalizing the technique of part (i) we obtain deterministic lower bounds on the error norm $|E|$ given by the ratios $|FE|/|F|$, $|EH|/|H|$ or $|FEH|/(|F| ~|H|)$ for any pair of matrices $F\in \mathbb C^{k\times m}$ and $H\in \mathbb C^{n\times l}$. The computation also defines randomized upper bounds on the error norm $|E|$ for random matrices $F$ and/or $H$ and sufficiently large integers $k$ and/or $l$ (see [@F79]) and is superfast if the matrices $F$ and $H$ are full rank submatrices of permutation matrices or are other sufficiently sparse matrices. \(iv) We used the following heuristic recipes in our tests of iterative refinement described in Section \[ststs\]. Suppose that we have computed a set of LRAs $\tilde M_1,\dots,\tilde M_s$ for a matrix $M$ by applying to it various superfast LRA algorithms or the same algorithm with distinct parameters, e.g., a superfast version of the algorithm of [@TYUC17], which we accelerate by means of replacing Gaussian multipliers ${\bf \Psi}$ and ${\bf \Omega}$ of [@TYUC17] with a pair of sparse multipliers. Furthermore suppose that we have computed a median $\tilde M=\tilde M_j$ for some $j$ in the range $1\le j\le s$. Then we can consider this median a heuristic LRA of $M$. For superfast computation of a median select the median index $j=j({\bf f},{\bf h})$ for the product ${\bf f}^*\tilde M_j{\bf h}$ over all subscripts $j$ in the range $1\le j\le s$ for a fixed pair of vectors ${\bf f}$ and ${\bf h}$. For another heuristic but more dependable selection one may first compute such medians for a number of pairs of vectors ${\bf f}$ and ${\bf h}$ and then select a median of these medians. All these recipes for superfast a posteriori error estimation can be applied to any LRA. In [@PLSZa] and [@PLSZ20] we deduce such estimates for LRA output by some specific superfast algorithms. Numerical Experiments {#ststs} ===================== In this subsection we present the test results for Algorithm \[algitrrf\] on inputs of four types made up of synthetic and real-world data with various spectra. Our figures display the relative error ratio $$r = \frac{||M - \tilde M||_F}{||M - M^{(\rho)}||_F}$$ where $M$ denotes the input matrix, $\tilde M$ denotes its approximation output by Algorithm \[algitrrf\], $M^{(\rho)}$ denotes the $\rho$ truncation of $M$, and $\rho$ is a target rank for each input matrix. Unless the output of the algorithm was corrupted by rounding errors, the ratio $r$ was not supposed to be noticeably exceeded by 1, and it was not in our experiments. The algorithm was implemented in Python, and we run all experiments on a 64bit MacOS Sierra 10.12.6 machine with 1.6GHz CPU and 4GB Memory. We called scipy.linalg version 0.4.9 for numerical linear algebra routines such as QR factorization with pivoting, Moore-Penrose matrix inversion, and linear least squares regression. [**Synthetic Input:**]{} We used random synthetic $1024\times 1024$ input matrices of two kinds – with fast and slowly decaying spectra. In both cases we generated these matrices as the products $U\Sigma V^T$, where $U$ and $V$ were the matrices of the left and right singular vectors of a random Gaussian matrix. By letting $\Sigma = \textrm{diag}(v)$, where $v_i = 1$ for $i = 1, 2, 3,\dots 40$, $v_i = \frac{1}{2}^{i}$ for $i = 41, \dots, 100$, and $v_i = 0$ for $i > 100$, we generated input matrices with fast decaying spectra. 
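To make the tested procedure concrete, the refinement loop of Algorithm \[algitrrf\], combined with a simple Gaussian sketching subalgorithm and the random-probe error heuristic of Section \[serrest\], might be coded along the following lines. This is a schematic, dense illustration with made-up parameters and a reduced matrix size, not our actual implementation; in particular it forms full residuals and so is not superfast as written.

```python
import numpy as np

def sketch_approx(E, r, rng):
    # One possible Subalgorithm APPROX(r): a single sketching step
    # E ~ (E H)(F E H)^+ (F E) with Gaussian multipliers F and H
    # (a superfast variant would use sparse multipliers instead).
    m, n = E.shape
    F = rng.standard_normal((r, m))
    H = rng.standard_normal((n, r))
    return (E @ H) @ np.linalg.pinv(F @ E @ H) @ (F @ E)

def refine(M, rho, max_iter=8, tol=1e-8, seed=1):
    rng = np.random.default_rng(seed)
    M_tilde = np.zeros_like(M)                # M^(0) = O
    rank_bound = 0
    for _ in range(max_iter):
        r = rank_bound + rho                  # r_i = rank(M^(i)) + rho
        M_tilde = M_tilde + sketch_approx(M - M_tilde, r, rng)
        rank_bound = min(rank_bound + r, min(M.shape))
        # heuristic stopping probe: |f^T E h| / (|f| |h|) is a lower bound on |E|
        f = rng.standard_normal(M.shape[0]); h = rng.standard_normal(M.shape[1])
        if abs(f @ (M - M_tilde) @ h) / (np.linalg.norm(f) * np.linalg.norm(h)) < tol:
            break
        # periodic compression back to rank rho (via the rho-top SVD transform)
        # would be inserted here to control the rank growth.
    return M_tilde

# usage on a reduced-size synthetic input with fast decaying spectrum as described above
rng = np.random.default_rng(2)
U, _, Vh = np.linalg.svd(rng.standard_normal((256, 256)))
v = np.zeros(256); v[:40] = 1.0; v[40:100] = 0.5 ** np.arange(41, 101)
M = U @ np.diag(v) @ Vh
print(np.linalg.norm(M - refine(M, rho=40)))
```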
By letting $\Sigma = \textrm{diag}(u)$, where $u_i = 1$ for $i = 1, 2, 3,\dots 40$, and $u_i = \frac{1}{1+ i}$ for $i > 40$, we generated input matrices with slowly decaying spectra. [**Real-world Input:**]{} The input matrices of this category were $1000\times 1000$ dense matrices with real values having low numerical rank. They represented discretizations of integral equations, provided among the built-in problems of the Regularization Tools[^3]. We used two test matrices. One of them, called [**gravity**]{}, came from a one-dimensional gravity surveying model problem; the other one, called [**shaw**]{}, came from a one-dimensional image restoration model problem. We display the distribution of their singular values in figure \[specGravityShaw\]; we padded these matrices with 0s in order to increase their size to $1024\times 1024$. **Subalgorithm:** Our subalgorithm modifies the sketching algorithm of [@TYUC17]. Namely, in the $i$th iteration step we fix two multipliers $F$ and $H$, then approximate the residual $R_i = M - \tilde{M}_{i-1}$ with $\tilde{R}_i = R_iH(FR_iH)^+FR_i$, and finally compute the $i$th approximation $\tilde{M}_i = \tilde{M}_{i-1} + \tilde{R}_i$. Unlike [@TYUC17] we skip orthogonalization of the matrix $R_iH$. We used Gaussian multipliers $F$ and $H$, as in [@TYUC17], where they are denoted ${\bf \Omega}$ and ${\bf \Psi}$, but only for comparison with the main results of our tests performed with sparse orthogonal multipliers, namely, with the *abridged* Hadamard multipliers of [@PLSZ16] and [@PLSZ17] having size $5\times 1024$ and recursion depth 3. For each input matrix, we performed Algorithm \[algitrrf\] 100 times, once for each of 100 random choices of the pair $F$ and $H$ of multipliers ${\bf \Omega}$ and ${\bf \Psi}$ in the algorithm of [@TYUC17]. We recorded the mean relative error ratio for every iteration step in figure \[testResult\]. The results of our tests with abridged SRHT multipliers were similar to the results with Gaussian multipliers and were only slightly worse in a few places. This minor deterioration of the output accuracy was a reasonable price for using abridged (very sparse) Hadamard multipliers, with which we only access a small fraction of the input matrix at each iteration step. In the tests for some random input matrices with slowly decaying spectra, the relative error ratio did not decrease to values close to 1, which could be caused by the “heavy” tail of the spectrum. In our tests with some other inputs that were not “well-mixed” in some sense, it was necessary to increase the recursion depth of the abridged Hadamard multipliers in order to bring the relative error ratio close to 1. [**Appendix**]{} Small families of hard inputs for superfast LRA {#shrdin} =============================================== Any sublinear cost LRA algorithm fails on the following small families of LRA inputs. \[exdlt\] Let $\Delta_{i,j}$ denote an $m\times n$ matrix of rank 1 filled with 0s except for its $(i,j)$th entry filled with 1. The $mn$ such matrices $\{\Delta_{i,j}\}_{i,j=1}^{m,n}$ form a family of $\delta$-[*matrices*]{}. We also include the $m\times n$ null matrix $O_{m,n}$ filled with 0s into this family. Now fix any sublinear cost algorithm; it does not access the $(i,j)$th entry of its input matrices for some pair of $i$ and $j$. Therefore it outputs the same approximation of the matrices $\Delta_{i,j}$ and $O_{m,n}$, with an undetected error at least 1/2.
Arrive at the same conclusion by applying the same argument to the set of $mn+1$ small-norm perturbations of the matrices of the above family and to the $mn+1$ sums of the latter matrices with any fixed $m\times n$ matrix of low rank. Finally, the same argument shows that a posteriori estimation of the output errors of an LRA algorithm applied to the same input families cannot run at sublinear cost. Superfast a posteriori error estimation for LRA of a matrix filled with i.i.d. values of a single variable {#spstr} ========================================================================================================== In our randomized a posteriori error estimation below we assume that the error matrix $E$ of an LRA has enough entries, say, 100 or more, and that they are the observed i.i.d. values of a single random variable. This is realistic, for example, where the deviation of the matrix $M$ from its rank-$\rho$ approximation is due to the errors of measurement or rounding. In this case the Central Limit Theorem implies that the distribution of the variable is close to Gaussian (see [@EW07]). Fix a pair of integers $q$ and $s$ such that $qs$ is large enough (say, exceeds 100), but $qs=O((m+n)kl )$ and $qs\ll mn$; then apply our tests just to a random $q\times s$ submatrix of the $m\times n$ error matrix. Under this policy we compute the error matrix at a dominated arithmetic cost of $O((m+n)kl)$ but still verify correctness with high confidence, by applying the customary rules of [*hypothesis testing for the variance of a Gaussian variable.*]{} Namely, suppose that we have observed the values $g_1,\dots,g_K$ of a Gaussian random variable $g$ with a mean value $\mu$ and a variance $\sigma^2$ and that we have computed the observed average value and variance $$\mu_K=\frac{1}{K}\sum_{i=1}^K g_i~ {\rm and }~\sigma_K^2=\frac{1}{K}\sum_{i=1}^K |g_i-\mu_K|^2,$$ respectively. Then, for a fixed reasonably large $K$, both $${\rm Probability}~\{|\mu_K-\mu|\ge t|\mu|\}~{\rm and~Probability}\{|\sigma_K^2-\sigma^2|\ge t\sigma^2\}$$ converge to 0 exponentially fast as $t$ grows to infinity (see [@C46]). From $\rho$-top SVD to CUR LRA {#ssvdcur} ============================== In Section \[slrasvd\] we have readily computed the $\rho$-top SVD of an LRA of an $m\times n$ matrix $M$, and this computation is superfast if $k^2\ll \min\{m,n\}$. Next we complement that algorithm with transformation of the $\rho$-top SVD of a matrix $M$ into its rank-$\rho$ CUR decomposition, which is also superfast for small $\rho$. The $\rho$-top SVD of a matrix $M$ is the SVD of $M^{(\rho)}$, and so every nonsingular $\rho\times \rho$ submatrix $G$ of $M^{(\rho)}$ generates its exact CUR decomposition, which is an optimal rank-$\rho$ approximation of $M$; we are going to stabilize this decomposition numerically by bounding the norm of the nucleus $G^{+}$. \[algsvdtocur\] [*\[Transition from $\rho$-top SVD to CUR LRA.\]*]{} [Input:]{} : Five integers $k$, $l$, $m$, $n$, and $\rho$ satisfying (\[eqklmn\]) and four matrices $M\in \mathbb R^{m\times n}$, $\Sigma\in \mathbb R^{\rho\times \rho}$ (diagonal), $U\in \mathbb R^{m\times \rho}$, and $V\in \mathbb R^{n\times \rho}$ (both orthogonal) such that $M:=U\Sigma V^*$ is SVD. [Output:]{} : Three matrices[^4] $C\in \mathbb R^{m\times l}$, $N\in \mathbb R^{l\times k}$, and $R\in \mathbb R^{k\times n}$ such that $C$ and $R$ are submatrices made up of $l$ columns and $k$ rows of $M$, respectively, and $$M=CNR.$$ [Computations:]{} : 1.
By applying to the matrices $U$ and $V$ the algorithms of [@GE96] or [@P00] compute the submatrices $U_{\mathcal I,:}\in \mathbb R^{k\times \rho}$ and $V^*_{:,\mathcal J}\in \mathbb R^{\rho\times l}$, respectively. Output the CUR factors $C= U\Sigma V^*_{:,\mathcal J}$ and $R= U_{\mathcal I,:}\Sigma V^*$. 2. Define a CUR generator $G:= U_{\mathcal I,:}\Sigma V^*_{:,\mathcal J}$ and output a nucleus $N:=G^{+}= V_{:,\mathcal J}^{*+}\Sigma^{-1} U_{\mathcal I,:}^{+}$. \[Prove the latter equation by verifying the Moore – Penrose conditions for the matrix $G^+$.\] [*Correctness verification.*]{} Substitute the expressions for $C$, $N$ and $R$ and obtain $CNR=(U\Sigma V^*_{:,\mathcal J}) (V_{:,\mathcal J}^{*+}\Sigma^{-1} U_{\mathcal I,:}^+)(U_{\mathcal I,:}\Sigma V^*)$. Substitute the equations $V^*_{:,\mathcal J}V^{*+}_{:,\mathcal J}= U_{\mathcal I,:}^+U_{\mathcal I,:}=I_{\rho}$, which hold because $V^*_{:,\mathcal J}\in \mathbb R^{\rho\times l}$, $U_{\mathcal I,:}^+\in \mathbb R^{k\times \rho}$, and $\rho\le \min\{k,l\}$ by assumption, and obtain $CNR=U\Sigma V^*=M'$. [*Cost bounds.*]{} The algorithm uses $nk+ml+kl$ memory cells and $O(mk^2+nl^2)$ flops; these cost bounds are sublinear for $k^2\ll n$ and $l^2\ll m$ and are dominated at stage 2. Let us also estimate [*the norm of the nucleus*]{} $||N||$. Observe that $||N||\le || V_{:,\mathcal J}^{*+}|| ~||\Sigma^{-1}||~ ||U_{\mathcal I,:}^{+}||$ by virtue of Lemma \[lehg\] because $\operatorname{rank}(V_{:,\mathcal J})=\operatorname{rank}(\Sigma)= \operatorname{rank}(U_{\mathcal I,:})=\rho$. Recall that $||\Sigma^{-1}||=||M^+||=1/\sigma_{\rho}(M)$. Write $t_{q,s,h}^2:=(q-s)sh^2+1$, allow any choice of $h>1$, say, $h=1.1$, and then recall that $||U_{\mathcal I,:}^{+}||\le t_{m,k,h}^a$, $||(V_{:,\mathcal J}^{*})^{+}||\le t_{n,l,h}^a$, and consequently $$||N||\le t_{m,\rho,h}^at_{n,\rho,h}^a/\sigma_{\rho}(M)$$ where $a=1$ if we apply the algorithms of [@GE96] at stage and $a=2$ if we apply those of [@P00]. [**Acknowledgements:**]{} Our work has been supported by NSF Grants CCF–1116736, CCF–1563942 and CCF–1733834 and PSC CUNY Award 69813 00 48. [hspace[0.5in]{}]{} A. Bj[ö]{}rk, [*Numerical Methods in Matrix Computations*]{}, Springer, New York, 2015. C. Boutsidis, D. Woodruff, Optimal CUR Matrix Decompositions, [*SIAM Journal on Computing*]{}, [**46, 2**]{}, 543–589, 2017, DOI:10.1137/140977898. Harald Cramér, [*Mathematical Methods of Statistics*]{}, Princeton University Press, 575 pages, 1999 (first edition 1946). J. Demmel, Y. Hida, W. Kahan, X. S. Li, S. Mukherjee, E. J. Riedy, Error bounds from extra-precise iterative refinement, [*ACM Trans. Math. Softw.*]{}, [**32, 2,**]{} 325–351, 2006. P. Drineas, M.W. Mahoney, S. Muthukrishnan, Relative-error CUR Matrix Decompositions, [*SIAM Journal on Matrix Analysis and Applications*]{}, [**30, 2**]{}, 844–881, 2008. A. C. Elliott, W. A. Woodward, [*Statistical Analysis Quick Reference Guidebook: With SPSS Examples*]{}, Sage, 2007. R. Freivalds, Fast probabilistic algorithms, [*Mathematical Foundations of Computer Science*]{}, 57–69, [*Lecture Notes in Comp. Sci.*]{}, [**74**]{}, Springer, Berlin, Heidelberg, 1979. M. Gu, S.C. Eisenstat, An Efficient Algorithm for Computing a Strong Rank Revealing QR Factorization, [*SIAM J. Sci. Comput.*]{}, [**17**]{}, 848–869, 1996. G. H. Golub, C. F. Van Loan, [*Matrix Computations*]{}, The Johns Hopkins University Press, Baltimore, Maryland, 2013 (fourth edition). S. Goreinov, I. Oseledets, D. Savostyanov, E. Tyrtyshnikov, N. 
Zamarashkin, How to Find a Good Submatrix, in [*Matrix Methods: Theory, Algorithms, Applications*]{} (dedicated to the Memory of Gene Golub, edited by V. Olshevsky and E. Tyrtyshnikov), pages 247–256, World Scientific Publishing, New Jersey, ISBN-13 978-981-283-601-4, ISBN-10-981-283-601-2, 2010. N. J. Higham, [*Accuracy and Stability in Numerical Analysis*]{}, SIAM, Philadelphia, 2002 (second edition). N. Halko, P. G. Martinsson, J. A. Tropp, Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions, [*SIAM Review*]{}, [**53, 2**]{}, 217–288, 2011. N. Kishore Kumar, J. Schneider, Literature Survey on Low Rank Approximation of Matrices, [*Linear and Multilinear Algebra,*]{} [**65 (11)**]{}, 2212–2244, 2017, and arXiv:1606.06511v1 \[math.NA\] 21 June 2016. Q. Luan, V. Y. Pan, CUR LRA at Sublinear Cost Based on Volume Maximization, In [*LNCS*]{} [**11989**]{}, [*Book: Mathematical Aspects of Computer and Information Sciences (MACIS 2019)*]{}, D. Salmanig et al (Eds.), Springer Nature Switzerland AG 2020, Chapter No: [**10**]{}, pages 1–17, Springer Nature Switzerland AG 2020 Chapter DOI:10.1007/978-3-030-43120-4\_10 Q. Luan, V. Y. Pan, J. Svadlenka, Low Rank Approximation Directed by Leverage Scores and Computed at Sublinear Cost, arXiv:1906.04929 (Submitted on 10 Jun 2019). M. W. Mahoney, Randomized Algorithms for Matrices and Data, [*Foundations and Trends in Machine Learning*]{}, NOW Publishers, [**3, 2**]{}, 2011. Preprint: arXiv:1104.5557 (2011) (Abridged version in: [*Advances in Machine Learning and Data Mining for Astronomy*]{}, edited by M. J. Way et al., pp. 647–672, 2012.) C.-T. Pan, On the Existence and Computation of Rank-Revealing LU Factorizations, [*Linear Algebra and its Applications*]{}, [**316**]{}, 199–222, 2000. V. Y. Pan, Low Rank Approximation of a Matrix at Sublinear Cost,\ arXiv:1907.10481, 21 July 2019. V. Y. Pan, Q. Luan, J. Svadlenka, L.Zhao, Primitive and Cynical Low Rank Approximation, Preprocessing and Extensions, arXiv 1611.01391 (November 2016). V. Y. Pan, Q. Luan, J. Svadlenka, L. Zhao, Superfast Accurate Low Rank Approximation, preprint, arXiv:1710.07946 (October 2017). V. Y. Pan, Q. Luan, J. Svadlenka, L. Zhao, CUR Low Rank Approximation at Sublinear Cost, arXiv:1906.04112 (Submitted on 10 Jun 2019). V. Y. Pan, Q. Luan, J. Svadlenka, L. Zhao, Sublinear Cost Low Rank Approximation via Subspace Sampling, In [*LNCS*]{} [**11989**]{}, [*Book: Mathematical Aspects of Computer and Information Sciences (MACIS 2019)*]{}, D. Salmanig et al (Eds.), Springer Nature Switzerland AG 2020, Chapter No: [**9**]{}, pages 1–16, Springer Nature Switzerland AG 2020 Chapter DOI:10.1007/978-3-030-43120-4\_9 and arXiv:1906.04327 (Submitted on 10 Jun 2019). G. W. Stewart, Error and Perturbation Bounds for Subspaces Associated with Certain Eigenvalue Problems, [*SIAM Review*]{}, [**15, 4**]{}, 727–764, 1973. G. W. Stewart, [*Matrix Algorithms, Vol. [**I**]{}: Basic Decompositions*]{}, SIAM, 1998. J. A. Tropp, A. Yurtsever, M. Udell, V. Cevher, Practical Sketching Algorithms for Low-rank Matrix Approximation, [*SIAM J. Matrix Anal.*]{}, [**38,  4**]{}, 1454–1485, 2017. [^1]: Flop is a floating point arithmetic operation. [^2]: Here and throughout we use such concepts as “low", “small", “nearby", “much fewer" etc. defined in context. 
[^3]: See http://www.math.sjsu.edu/singular/matrices and http://www2.imm.dtu.dk/$\sim$pch/Regutools For more details see Ch4 of http://www.imm.dtu.dk/$\sim$pcha/Regutools/RTv4manual.pdf [^4]: Here we denote nucleus by $N$ rather than $U$ in order to avoid conflict with the factor $U$ in SVD.
--- address: 'S. Baier, School of Mathematics, University of East Anglia, Norwich, NR4 7TJ, England' author: - 'S. Baier' title: Multiplicative inverses in short intervals --- Introduction and results ======================== T.D. Browning and A. Haynes [@BrHa] considered the following problem. Let $p$ be a prime, $J$ be an integer and $I_1^{(j)},I_2^{(j)}$ with $1\le j\le J$ be finite sequences of subintervals of $(0,p)$. Under which conditions is there a $j$ such that $$\label{congruence} xy\equiv 1 \bmod{p}, \quad (x,y)\in \left(I_1^{(j)}\times I_2^{(j)}\right)\cap \mathbb{Z}^2$$ has a solution? They proved the following. \[theorem1\] Let $H,K>0$ and let $I_1^{(j)},I_2^{(j)} \subseteq (0,p)$ be subintervals, for $1\le j\le J$, such that $$\left|I_1^{(j)}\right|=H \quad \mbox{and} \quad \left|I_2^{(j)}\right|=K$$ and $$\label{empty} I_1^{(j)}\cap I_1^{(k)}=\emptyset \quad \mbox{for all} \quad j\not=k.$$ Then there exists $j\in\{1,...,J\}$ for which (\[congruence\]) has a solution if $$\label{Jineq} J\gg \frac{p^3\log^4 p}{H^2K^2}.$$ The proof of this theorem in [@BrHa] relies on the following new mean value theorem for short Kloosterman sums by Browning and Haynes. \[theorem2\] If $I_1,...,I_J \subseteq (0,p)$ are disjoint subintervals, with $H/2\le |I_j|\le H$ for each $j$, then for any $l\in \left(\mathbb{Z}/p\mathbb{Z}\right)^{\ast}$, we have $$\sum\limits_{j=1}^J \left|\sum\limits_{n\in I_j} e\left(\frac{l\overline{n}}{p} \right)\right|^2 \le 2^{12}p\log^2 H.$$ In this note, we prove Theorem \[theorem1\], in a slightly generalized and refined form, by a different method which doesn’t use Theorem \[theorem2\] but instead Poisson summation and Weil’s estimate for Kloosterman sums. Moreover, we improve this result under certain additional conditions on the spacing of the intervals $I_i^{(j)}$. We note that we could add the assumption $$\label{Hass} H\ge \log p$$ to Theorem \[theorem1\] without weakening the result because $J\ll p/H$ under the conditions of this theorem, which contradicts (\[Jineq\]) if $H<\log p$ since $K\le p$. We want to assume (\[Hass\]) throughout the sequel. Moreover, we want to assume without loss of generality that the intervals $I_1^{(j)}$ and $I_2^{(j)}$ are closed and centered at integers $N_j$ and $M_j$, respectively. We prove the following. \[theorem3\] Assume that all conditions in Theorem \[theorem1\] are satisfied, except possibly (\[empty\]). Assume the integers $M_j$ are $X$-spaced modulo $p$, i.e. $$p\cdot \left|\left| \frac{M_j-M_k}{p} \right|\right| \ge X \quad \mbox{for all $j,k$ with } j\not=k,$$ for some $X\ge 1$, where $||z||$ denotes the distance of the real number $z$ to the nearest integer. Then there exists $j\in \{1,...,J\}$ for which (\[congruence\]) has a solution if $$\label{Jlowerbound} J\gg \frac{p^3\log^{3+\varepsilon} p}{HK^2\min\{H,X\}}.$$ Clearly, Theorem \[theorem3\] implies Theorem \[theorem1\]. If the intervals $I_1^{(j)}$ and $I_2^{(j)}$ are equispaced, respectively, we obtain an improvement of the result above, provided that the intervals $I_1^{(j)}$ are not spaced too far away. \[theorem4\] Assume that all conditions in Theorem \[theorem1\] are satisfied. Assume further that the integers $M_j$ and $N_j$ lie in arithmetic progression, respectively, i.e. $$M_j=M+jX \quad \mbox{and} \quad N_j=N+jY$$ for certain integers $M,N,X,Y$ and all $j\in \{1,...,J\}$.
Then there exists $j\in\{1,...,J\}$ for which has a solution if $$\label{has} X\ll \frac{HK}{p^{1/2}(\log p)^{1+\varepsilon}} \quad \mbox{and} \quad J\gg \frac{p^{3/2}\log^{2+\varepsilon} p}{HK}.$$ Basic approach ============== Set $$w(t):=\exp\left(-\pi t^2\right).$$ This has Fourier transform $$\label{Fourier} \hat{w}(t)=w(t)=\exp\left(-\pi t^2\right).$$ Set $$\label{6} x:=\frac{H}{(\log p)^{1/2+\varepsilon}}, \quad y:=\frac{K}{(\log p)^{1/2+\varepsilon}}$$ and $$T:=\sum\limits_{j=1}^J \mathop{\sum\limits_{|m-M_j|\le H/2} \sum\limits_{|n-N_j|\le K/2}}_{m\equiv \overline{n} \bmod{p}} w\left(\frac{m-M_j}{x}\right) w\left(\frac{n-N_j}{y}\right),$$ where here and in the sequel, we assume that $(n,p)=1$, and $\overline{n}$ denotes the multiplicative inverse of $n$ modulo $p$. Then clearly, has a solution if $T>0$. Now the general strategy is to extend the sums over $m$ and $n$ to all integers and use Poisson summation and Weil’s estimate for Kloosterman sums. We write $$\label{write} T=S-S_1-S_2,$$ where $$S:=\sum\limits_{j=1}^J \mathop{\sum\limits_{m} \sum\limits_{n}}_{m\equiv \overline{n} \bmod{p}} w\left(\frac{m-M_j}{x}\right) w\left(\frac{n-N_j}{y}\right),$$ $$S_1:=\sum\limits_{j=1}^J \mathop{\sum\limits_{|m-M_j|> H/2} \sum\limits_{n}}_{m\equiv \overline{n} \bmod{p}} w\left(\frac{m-M_j}{x}\right) w\left(\frac{n-N_j}{y}\right)$$ and $$S_2:=\sum\limits_{j=1}^J \mathop{\sum\limits_{|m-M_j|\le H/2} \sum\limits_{|n-N_j|>K/2}}_{m\equiv \overline{n} \bmod{p}} w\left(\frac{m-M_j}{x}\right) w\left(\frac{n-N_j}{y}\right).$$ From $J,H,K\le p$ ($J\le p$ following from $X\ge 1$), and , it is evident that $S_1$ and $S_2$ are negligible, i.e. $$\label{negligible} S_1,S_2\ll_A p^{-A} \quad \mbox{for any } A>0.$$ In the following section, we estimate the sum $S$. Application of Poisson summation ================================ We have $$\label{trans} \begin{split} S= & \sum\limits_{j=1}^J \mathop{\sum\limits_{m} \sum\limits_{n}}_{m\equiv \overline{n} \bmod{p}} w\left(\frac{m-M_j}{x}\right)w\left(\frac{n-N_j}{y}\right) \\ = & \frac{1}{p} \sum\limits_{k=-(p-1)/2}^{(p-1)/2} \sum\limits_{j=1}^J \sum\limits_{m} \sum\limits_{n} w\left(\frac{m-M_j}{x}\right) w\left(\frac{n-N_j}{y}\right) e\left(k\cdot \frac{m-\overline{n}}{p}\right) \\ = & \frac{1}{p} J \sum\limits_{m} w\left(\frac{m}{x}\right)\sum\limits_{n} w\left(\frac{n}{y}\right)+\\ & \frac{1}{p} \sum\limits_{\substack{k=-(p-1)/2\\ k\not=0}}^{(p-1)/2} \sum\limits_{j=1}^J \sum\limits_{m} w\left(\frac{m-M_j}{x}\right)e\left(k\cdot \frac{m}{p}\right)\sum\limits_{n} w\left(\frac{n-N_j}{y}\right)e\left(-k\cdot \frac{\overline{n}}{p}\right). \end{split}$$ Using Poisson summation, the terms in the last line can be transformed as follow. 
First, $$\label{poisson1} \frac{1}{p} J \sum\limits_{m} w\left(\frac{m}{x}\right)\sum\limits_{n} w\left(\frac{n}{y}\right)= \frac{Jxy}{p} \cdot \hat{w}(0)^2.$$ Second, $$\label{poisson2} \begin{split} & \sum\limits_{m} w\left(\frac{m-M_j}{x}\right)e\left(k\cdot \frac{m}{p}\right) = \sum\limits_{m} w\left(\frac{m}{x}\right) e\left(k\cdot \frac{m+M_j}{p}\right)\\ =& x\cdot e\left(k\cdot \frac{M_j}{p}\right) \cdot \sum\limits_{m\equiv k \bmod{p}} \hat{w}\left(\frac{mx}{p}\right)=x\cdot e\left(k\cdot \frac{M_j}{p}\right) \cdot F_k(x), \quad \mbox{say.} \end{split}$$ Third, $$\label{poitsson3} \begin{split} & \sum\limits_{n} w\left(\frac{n-N_j}{y}\right)e\left(-k\cdot \frac{\overline{n}}{p}\right) = \sum\limits_{c\bmod{p}} e\left(-k\cdot \frac{\overline{c}}{p}\right) \sum\limits_{n \equiv c+N_j\bmod{p}} w\left(\frac{n}{y}\right)\\ =& \frac{y}{p}\cdot \sum\limits_{c\bmod{p}} e\left(-k\cdot \frac{\overline{c}}{p}\right) \sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right)e\left(l\cdot \frac{c+N_j}{p}\right)\\ =& \frac{y}{p} \cdot \sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right) e\left(l\cdot \frac{N_j}{p}\right) S(l,-k;p), \end{split}$$ where $$S(l,-k;p)=\sum\limits_{c=1}^{p-1} e\left(\frac{lc-k\overline{c}}{p}\right)$$ is the Kloosterman sum. Putting the above together, we get $$\label{above} \begin{split} S= & \frac{Jxy}{p} \cdot \hat{w}(0)^2+\\ & \frac{xy}{p^2} \cdot \sum\limits_{\substack{k=-(p-1)/2\\ k\not=0}}^{(p-1)/2} \sum\limits_{l} S(l,-k;p) F_k(x) \hat{w}\left(\frac{ly}{p}\right) \sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right). \end{split}$$ We note that if $-(p-1)/2\le k\le (p-1)/2$, then $$\label{Fkx} \begin{split} F_k(x)= & \sum\limits_{m\equiv k \bmod{p}} \hat{w}\left(\frac{mx}{p}\right) = \sum\limits_{r\in \mathbb{Z}} \hat{w}\left(\left(\frac{k}{p}+r\right)x\right)\\ \ll & \exp\left(-\frac{|k|}{p}\cdot x\right) \cdot \sum\limits_{r=0}^{\infty} \exp(-rx) \ll \exp\left(-\frac{|k|}{p}\cdot x\right) \end{split}$$ since $x\ge 1$ if $\varepsilon\le 1/2$ by and . Proof of Theorem 3 ================== By Weil’s bound for Kloosterman sums, we have $$\label{Weil} S(l,-k;p)\ll p^{1/2} \quad \mbox{if } k\not\equiv 0 \bmod{p}.$$ Using the Cauchy-Schwarz inequality and , it follows that $$\label{aftercauchy} \begin{split} & \sum\limits_{\substack{k=-(p-1)/2\\ k\not=0}}^{(p-1)/2} \sum\limits_{l} S(l,-k;p) F_k(x) \hat{w}\left(\frac{ly}{p}\right) \sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)\\ \ll & p^{1/2}\left(\sum\limits_{\substack{k=-(p-1)/2\\ k\not=0}}^{(p-1)/2} \sum\limits_{l} F_k(x) \hat{w}\left(\frac{ly}{p}\right)\right)^{1/2} \\ & \left(\sum\limits_{k=-(p-1)/2}^{(p-1)/2} \sum\limits_{l} F_k(x) \hat{w}\left(\frac{ly}{p}\right)\left|\sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)\right|^2\right)^{1/2}\\ \ll & \frac{p^{3/2}}{(xy)^{1/2}} \left(\sum\limits_{k=-(p-1)/2}^{(p-1)/2} \sum\limits_{l} F_k(x) \hat{w}\left(\frac{ly}{p}\right)\left|\sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)\right|^2\right)^{1/2}. 
\end{split}$$ Expanding the square, we get $$\begin{split} & \sum\limits_{k=-(p-1)/2}^{(p-1)/2} \sum\limits_{l} F_k(x) \hat{w}\left(\frac{ly}{p}\right)\left|\sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)\right|^2\\ =& \sum\limits_{j_1,j_2=1}^J \left(\sum\limits_{k=-(p-1)/2}^{(p-1)/2} F_k(x)e\left(k\cdot \frac{M_{j_1}-M_{j_2}}{p}\right)\right) \left(\sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right)e\left(l\cdot \frac{N_{j_1}-N_{j_2}}{p}\right)\right).\end{split}$$ Using , we have $$\sum\limits_{k=-(p-1)/2}^{(p-1)/2} F_k(x)e\left(k\cdot \frac{M_{j_1}-M_{j_2}}{p}\right) \ll \sum\limits_{k=-(p-1)/2}^{(p-1)/2} F_k(x) \ll \frac{p}{x}.$$ Similarly, $$\sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right)e\left(l\cdot \frac{N_{j_1}-N_{j_2}}{p}\right) \ll \sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right) \ll \frac{p}{y}.$$ Moreover, removing the weight function $F_k(x)$ using partial summation and using the familiar estimate for geometric sums, we have $$\sum\limits_{k=-(p-1)/2}^{(p-1)/2} F_k(x)e\left(k\cdot \frac{M_{j_1}-M_{j_2}}{p}\right) \ll \left|\left| \frac{M_{j_1}-M_{j_2}}{p}\right|\right|^{-1}.$$ It follows that $$\label{ale} \begin{split} & \sum\limits_{k=-(p-1)/2}^{(p-1)/2} \sum\limits_{l} F_k(x) \hat{w}\left(\frac{ly}{p}\right)\left|\sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)\right|^2\\ \ll & \frac{p}{y}\sum\limits_{j_1,j_2=1}^J\min\left\{\left|\left| \frac{M_{j_1}-M_{j_2}}{p}\right|\right|^{-1}, \frac{p}{x}\right\} . \end{split}$$ Using the fact that the $M_j$’s are $X$-spaced modulo $p$, we obtain $$\label{last} \begin{split} \ll & \sum\limits_{j_1,j_2=1}^J\min\left\{\left|\left| \frac{M_{j_1}-M_{j_2}}{p}\right|\right|^{-1}, \frac{p}{x}\right\} \\ \ll & J \sum\limits_{j=0}^{J-1} \min\left\{\frac{p}{jX}, \frac{p}{x}\right\}\\ \ll & Jp\left(\frac{1}{x}+\frac{\log 2J}{X}\right). \end{split}$$ Combining , , , , and , we arrive at $$\begin{split} T = \frac{Jxy}{p} \cdot \hat{w}(0)^2+O\left((\log 2J)^{1/2}(Jxp)^{1/2} \left(\frac{1}{x}+\frac{1}{X}\right)^{1/2}\right). \end{split}$$ For the right-hand side to be greater $0$ (i.e. error term $<$ main term), it suffices that $$J\gg \frac{p^3\log p}{xy^2}\left(\frac{1}{x}+\frac{1}{X}\right),$$ which holds if is satisfied. This implies Theorem \[theorem3\]. Proof of Theorem 4 ================== Under the conditions of Theorem 4, we have $$\label{geom} \begin{split} \sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)=e\left(\frac{kM+lN}{p}\right) \sum\limits_{j=1}^J e\left(\frac{j(kX+lY)}{p}\right)\\ \ll \min\left\{\left|\left|\frac{kX+lY}{p}\right|\right|^{-1},J\right\}. \end{split}$$ Using and , we obtain $$\label{follows} \begin{split} & \sum\limits_{k=-(p-1)/2}^{(p-1)/2} \sum\limits_{l} S(l,-k;p) F_k(x) \hat{w}\left(\frac{ly}{p}\right) \sum\limits_{j=1}^J e\left(\frac{kM_j+lN_j}{p}\right)\\ \ll & p^{1/2} \sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right) \sum\limits_{k=-(p-1)/2}^{(p-1)/2}F_k(x) \min\left\{\left|\left|\frac{kX+lY}{p}\right|\right|^{-1},J\right\}. \end{split}$$ We estimate the inner-most sum over $k$ by $$\begin{split} \sum\limits_{k=-(p-1)/2}^{(p-1)/2}F_k(x)\min\left\{\left|\left|\frac{kX+lY}{p}\right|\right|^{-1},J\right\} \ll & \frac{p/x}{p/X}\cdot \left(J+\frac{p}{X}\cdot\log p\right)\\ = & \frac{X}{x}\cdot J+ \frac{p}{x}\cdot \log p, \end{split}$$ where we use and $X\ge x$ (which follows from ). 
Hence, we get $$\label{hence} \begin{split} & p^{1/2} \sum\limits_{l} \hat{w}\left(\frac{ly}{p}\right) \sum\limits_{k=-(p-1)/2}^{(p-1)/2}F_k(x) \min\left\{\left|\left|\frac{kX+lY}{p}\right|\right|^{-1},J\right\}\\ \ll & \frac{p^{3/2}}{y} \cdot \left(\frac{X}{x}\cdot J+ \frac{p}{x}\cdot \log p \right). \end{split}$$ Combining , , , and , we get $$T=\frac{Jxy}{p} \cdot \hat{w}(0)^2+O\left(\frac{X}{p^{1/2}}\cdot J+p^{1/2}\log p \right).$$ For the right-hand side to be greater 0, it suffices that $$X\ll \frac{xy}{p^{1/2}} \quad \mbox{and} \quad J\gg \frac{p^{3/2}\log p}{xy},$$ which holds if is satisfied. This implies Theorem \[theorem4\]. [3]{} T. Browning, A. Haynes, [*Incomplete Kloosterman sums and multiplicative inverses in short intervals*]{}, arXiv:1204.6374v1.
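As a purely numerical aside, the counting problem behind these theorems is easy to explore by brute force for a small prime; the following Python sketch, with arbitrarily chosen $p$, $H$ and $K$, counts the solutions of $xy\equiv 1 \bmod{p}$ with $x\in(0,H]$ and $y\in(0,K]$ and compares the count with the heuristic expectation $HK/p$.

```python
# Brute-force count of solutions of x*y = 1 (mod p) with x in (0, H], y in (0, K];
# p, H and K are arbitrary illustrative choices.
p, H, K = 10007, 200, 200
hits = sum(1 for x in range(1, H + 1) if 1 <= pow(x, -1, p) <= K)  # pow(x, -1, p): inverse mod p (Python >= 3.8)
print(hits, "solutions; heuristic expectation H*K/p =", H * K / p)
```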
--- abstract: 'Finite temperature SU(3) gauge theory is studied on anisotropic lattices using the standard plaquette gauge action. The equation of state is calculated on $16^{3} \times 8$, $20^{3} \times 10$ and $24^{3} \times 12$ lattices with the anisotropy $\xi \equiv a_s / a_t = 2$, where $a_s$ and $a_t$ are the spatial and temporal lattice spacings. Unlike the case of the isotropic lattice on which $N_t=4$ data deviate significantly from the leading scaling behavior, the pressure and energy density on an anisotropic lattice are found to satisfy well the leading $1/N_t^2$ scaling from our coarsest lattice, $N_t/\xi=4$. With three data points at $N_t/\xi=4$, 5 and 6, we perform a well controlled continuum extrapolation of the equation of state. Our results in the continuum limit agree with a previous result from isotropic lattices using the same action, but have smaller and more reliable errors.' address: | $^{\rm a}$Institute of Physics, University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan\ $^{\rm b}$Center for Computational Physics, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan\ $^{\rm c}$Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582, Japan\ $^{\rm d}$High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan author: - 'CP-PACS Collaboration : Y. Namekawa,$^{\rm a}$ S. Aoki,$^{\rm a}$ R. Burkhalter,$^{\rm a,b}$ S. Ejiri,$^{\rm b}$ M. Fukugita,$^{\rm c}$ S. Hashimoto,$^{\rm d}$ N. Ishizuka,$^{\rm b}$ Y. Iwasaki,$^{\rm a,b}$ K. Kanaya,$^{\rm a}$ T. Kaneko,$^{\rm d}$ Y. Kuramashi,$^{\rm d}$ V. Lesk,$^{\rm b}$ M. Okamoto,$^{\rm b}$ M. Okawa,$^{\rm d}$ Y. Taniguchi,$^{\rm a}$ A. Ukawa$^{\rm b}$ and T. Yoshié$^{\rm b}$' title: | \ \ \ Thermodynamics of SU(3) gauge theory on anisotropic lattices --- Introduction {#sec:intro} ============ Study of lattice QCD at finite temperatures is an important step towards clarification of the dynamics of the quark gluon plasma which is believed to have formed in the early Universe and is expected to be created in high energy heavy ion collisions [@ejiri]. In order to extract predictions for the real world from results obtained on finite lattices, we have to extrapolate lattice data to the continuum limit of vanishing lattice spacings. Because of the large computational demands for full QCD simulations, continuum extrapolations of thermodynamic quantities have so far been attempted only in SU(3) gauge theory, [*i.e.*]{}, in the quenched approximation of QCD, where the influence of dynamical quarks is neglected. Two studies using the standard plaquette gauge action [@boyd] and a renormalization-group (RG) improved gauge action [@okamoto] have found the pressure and energy density consistent with each other in the continuum limit. In full QCD with two flavors of dynamical quarks, thermodynamic quantities on coarse lattices have been found to show large lattice spacing dependence [@milc; @bielefeld; @cppacs_ft]. For a reliable extrapolation to the continuum limit, data on finer lattices are required. With conventional isotropic lattices, this means an increase of the spatial lattice size to keep the physical volume close to the thermodynamic limit. Full QCD simulations on large lattices are still difficult with the current computer power. A more efficient method of calculation is desirable. Even in the quenched case, we note that continuum extrapolations of equation of state have been made using only two lattice spacings [@boyd; @okamoto]. 
In order to reliably estimate systematic errors from the extrapolations, more data points are needed. Therefore, an efficient method is called for also in quenched QCD. Recently, anisotropic lattices have been employed to study transport coefficients and temporal correlation functions in finite temperature QCD [@sakai; @taro; @umeda]. In these studies, anisotropy was introduced to obtain more data points for temporal correlation functions. In this paper, we show that anisotropic lattices provide also an efficient calculation method for thermodynamic quantities. The idea is as follows. Inspecting the free energy density of SU(3) gauge theory in the high temperature Stephan-Boltzmann limit, the leading discretization error from the temporal direction is found to be much larger than that from each of the spatial directions. Hence, choosing $\xi=a_s/a_t$ larger than one, where $a_s$ and $a_t$ are the spatial and temporal lattice spacings, cutoff errors in thermodynamic quantities will be efficiently reduced without much increase in the computational cost. From a study of free energy density in the high temperature limit, we find that $\xi=2$ is an optimal choice for SU(3) gauge theory. This improvement also makes it computationally easier to accumulate data for more values of temporal lattice sizes for the continuum extrapolation. As a first test of the method, we study the equation of state (EOS) in SU(3) gauge theory. On isotropic lattices, discretization errors in the EOS for the plaquette action are quite large at the temporal lattice size $N_t=4$. The data at this value of $N_t$ deviate significantly from the leading $1/N_t^2$ scaling behavior, $ \left. F(T) \right|_{N_t} = \left. F(T)\right|_{\rm continuum} + c_{F}/N_t^{2} $, where $F$ is a thermodynamic quantity at a fixed temperature $T$. So far, continuum extrapolations of the EOS have been made using results at $N_t=6$ and 8. On anisotropic lattices with $\xi=2$, we find that the discretization errors in the pressure and energy density are much reduced relative to those from isotropic lattices with the same spatial lattice spacing. Furthermore, we find that the EOS at $N_t/\xi=4$, 5 and 6 follow the leading $1/N_t^2$ scaling behavior remarkably well. Therefore, a continuum extrapolation can be reliably carried out. Since the total computational cost is still lower than that for an $N_t=8$ isotropic simulation, we can achieve a higher statistics as well, resulting in smaller final errors. In Sec. \[sec:highT\_limit\], we study the high temperature limit of SU(3) gauge theory on anisotropic lattices to see how $\xi$ appears in the leading discretization error for the EOS. From this study, we find that $\xi=2$ is an optimum choice for our purpose. We then perform a series of simulations on $\xi=2$ anisotropic lattices. Our lattice action and simulation parameters are described in Sec. \[sec:simulation\]. Sec. \[sec:scale\] is devoted to a calculation of the lattice scale through the string tension. The critical temperature is determined in Sec. \[sec:Tc\]. Our main results are presented in Secs. \[sec:pressure\] and \[sec:energy\], where the pressure and energy density are calculated and their continuum extrapolations are carried out. A brief summary is given in Sec. \[sec:summary\]. High temperature limit {#sec:highT_limit} ====================== In the high temperature limit, the gauge coupling vanishes due to asymptotic freedom, and SU(3) gauge theory turns into a free bosonic gas. 
In the integral method [@engels4] which we apply in this study, the pressure $p$ is related to the free energy density $f$ by $p=-f$ for large homogeneous systems. Therefore, in the high temperature limit, the energy density $\epsilon$ is given by $ \epsilon = 3p = -3f. $ The value of $f$ in the high temperature limit has been calculated in [@engels1; @elze]. Normalizing $\epsilon$ by the Stephan-Boltzmann value in the continuum limit, we find $$\frac{\epsilon}{\epsilon_{SB}} = 1 + \frac{5 + 3 \xi^{2}}{21} \left( \frac{\pi}{N_t} \right)^{2} + \frac{91 + 210 \xi^{2} + 99 \xi^{4}}{1680} \left( \frac{\pi}{N_t} \right)^{4} + O\left( \left( \frac{\pi}{N_t} \right)^{6} \right) \label{anisotropic-integral}$$ for spatially large lattices. Substituting $\xi = 1$ in Eq. (\[anisotropic-integral\]), we recover the previous results for isotropic lattices [@scheideler]. When we alternatively adopt the derivative method (operator method) [@engels1] to define the energy density, we obtain $$\frac{\epsilon}{\epsilon_{SB}} = 1 + \frac{5(1 + \xi^2)}{21} \left( \frac{\pi}{N_t} \right)^{2} + \frac{13 + 50 \xi^{2} + 33 \xi^{4}}{240} \left( \frac{\pi}{N_t} \right)^{4} + O\left( \left( \frac{\pi}{N_t} \right)^{6} \right). \label{anisotropic-derivative}$$ In both formulae, the leading discretization error is proportional to $1/N_t^2$. In the leading $1/N_t^2$ term of Eq. (\[anisotropic-integral\]) (or Eq. (\[anisotropic-derivative\])), the term proportional to $\xi^2$ represents the discretization error from finite lattice spacings $a_s$ in the three spatial directions. We find that the temporal cutoff $a_t$ leads to $5/8$ (or 1/2) of the leading discretization error at $\xi=1$, while the spatial cutoff $a_s$ contributes only 1/8 (or 1/6) from each of the three spatial directions. Since a reduction of the lattice spacing in each direction separately causes an increase of the computational cost by a similar magnitude, a reduction of $a_t$ is much more efficient than that of $a_s$ in suppressing lattice artifacts in thermodynamic quantities. Making the anisotropy $\xi = a_s/a_t$ too large is, however, again inefficient because the spatial discretization errors remain even in the limit of $\xi=\infty$. A rough estimate for the optimum value of $\xi$ is given by equating the discretization errors from spatial and temporal directions, $\xi = \sqrt{5} \approx 2.24$ from Eq. (\[anisotropic-integral\]), and $\xi = \sqrt{3} \approx 1.73$ from Eq. (\[anisotropic-derivative\]). More elaborate estimations considering the balance between the computational cost as a function of the lattice size and the magnitude of discretization errors including higher orders of $1/N_t$ lead to similar values of $\xi$. Based on these considerations, we adopt $\xi = 2$ for simulations of SU(3) gauge theory in the present work. An even number for $\xi$ is attractive also for the vectorization/parallelization of the simulation code which is based on an even-odd algorithm, since we can study the case of odd $N_t/\xi$ without modifying the program. 
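The trade-off behind this choice is easy to tabulate from Eq. (\[anisotropic-integral\]); the short script below (a plain numerical illustration, with the $O((\pi/N_t)^6)$ terms dropped) evaluates the truncated expansion of $\epsilon/\epsilon_{SB}$ for the integral method at a fixed spatial lattice spacing and temperature, comparing $\xi=1$ with $\xi=2$.

```python
import math

def eps_ratio_integral(Nt, xi):
    # Truncated expansion of eps/eps_SB, Eq. (anisotropic-integral), O((pi/Nt)^6) dropped.
    x = (math.pi / Nt) ** 2
    return 1.0 + (5 + 3 * xi**2) / 21.0 * x + (91 + 210 * xi**2 + 99 * xi**4) / 1680.0 * x**2

# Same spatial cutoff a_s and temperature: xi=1 at Nt = 4, 6, 8 versus xi=2 at Nt = 8, 12, 16.
for xi, Nt in [(1, 4), (1, 6), (1, 8), (2, 8), (2, 12), (2, 16)]:
    print(f"xi={xi}  Nt={Nt}  Nt/xi={Nt // xi}  eps/eps_SB ~ {eps_ratio_integral(Nt, xi):.4f}")
```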
Details of simulations {#sec:simulation} ====================== Action ------ We employ the plaquette gauge action for SU(3) gauge theory given by $$S_{G}[U] = \beta \left( \frac{1}{\xi_0} Q_s + \xi_0 Q_t \right), \label{lat-gauge-aniso}$$ where $\xi_0$ is the bare anisotropy, $\beta=6/g_0^2$ with $g_0$ the bare gauge coupling constant, and $$Q_s = \sum_{n,(ij)} \left(1 - P_{ij}(n)\right),\;\; Q_t = \sum_{n,i} \left(1 - P_{i4}(n)\right),$$ with $P_{\mu\nu}(n) = \frac{1}{3} {\mbox{Re}\ }{\mbox{Tr}\ }U_{\mu\nu}(n)$ the plaquette in the $(\mu,\nu)$ plane at site $n$. Anisotropy is introduced by choosing $\xi_0 \neq 1$. Due to quantum fluctuations, the actual anisotropy $\xi \equiv a_s / a_t$ deviates from the bare value $\xi_0$. We define the renormalization factor $\eta(\beta,\xi)$ for $\xi$ by $$\eta(\beta,\xi) = \frac{\xi}{\xi_{0}(\beta,\xi)}.$$ The values of $\eta(\beta,\xi)$ can be determined non-perturbatively by matching Wilson loops in temporal and spatial directions on anisotropic lattices [@scheideler; @burgers; @fujisaki; @klassen]. For our simulation, we calculate $\xi_0(\beta,\xi=2)$ using $\eta(\beta,\xi)$ obtained by Klassen for the range $1 \leq \xi \leq 6$ and $5.5 \leq \beta \leq \infty$ [@klassen]: $$\eta(\beta,\xi) = 1 + \left( 1 - \frac{1}{\xi} \right) \frac{\hat{\eta}_{1}(\xi)}{6} \, \frac{1+a_{1}g_{0}^{2}}{1+a_{0}g_{0}^{2}} g_{0}^{2}, \label{renormalize_anisotropy}$$ where $a_{0}= -0.77810$, $a_{1} = -0.55055$ and $$\hat{\eta}_{1}(\xi) = \frac{1.002503 \xi^{3} + 0.39100 \xi^{2} + 1.47130 \xi - 0.19231} {\xi^{3} + 0.26287 \xi^{2} + 1.59008 \xi -0.18224}.$$ Simulation parameters --------------------- The main runs of our simulations are carried out on $\xi = 2$ anisotropic lattices with size $N_s^3\times N_t = 16^{3} \times 8$, $20^{3} \times 10$ and $24^{3} \times 12$. For $N_t=8$, we make additional runs on $12^3\times8$ and $24^3\times8$ lattices to examine finite size effects. The zero-temperature runs are made on $N_s^3\times \xi N_s$ lattices with $\xi=2$. The simulation parameters of these runs which cover the range $T/T_c \sim 0.9$–5.0 are listed in Table \[tab:simulation\_parameters\]. To determine precise values for the critical coupling, longer runs around the critical points are made at the parameters compiled in Table \[tab:simulation\_parameters-critical\_coupling\]. For the main runs, the aspect ratio $L_sT = (N_s a_s)/(N_t a_t)$ is fixed to 4, where $L_s = N_s a_s$ is the spatial lattice size in physical units. This choice is based on a study of finite spatial volume effects presented in Sec. \[sec:pressure\], where it is shown that, for the precision and the range of $T/T_c$ we study, finite spatial volume effects in the EOS are sufficiently small with $L_s T \geq 4$. Gauge configurations are generated by a 5-hit pseudo heat bath update followed by four over-relaxation sweeps, which we call an iteration. As discussed in Sec. \[sec:pressure\], the total number of iterations should be approximately proportional to $N_t^{6}$ to keep an accuracy for the EOS. After thermalization, we perform 20,000 to 100,000 iterations on finite-temperature lattices and 5,000 to 25,000 iterations on zero-temperature lattices, as compiled in Table \[tab:simulation\_parameters\]. At every iteration, we measure the spatial and temporal plaquettes, $P_{ss}$ and $P_{st}$. Near the critical temperature, we also measure the Polyakov loop. The errors are estimated by a jack-knife method. 
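A binned jack-knife of the kind used here can be sketched generically as follows (an illustration only; the observable, file name and bin sizes are placeholders, and the actual bin sizes are those listed in Table \[tab:simulation\_parameters\]).

```python
import numpy as np

def binned_jackknife(samples, bin_size, estimator=np.mean):
    """Jack-knife estimate and error of `estimator` over a measurement time series,
    with consecutive measurements grouped into bins to tame autocorrelations."""
    samples = np.asarray(samples)
    nbins = len(samples) // bin_size
    bins = samples[:nbins * bin_size].reshape(nbins, bin_size)
    # Estimator on each leave-one-bin-out subsample.
    theta = np.array([estimator(np.delete(bins, i, axis=0).ravel()) for i in range(nbins)])
    center = estimator(bins.ravel())
    error = np.sqrt((nbins - 1) * np.mean((theta - center) ** 2))
    return center, error

# e.g. apply to a plaquette time series for several bin sizes to check stability:
# P_ss = np.loadtxt("plaquette_ss.dat")   # hypothetical file name
# for b in (10, 100, 1000):
#     print(b, binned_jackknife(P_ss, b))
```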
The bin size for the jack-knife errors, listed in Table \[tab:simulation\_parameters\], is determined from a study of bin size dependence as illustrated in Fig. \[fig:jackknife\]. The results for the plaquettes are summarized in Tables \[tab:MC\_results-16x8\_32\]–\[tab:MC\_results-24x12\_48\]. Scale {#sec:scale} ===== Static quark potential {#subsec:pot} ---------------------- We determine the physical scale of our lattices from the string tension, which is calculated from the static quark-antiquark potential at zero temperature. To calculate the static quark potential, we perform additional zero-temperature simulations listed in Table \[tab:pot\_simulations\]. The static quark potential $V(\hat{R})$ is defined through $$W(\hat{R},\hat{T}) = C(\hat{R}) e^{-V(\hat{R}) \hat{T} / \xi}, \label{eq:potential}$$ where $W(\hat{R},\hat{T})$ is the Wilson loop in a spatial-temporal plane with the size $\hat{R} a_s \times \hat{T} a_t$. We measure Wilson loops at every 25 iterations after thermalization. In order to enhance the ground state signal in (\[eq:potential\]), we smear the spatial links of the Wilson loop [@bali; @bali2]. Details of the smearing method are the same as in Ref.[@cppacs_pot]. We determine the optimum smearing step $N_{opt}$ which maximizes the overlap function $C(\hat{R})$ under the condition $C(\hat{R}) \le 1$. Following Ref.[@bali2], we study a local effective potential defined by $$V_{\it eff}(\hat{R},\hat{T}) = \xi \log \left( \frac{W(\hat{R},\hat{T})}{W(\hat{R},\hat{T}+1)} \right), \label{eq:potential2}$$ which tends to $V(\hat{R})$ at sufficiently large $\hat{T}$. The reason to adopt Eq. (\[eq:potential2\]) instead of the fit result from Eq. (\[eq:potential\]) is to perform a correlated error analysis directly for the potential parameters. The optimum value of $\hat{T}$, listed in Table \[tab:string\_beta\], is obtained by inspecting the plateau of $V_{\it eff}(\hat{R},\hat{T})$ at each $\beta$. We perform a correlated fit of $V(\hat{R}) = V_{\it eff}(\hat{R},\hat{T}_{opt})$ with the ansatz [@michael], $$V(\hat{R}) = V_{0} + \sigma \hat{R} - e \frac{1}{\hat{R}} + l \left( \frac{1}{\hat{R}} - \left[\frac{1}{\hat{R}}\right] \right). \label{pot-fit}$$ Here, $\left[\frac{1}{\hat{R}}\right]$ is the lattice Coulomb term from one gluon exchange $$\left[\frac{1}{\hat{R}}\right] = 4\pi \int_{-\pi}^{\pi} \frac{d^{3}\mathbf{k}}{(2\pi)^{3}} \frac{\cos (\mathbf{k} \cdot \hat{R})} {4 \sum_{i=1}^{3} \sin^{2} (k_{i}a_s/2)},$$ which is introduced to approximately remove terms violating rotational invariance at short distances. The coefficient $l$ is treated as a free parameter. The fit range $[\hat{R}_{min},\hat{R}_{max}]$ for $\hat{R}$ is determined by consulting the stability of the fit. Our choices for $\hat{R}_{min}$ are given in Table \[tab:string\_beta\]. We confirm that the fits and the values of the string tension are stable under a variation of $\hat{R}_{min}$. The string tension is almost insensitive to a wide variation of $\hat{R}_{max}$. Hence $\hat{R}_{max}$ is chosen as large as possible so far as the fit is stable and the signal is not lost in the noise. With this choice for the fit range, we obtain fit curves which reproduce the data well. Our results for the potential parameters are summarized in Table \[tab:string\_beta\]. The error includes the jack-knife error with bin size one (25 iterations) and the systematic error from the choice of $\hat{R}_{min}$ estimated through a difference under the change of $\hat{R}_{min}$ by one. 
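The correlated fit described above requires the full covariance matrix of $V_{\it eff}$; as a simplified, uncorrelated illustration of the ansatz (\[pot-fit\]), one might proceed as follows (a sketch with synthetic placeholder data generated from made-up parameters; the lattice Coulomb term $\left[1/\hat{R}\right]$ is approximated by a discrete momentum sum on a finite $L^3$ lattice).

```python
import numpy as np
from scipy.optimize import curve_fit

def lattice_coulomb(R_vals, L=32):
    # Discrete-momentum approximation to [1/R] (one-gluon exchange), R along a lattice axis.
    k = 2.0 * np.pi * np.arange(L) / L
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    denom = 4.0 * (np.sin(KX / 2) ** 2 + np.sin(KY / 2) ** 2 + np.sin(KZ / 2) ** 2)
    denom[0, 0, 0] = np.inf                       # drop the zero mode
    return np.array([4.0 * np.pi * np.sum(np.cos(KX * R) / denom) / L**3 for R in R_vals])

R = np.arange(2.0, 11.0)                          # illustrative fit range [R_min, R_max]
lat = lattice_coulomb(R)

def V_ansatz(R, V0, sigma, e, l):
    # V(R) = V0 + sigma*R - e/R + l*(1/R - [1/R]), all in lattice units of a_s.
    return V0 + sigma * R - e / R + l * (1.0 / R - lat)

# synthetic placeholder data from made-up parameters, with small Gaussian noise
rng = np.random.default_rng(0)
dV = np.full_like(R, 0.005)
V = V_ansatz(R, 0.5, 0.05, 0.3, 0.02) + rng.normal(0.0, dV)

popt, pcov = curve_fit(V_ansatz, R, V, sigma=dV, absolute_sigma=True, p0=[0.4, 0.04, 0.25, 0.0])
print("fitted (V0, sigma, e, l):", popt, " a_s*sqrt(sigma) =", np.sqrt(popt[1]))
```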
We confirm that increasing the bin size to two gives consistent results on $16^3\times 32$ lattices, while, on $24^3\times 48$ lattices, correlated fits with bin size two become unstable due to insufficient number of jackknife ensembles. String tension {#subsec:sigma} -------------- We interpolate the string tension data using an ansatz proposed by Allton [@allton], $$a_s \sqrt{\sigma} = f(\beta) \, \frac{1 + c_{2}\hat{a}(\beta)^{2} + c_{4}\hat{a}(\beta)^{4}} {c_{0}}, \label{allton_fitting_eq}$$ where $f(\beta)$ is the two-loop scaling function of SU(3) gauge theory, $$\begin{aligned} f(\beta) &=& \left(\frac{6b_{0}}{\beta}\right)^{- \frac{b_{1}}{2 b_{0}^{2}}} \exp[-\frac{\beta}{12 b_0}], \nonumber \\ b_{0} &=& \frac{11}{16 \pi^{2}}, \;\; b_{1} = \frac{102}{(16 \pi^{2})^{2}}, \label{allton_fitting_eq2}\end{aligned}$$ and $\hat{a}(\beta) \equiv f(\beta)/f(\beta=6.0)$. From Table \[tab:string\_beta\], we find that the values for $a_s\sqrt{\sigma}$ are insensitive to the spatial lattice volume to the present precision. Using data marked by star ($*$) in Table \[tab:string\_beta\], we obtain the best fit at $$c_{0} = 0.01171(41), \;\; c_{2} = 0.285(79), \;\; c_{4} = 0.033(30),$$ with $\chi^{2}/N_{DF} = 1.77$. The string tension data and the resulting fit curve are shown in Fig. \[fig:allton-fit\], together with those from isotropic lattices [@edwards]. Critical temperature {#sec:Tc} ==================== We define the critical gauge coupling $\beta_{c}(N_t,N_s)$ from the location of the peak of the susceptibility $\chi_{rot}$ for a Z(3)-rotated Polyakov loop. The simulation parameters for the study of $\beta_{c}$ are compiled in Table \[tab:simulation\_parameters-critical\_coupling\]. The $\beta$-dependence of $\chi_{rot}$ is calculated using the spectral density method [@swendsen]. The results for $\beta_c$ are compiled in Table \[tab:beta\_c\_lat\]. To estimate the critical temperature, we have to extrapolate $\beta_{c}(N_t,N_s)$ to the thermodynamic limit and to the continuum limit. We perform the extrapolation to the thermodynamic limit using a finite-size scaling ansatz, $$\beta_{c}(N_t,N_s) = \beta_{c}(N_t,\infty) - h \left( \frac{N_t}{\xi N_s}\right)^{3}.$$ for first order phase transitions. From the data for $\beta_{c}$ on anisotropic $12^3\times8$, $16^3\times8$ and $24^3\times8$ lattices with $\xi=2$, we find $h = 0.031(16)$ for $N_t/\xi=4$, as shown in Fig. \[fig:finite\_size\_scaling\]. In a previous study on isotropic lattices, $h$ was found to be approximately independent of $N_t$ for $N_t=4$ and 6 [@qcdpax]. We extract $\beta_{c}(N_t,\infty)$ adopting $h = 0.031(16)$ for all $N_t$. The critical temperature in units of the string tension is given by $$\frac{T_c}{\sqrt{\sigma}} = \frac{\xi} {N_t \left(a_s \sqrt{\sigma}\right)\left(\beta_c(N_t,\infty)\right)}$$ using the fit result for Eq. (\[allton\_fitting\_eq\]). The values of $T_{c} / \sqrt{\sigma}$ are summarized in Fig. \[fig:Tc\] and Table \[tab:beta\_c\_lat\]. The dominant part of the errors in $T_{c} / \sqrt{\sigma}$ is from the Allton fit for the string tension. Finally we extrapolate the results to the continuum limit assuming the leading $1/N_t^2$ scaling ansatz, $$\left. F \right|_{N_t} = \left. F\right|_{\rm continuum} + \frac{c_F}{N_t^2} \label{continuum-extrapolation}$$ with $F = T_{c} / \sqrt{\sigma}$. The extrapolation is shown in Fig. \[fig:Tc\]. In the continuum limit, we obtain $$\frac{T_{c}}{\sqrt{\sigma}} = 0.635(10) \label{eq:TcFinal}$$ from the $\xi=2$ plaquette action. In Fig. 
\[fig:Tc\], we also plot the results obtained on isotropic lattices using the plaquette action [@beinlich2] and the RG-improved action [@tsukuba; @okamoto]. Our value of $T_c/\sqrt{\sigma}$ in the continuum limit is consistent with these results within the error of about 2%. A more precise comparison would require the generation and analyses of potential data in a completely parallel manner, because, as discussed in [@okamoto], numerical values of $T_c/\sqrt{\sigma}$ at a few percent level sensitively depend on the method used to determine the string tension. We leave this issue for future studies. Pressure {#sec:pressure} ======== Integral method --------------- We use the integral method to calculate the pressure [@engels4]. This method is based on the relation $p = -f \equiv (T/V) \log Z(T,V)$ satisfied for a large homogeneous system, where $V=L_s^3$ is the spatial volume of the system in physical units and $Z$ is the partition function. Rewriting $\log Z = \int d\beta \frac{1}{Z} \frac{\partial Z}{\partial\beta}$, the pressure is given by $$\left. \frac{p}{T^{4}} \right|^{\beta}_{\beta_{0}} = \int_{\beta_{0}}^{\beta} d \beta^{\prime} \Delta S(\beta^{\prime}), \label{pressure}$$ with $$\Delta S (\beta) \equiv \xi \left(\frac{N_t}{\xi}\right)^{4} \frac{1}{N_s^{3}N_t} \left. \frac{\partial \log Z}{\partial \beta} \right|_{\xi}. \label{delta_S}$$ For our anisotropic gauge action (\[lat-gauge-aniso\]), the derivative of $\log Z$ is given by $$-\frac{\partial \log Z}{\partial \beta} = {\left\langle}\frac{S_{G}}{\beta} {\right\rangle}+ \beta \frac{\partial \xi_{0}(\beta,\xi)}{\partial \beta} \left( {\left\langle}Q_t {\right\rangle}- \frac{{\left\langle}Q _s {\right\rangle}}{\xi_{0}^{2}(\beta,\xi)} \right) - (T=0 \,\, \mbox{contribution}).$$ We use symmetric $N_s^{3} \times \xi N_s$ lattices to calculate the $T = 0$ contribution. For a sufficiently small $\beta_{0}$, $p(\beta_{0})$ can be neglected. In order to keep the same accuracy of $\Delta S$ for the same physical lattice volume $L_s^3$ in units of the temperature $T$, the statistics of simulations should increase in proportion to $(\xi (N_t/\xi)^{4})^2 / (N_s^3 N_t) \propto N_t^4/\xi^3$. Here, the first factor arises from $\xi (N_t/\xi)^{4}$ in Eq. (\[delta\_S\]), and the second factor $1/(N_s^3 N_t)$ from a suppression of fluctuations due to averaging over the lattice volume. Taking into account the autocorrelation time which is proportional to $N_t^2$, the number of iterations should increase as $ \sim N_t^6$. Integrating $\Delta S$ in $\beta$ using a cubic spline interpolation, we obtain the pressure. For the horizontal axis, we use the temperature in units of the critical temperature, $$\frac{T}{T_{c}} = \frac{(a_s \sqrt{\sigma})(\beta_{c})} {(a_s \sqrt{\sigma})(\beta)}.$$ The errors from numerical integration are estimated by a jack-knife method in the following way [@okamoto]. Since simulations at different $\beta$ are statistically independent, we sum up all the contributions from $\beta_i$ smaller than $\beta$ corresponding to the temperature $T$ by the naive error-propagation rule, $\delta p(T) = \sqrt{\sum_i {\delta p_i(T)^2}}$, where $\delta_i p(T)$ at each simulation point $\beta_i$ is estimated by the jack-knife method. Finite spatial volume effects ----------------------------- We first study the effects of finite spatial volume on the EOS. In Fig. 
\[fig-delta\_S-12\_24x8\], we show the results for $\Delta S$ at $N_t/\xi=8/2$ with the aspect ratio $L_s T = N_s\xi/N_t = 3$, 4 and 6 which correspond to $N_s=12$, 16 and 24, respectively. Integrating $\Delta S$ in $\beta$, we obtain Fig. \[fig-pressure-Nt8\] for the pressure. We find that the data at $L_s T = 3$ is affected by sizable finite volume effects both at $T \sim T_c$ and at high temperatures. On the other hand, for the range of $T/T_c$ we study, the pressure does not change when the aspect ratio is increased from $L_s T=4$ to 6, indicating that the conventional choice $L_s T = 4$ is safe with the present precision of data. Hence, we choose $L_s T = 4$ for our studies of lattice spacing dependence. Results for $\Delta S$ at $L_s T = 4$ with various $N_t$ are given in Fig. \[fig-delta\_S-8\_12\]. Integrating the data using a cubic spline interpolation, as shown in the figures, we obtain the pressure plotted in Fig. \[fig-pressure\]. Continuum extrapolation ----------------------- We now extrapolate the pressure to the continuum limit using the leading order ansatz of Eq. (\[continuum-extrapolation\]). Figure \[fig-pressure-scaling\] shows the pressure at $T/T_c=1.5$, 2.5 and 3.5 as a function of $(\xi/N_t)^2$ (filled circles). For comparison, results from isotropic lattices using the plaquette action [@boyd] (open circles) and the RG-improved action [@okamoto] (open squares) are also plotted. For the $\xi=1$ plaquette data, we adopt the results of a reanalysis made in Ref.[@okamoto] to commonly apply the scale from the Allton fit of the string tension and also the same error estimation method. The advantage of using anisotropic lattices is apparent from Fig. \[fig-pressure-scaling\]. On the coarsest lattice $N_t/\xi=4$, finite lattice spacing errors at $\xi=2$ are much smaller than those at $\xi=1$ with the same plaquette action. The pressure at $T = 2.5 T_{c}$, for example, on the isotropic $16^{3} \times 4$ lattice is larger than its continuum limit by about 20%, while the deviation is only 5% on the corresponding $16^{3} \times 8$ lattice with $\xi = 2$. Furthermore, with the anisotropic $\xi=2$ data, the leading $1/N_t^2$ term describes the data well even at $N_t/\xi=4$ (the right-most point). Therefore, we can confidently perform an extrapolation to the continuum limit using three data points. In the case of the isotropic plaquette action, in contrast, the continuum extrapolation had to be made with only two data points at $N_t/\xi=6$ and 8. In the continuum limit, our results for $\xi=2$ are slightly smaller than those from the isotropic plaquette action, but the results are consistent with each other within the error of about 5% for the results from the isotropic action. It is worth observing that the $\xi=2$ results have smaller and more reliable errors of 2–3%. In order to quantitatively evaluate the benefit of anisotropic lattices, we compare the computational cost to achieve comparable systematic and statistical errors on isotropic and $\xi=2$ anisotropic lattices. Choosing $T=2.5T_c$ as a typical example, we find that the deviation of the pressure from the continuum limit ([*i.e.*]{}, the magnitude of the systematic error due to finite lattice cutoffs) is comparable between the isotropic $32^3 \times 8$ [@boyd] and our $\xi = 2$ anisotropic $20^3 \times 10$ lattices, [*i.e.,*]{} $p/T^4 = 1.390(26)$ on a $32^3\times 8$ lattice and $p/T^4 = 1.381(13)$ on a $20^3\times 10$ lattice, both lattices having the same spatial size $N_s a_s = 1.6/T_c$. 
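As an illustration of the integral method and of the subsequent continuum extrapolation, the sketch below integrates tabulated $\Delta S$ values with a cubic spline, propagates the per-point errors as if they were independent (a simplified stand-in for the jack-knife summation described above), and then performs the leading $1/N_t^2$ fit of Eq. (\[continuum-extrapolation\]). It uses NumPy/SciPy and purely illustrative numbers, not the measured data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pressure_from_delta_s(betas, delta_s, delta_s_err):
    """p/T^4 at each simulated beta via cubic-spline integration of Delta S.

    Errors are propagated as if the points were independent, with crude
    trapezoidal weights -- a simplified stand-in for the per-point jackknife sum."""
    betas = np.asarray(betas, dtype=float)
    spline = CubicSpline(betas, delta_s)
    p = np.array([spline.integrate(betas[0], b) for b in betas])
    w = np.gradient(betas)
    p_err = np.sqrt(np.cumsum((w * np.asarray(delta_s_err)) ** 2))
    return p, p_err

def continuum_limit(nt_over_xi, F, F_err):
    """Weighted linear fit of F = F_cont + c_F (xi/N_t)^2; returns (F_cont, its error)."""
    x = 1.0 / np.asarray(nt_over_xi, dtype=float) ** 2
    coeffs, cov = np.polyfit(x, F, deg=1, w=1.0 / np.asarray(F_err), cov="unscaled")
    return coeffs[1], float(np.sqrt(cov[1, 1]))

# Purely illustrative numbers, not the measured data.
betas = np.linspace(5.8, 6.5, 15)
delta_s = 4.5 * (1.0 - np.exp(-8.0 * (betas - 5.8)))
p, p_err = pressure_from_delta_s(betas, delta_s, 0.05 * np.ones_like(delta_s))
print(p[-1], p_err[-1])
print(continuum_limit([4, 5, 6], [1.45, 1.41, 1.39], [0.02, 0.02, 0.02]))
```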
The number of iterations needed to achieve the statistical errors quoted above is 20,000–40,000 for $\xi=1$ and 50,000 for $\xi=2$, respectively. Therefore, for the same statistical error, the relative computational cost for a $\xi=2$ lattice over that for $\xi=1$ is conservatively estimated as $\left( (20^3 \times 10)\times 50000\right)/ \left((32^3 \times 8)\times 4 \times 20000\right) \approx 1/5 $, showing a factor 5 gain in the computational cost for the anisotropic calculation in this example. In Fig. \[fig-pressure-scaling\] we also note that the results from the RG-improved action on isotropic lattices are higher by 7–10% (about 2$\sigma$) than those from the present work in the continuum limit. A possible origin of this discrepancy is the use of the $N_t/\xi=4$ data of the RG-improved action, which show a large (about 20%) deviation from the continuum value. For a detailed test of consistency, we need more data points, say at $N_t/\xi=6$, from the RG-improved action. Repeating the continuum extrapolation at other values of $T/T_c$, we obtain Fig. \[fig-pressure-continuum\]. Our results show a quite slow approach to the high temperature Stefan-Boltzmann limit, as reported also in previous studies on isotropic lattices [@boyd; @okamoto]. Energy density {#sec:energy} ============== We calculate the energy density $\epsilon$ by combining the results of $p/T^4$ with those for the interaction measure defined by $$\frac{\epsilon - 3 p}{T^{4}} = - a_s \left. \frac{\partial \beta} {\partial a_s} \right|_{\xi} \Delta S. \label{energy}$$ The QCD beta function on the anisotropic lattice, $\left. \frac{\partial \beta}{\partial a_s} \right|_{\xi}$, is determined through the string tension $\sigma$ studied in Sec. \[subsec:sigma\], $$a_s \left. \frac{\partial \beta}{\partial a_s} \right|_{\xi} = \frac{12 b_{0}}{6 \left( b_{1}/b_{0} \right) \beta^{-1} -1} \, \frac{1 + c_{2} \hat{a}^{2} + c_{4} \hat{a}^{4}} {1 + 3 c_{2} \hat{a}^{2} + 5 c_{4} \hat{a}^{4}},$$ where the coefficients $c_i$ are given in Eq. (\[allton\_fitting\_eq\]). The error of the energy density is calculated in quadrature from the error of $3 p$ and that of $\epsilon-3p$, the latter being proportional to the error of $\Delta S$. The results for the energy density are shown in Figs. \[fig-energy\] and \[fig-energy-scaling\]. As in the case of the pressure, the leading scaling behavior is well followed by our $\xi=2$ data from $N_t/\xi = 4$, which allows us to extrapolate to the continuum limit reliably. The results for the energy density in the continuum limit are compared with the previous results in Fig. \[fig-energy-continuum\]. Our $\xi=2$ plaquette action leads to an energy density which is slightly smaller than, but consistent with, that from the $\xi=1$ plaquette action, and is about 7–10% (about 2$\sigma$) smaller than that from the $\xi=1$ RG action. More work is required to clarify the origin of the small discrepancy with the RG action. Conclusion {#sec:summary} ========== We have studied the continuum limit of the equation of state in SU(3) gauge theory on anisotropic lattices with the anisotropy $\xi \equiv a_s/a_t =2$, using the standard plaquette gauge action. Anisotropic lattices are shown to be more efficient in calculating thermodynamic quantities than isotropic lattices. We found that the cutoff errors in the pressure and energy density are much smaller than corresponding isotropic lattice results at small values of $N_t/\xi$. The computational cost for $\xi=2$ lattices is about 1/5 of that for $\xi=1$ lattices. 
We also found that the leading scaling behavior is well satisfied already from $N_t/\xi = 4$, which enabled us to perform continuum extrapolations with three data points at $N_t/\xi=4$, 5 and 6. The equation of state in the continuum limit agrees with that obtained on isotropic lattices using the same action, but has much smaller and better controlled errors. The benefit of anisotropic lattices demonstrated here will be indispensable for the extraction of continuum predictions for the equation of state when we include dynamical quarks. Acknowledgements {#acknowledgements .unnumbered} ================ This work is supported in part by Grants-in-Aid of the Ministry of Education (Nos. 10640246, 10640248, 11640250, 11640294, 12014202, 12304011, 12640253, 12740133, 13640260). SE and M. Okamoto are JSPS Research Fellows. VL is supported by the Research for Future Program of JSPS (No. JSPS-RFTF 97P01102). Simulations were performed on the parallel computer CP-PACS at the Center for Computational Physics, University of Tsukuba. [99]{} For a recent review, see S. Ejiri, Nucl. Phys. [**B**]{} (Proc. Suppl.) 94 (2001) 19. G. Boyd [*et al.*]{}, Nucl. Phys. [**B469**]{} (1996) 419. CP-PACS Collaboration: M. Okamoto [*et al.*]{}, Phys. Rev. [**D60**]{} (1999) 094510. C. Bernard [*et al.*]{}, Phys. Rev. [**D55**]{} (1997) 6861. J. Engels [*et al.*]{}, Phys. Lett. [**B396**]{} (1997) 210. F. Karsch, E. Laermann and A. Peikert, Phys. Lett. [**B478**]{} (2000) 447. CP-PACS Collaboration: A. Ali Khan [*et al.*]{}, Phys. Rev. [**D63**]{} (2001) 034502; hep-lat/0102038. S. Sakai, A. Nakamura and T. Saito, Nucl. Phys. [**A638**]{} (1998) 535. QCD-TARO Collaboration: Ph. de Forcrand [*et al.*]{}, Phys. Rev. [**D63**]{} (2001) 054501. T. Umeda, R. Katayama, O. Miyamura and H. Matsufuru, Nucl. Phys. [**B**]{} (Proc. Suppl.) [**94**]{} (2001) 435; hep-lat/0011085. J. Engels, J. Fingberg, F. Karsch, D. Miller and M. Weber, Phys. Lett. [**B252**]{} (1990) 625. J. Engels, F. Karsch and H. Satz, Nucl. Phys. [**B205**]{} (1982) 239. H.-Th. Elze, K. Kajantie and J. Kapusta, Nucl. Phys. [**B304**]{} (1988) 832. J. Engels, F. Karsch and T. Scheideler, Nucl. Phys. [**B564**]{} (2000) 303. G. Burgers [*et al.*]{}, Nucl. Phys. [**B304**]{} (1988) 587. QCD-TARO Collaboration: M. Fujisaki [*et al.*]{}, Nucl. Phys. [**B**]{} (Proc. Suppl.) [**53**]{} (1997) 426. T.R. Klassen, Nucl. Phys. [**B533**]{} (1998) 557. G.S. Bali and K. Schilling, Phys. Rev. [**D46**]{} (1992) 2636. G.S. Bali and K. Schilling, Phys. Rev. [**D47**]{} (1993) 661. CP-PACS Collaboration: A. Ali Khan [*et al.*]{}, Phys. Rev. [**D60**]{} (1999) 114508. C. Michael, Phys. Lett. [**B283**]{} (1992) 103. C. Allton, hep-lat/9610016. R.G. Edwards, U.M. Heller, T.R. Klassen, Phys. Rev. Lett. [**80**]{} (1998) 3448. I.R. McDonald and K. Singer, Discuss. Faraday Soc. [**43**]{} (1967) 40,\ A.M. Ferrenberg and R.H. Swendsen, Phys. Rev. Lett. [**61**]{} (1988) 2635; [**63**]{} (1989) 1195,\ S. Huang, K.J.M. Moriarty, E. Myers and J. Potvin, Z. Phys. [**C50**]{} (1991) 221. Y. Iwasaki [*et al.*]{}, Phys. Rev. [**D46**]{} (1992) 4657. B. Beinlich, F. Karsch, E. Laermann and A. Peikert, Eur. Phys. J. [**C6**]{} (1999) 133. Y. Iwasaki, K. Kanaya, T. Kaneko and T. Yoshié, Phys. Rev. [**D56**]{} (1997) 151.
--- author: - 'Ibrahima Bah,' - 'Federico Bonetti,' - 'Ruben Minasian,' - and Peter Weck bibliography: - './refs.bib' title: Anomaly Inflow Methods for SCFT Constructions in Type IIB --- Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Nikolay Bobev, Friðrik Freyr Gautason, Craig Lawrie, Emily Nardoni, and Raffaele Savelli for interesting conversations and correspondence. We thank Sakura Schäfer-Nameki for comments on the draft. The work of IB, FB, and PW is supported in part by NSF grant PHY-1820784. RM is supported in part by ERC Grant 787320 - QBH Structure. The work of PW is supported in part by the Chateaubriand Fellowship of the Office for Science & Technology of the Embassy of France in the United States. We gratefully acknowledge the Aspen Center for Physics, supported by NSF grant PHY-1607611, for hospitality during part of this work.
--- author: - Wenyuan Liu - Stanisław Saganowski - Przemysław Kazienko - Siew Ann Cheong title: Using Machine Learning to Predict the Evolution of Physics Research --- Introduction {#introduction .unnumbered} ============ We all become scientists because we want to create an impact and make a difference, to the lives of those around us, and also to the many generations that are to come. We all strive to make choices in the problems we study, but not all choices lead to breakthroughs. There is actually a lot more about scientific breakthroughs that we can try to understand. For one, science is an ecosystem of scholars, ideas, and papers published. In this ecosystem, scientists can form strongly-interacting groups over a particular period to solve specific problems, but later drift apart as their interests diverge, or due to the availability or paucity of funds, or other factors. The evolution of these problem-driven groups is more or less completely documented by the papers published as outcomes of their research. By analysing groups of closely related papers, researchers could extract rich information about knowledge processes [@Chen2010; @Rosvall2010; @Liu2017a]. The potential to map scientific progress using publication data has attracted enormous interest recently [@Zeng2017; @Fortunato2018; @Hicks2015]. However, compared to the study on science at the level of individual papers [@Radicchi2008; @Wang2013; @Ke2015] and at the level of the whole citation network [@Small1999; @Boyack2005; @Bollen2009], where a lot of work has already been done, the research on science at the community level is still limited [@Chen2010; @Liu2017a]. In a recent paper, Liu *et al.* demonstrated the utility of visualizing and analysing scientific knowledge evolution for physics at the aggregated mesoscale through the use of alluvial diagrams[@Liu2017a]. In this picture, papers are clustered into groups (or communities) and these groups can grow or shrink, merge or split, new groups may arise while the others may dissolve. This shares a very strong parallel with what some researchers discovered in social group dynamics [@Palla2007]. More importantly, many breakthroughs were made by scientists absorbing knowledge from other fields, often in a very short time. On the alluvial diagrams, these knowledge transformations manifest themselves as merging and splitting events. Clearly, funding agencies, universities and research institutes would want to promote growing research fields, and particularly those where breakthroughs are imminent. This is why it is important to be able to predict the future events. Liu *et al.*[@Liu2017a] attempted this in their paper, by analysing the correlation between event types and several network metrics. Unfortunately, such predictions are very noisy. While merging events are highly correlated with interconnections between communities, the correlation between splitting events and the internal structure of communities are much more complex; besides, the predictions of forming, dissolving, growing, shrinking were not considered at all. Given the recent successes in the area of machine learning and artificial intelligence to a variety of prediction problems [@Carrasquilla2017; @Ahneman2018], as well as having developed and validated a general framework to predict social group evolution in Saganowski *et al.*[@Saganowski2017], we decided to utilize machine learning techniques to fill the gap in predicting scientific knowledge events [@Saganowski2015; @Ilhan2016; @Pavlopoulou2017]. 
The overall idea behind the Group Evolution Prediction (GEP) method is to build a classification model trained with historical observations in order to predict future group changes based on their current characteristics, such as size, density, average degree of nodes, etc. A single historical observation consists of a set of features describing the group at a given point in time, and the event type that this group just experienced. The profile of the group may reflect its structure (e.g. density), dynamics (e.g. average age of its member articles) or context (e.g. the journals which the articles—group members—come from). In total, we used over 100 features, some of which were already known in the literature, whereas the others, focusing on the dynamics and context, are new, unique features proposed in this paper. Indeed, when we rank the most valuable features contributing to successful prediction of knowledge evolution events, the new features are among the best ones. In order to predict future group changes, we have to track historical cases and train the model on them. For that purpose, the group changes from the past (historical evolution) need to be defined and discovered using methods successfully applied in the Social Network Analysis field, e.g. the GED method [@Brodka2013a], the method of Tajeuna *et al.* [@Tajeuna2015], or others [@Saganowski2017a]. Most of these methods consider the similarity between the groups in the consecutive time windows as the major factor used to match similar groups and further to identify the evolution event type between them. In our work, we apply the GED method, which takes into account both the group quantity (the number of common members) and the group quality (the importance of common members) in order to match related groups. This allows us to enrich the co-citation evolution network with information about member relations, which is captured by the Social Position measure [@Brodka2009]. In this study, we extract groups—topical clusters (TCs)—from the bibliographic coupling networks (BCNs) and independently from the co-citation networks (CNs) for the period 1981-2010. Next, the GED method is utilized to label four types of evolution events (changes of TCs): continuing, dissolving, merging and splitting. Then, we use an auto-adaptive mechanism to find the most predictive machine learning model together with its parameters for each network. Additionally, two scenarios were considered for each network: when the number of events of each kind is imbalanced (the original case) and when it is balanced by equal sampling. In general, the prediction quality was satisfactory for all event types, with F-measures substantially exceeding 0.5. Such values are significantly greater than the baseline F-measures of 0.14–0.21 for both networks. The feature ranking tells us that the most informative features are context-based ones, like the number of PRE, PRB, and RMP papers belonging to the group, and structural ones, like the degree, closeness, and betweenness. Looking more carefully at the betweenness of papers from two *merging* TCs, we find significantly higher betweenness for papers that are linked across the two TCs than for those connected only within them. No such enhancement in betweenness was found for *continuing* TCs, while a significant decrease in average betweenness was found for *splitting* TCs. 
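A minimal sketch of this classification set-up is given below. It uses scikit-learn with a random forest as a stand-in for the Auto-WEKA model search actually employed here, and the feature table is synthetic; the point is only to illustrate the shape of the data (one row per TC observation with an event label) and the macro-averaged F-measure used as the quality metric. Since the synthetic features carry no signal, the printed score is a placeholder, not an indication of attainable accuracy.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic table of historical observations: one row per topical cluster in a
# given year, with a few illustrative features and the event it then underwent.
rng = np.random.default_rng(0)
n = 480
obs = pd.DataFrame({
    "size": rng.integers(20, 2000, n),
    "density": rng.uniform(0.01, 0.4, n),
    "avg_degree": rng.uniform(2.0, 40.0, n),
    "sum_betweenness": rng.uniform(0.0, 1e-3, n),
    "n_PRE_papers": rng.integers(0, 300, n),
})
obs["event"] = rng.choice(["continuing", "dissolving", "merging", "splitting"],
                          size=n, p=[0.7, 0.1, 0.12, 0.08])

X_train, X_test, y_train, y_test = train_test_split(
    obs.drop(columns="event"), obs["event"], test_size=0.25,
    stratify=obs["event"], random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("macro F-measure:", f1_score(y_test, clf.predict(X_test), average="macro"))
```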
In summary, our findings suggest that evolutionary events in the landscape of physics research can be predicted accurately using various machine learning models, and understanding this predictive power in terms of important features is a worthwhile future research direction. Results {#results .unnumbered} ======= Physics research evolution for 1981-2010 {#physics-research-evolution-for-1981-2010 .unnumbered} ---------------------------------------- We begin by studying how scientific knowledge evolved in terms of communities of research papers, and how these communities changed over time. There are several studies on the evolution of knowledge at the level of whole journals [@Rosvall2010], which can be regarded as analysis at the macroscopic level. Some research has also been carried out on collections of papers, usually selected by a subjective criterion chosen by the authors, e.g. only papers cited at least 100 times [@Chen2010]. As a result, such studies focus only on a small subset—the most prominent, frequently cited papers, which do not represent the whole, diverse domain knowledge. This kind of analysis is microscopic. In our approach, we assume that the most informative unit of analysis is neither the entire journal nor the most cited papers, but whole communities of closely related papers. These communities emerge naturally since their members share the same citation patterns. Analysis at this level provides a better balance between high and low granularity. We call this kind of analysis mesoscopic, because it lies in between the macroscopic scale of journals and the microscopic scale of individual papers. However, if we perform community detection directly on the citation network, we might end up with communities consisting of both old and recent papers simultaneously. In such a case, it is difficult to interpret how scientific knowledge has evolved from the past to the present. We should be able to explain that certain communities represent scientific knowledge from an earlier year, whereas other communities correspond to scientific knowledge from a later year. This enables us to compare them and to distill a picture of how scientific knowledge has evolved from past to present. It requires, however, constructing the networks from research papers that are published in a given year (bibliographic coupling), or papers that are cited in a given year (co-citation). The bibliographic coupling network (BCN) reflects the relation between present publications, while the co-citation network (CN) represents the relation between papers which have a strong influence on recent publications. In this way, we can detect communities over the years, and study how they evolve year by year; see the Methods section for details on the BCN and CN. After building the BCN and CN, the Louvain method was used to extract the community structures. By checking the Physics and Astronomy Classification Scheme (PACS) numbers of the papers in these communities, we have shown that the BCN communities are meaningful and reflect the real structure of the scientific communities [@Liu2017a]. Indeed, we found that papers in the same community focus on closely related topics. 
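The community-detection step can be sketched as follows, using the Louvain implementation shipped with networkx (version 2.8 or later) on toy weighted edges rather than the actual APS networks; the `min_size` cut is an assumption added here for readability, not a parameter taken from our analysis.

```python
import networkx as nx

def yearly_topical_clusters(edges_by_year, min_size=10, seed=0):
    """Detect topical clusters with the Louvain method, one network per year.

    `edges_by_year` maps a year to weighted edges (paper_u, paper_v, weight),
    e.g. bibliographic-coupling strengths between papers published that year."""
    clusters = {}
    for year, edges in edges_by_year.items():
        G = nx.Graph()
        G.add_weighted_edges_from(edges)
        # keep only the giant component, as is done for the BCN
        giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
        comms = nx.community.louvain_communities(giant, weight="weight", seed=seed)
        clusters[year] = [c for c in comms if len(c) >= min_size]
    return clusters

# Toy input: two years with a handful of weighted coupling edges.
toy = {1999: [("p1", "p2", 3), ("p2", "p3", 2), ("p3", "p4", 1),
              ("p4", "p5", 4), ("p5", "p6", 1)],
       2000: [("q1", "q2", 2), ("q2", "q3", 2), ("q3", "q1", 1)]}
print({y: [sorted(c) for c in cs]
       for y, cs in yearly_topical_clusters(toy, min_size=2).items()})
```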
For the CN communities, this validation is tricky because of two problems: (i) the old Physical Review papers have no PACS numbers, and (ii) PACS was revised several times, so the same numbers in different versions can potentially refer to different topics, or the same topics are referred to by different numbers in different versions. As a result, systematic validation seems to be impossible, although a quick check on some CN communities after 2010 suggests that the CN community structure also reliably reflects the actual scientific community. We refer to these validated units of knowledge evolution as topical clusters (TCs) in this paper. In , we provide the alluvial diagram that depicts the evolution of TCs within the BCNs for the period from 1981 to 2010. The equivalent alluvial diagram for the CNs is shown in the Supplementary Information (SI). In both alluvial diagrams, we visualize the sequences of TCs, their inheritance relations, which can be intimacy indices (for the BCN communities) or the fraction of common members and inclusion measures (for the CN communities), and the evolution processes they undergo; see the Methods section for more details. The events (changes) that we can discern from the alluvial diagram (shown in ) are analogous to those recognized in social group evolution[@Palla2007]. They represent forming, dissolving, growing, shrinking, merging and splitting. We found in Liu *et al.* that the prediction of such events is hard, since the correlation between them is nonlinear and complex. This challenge is addressed in the following section by tapping into the power of machine learning. Event labelling {#event-labelling .unnumbered} --------------- The GED method takes into account the size of and the similarity between groups (TCs) in consecutive time frames in order to label group changes (assign an event type). There are four events considered in this work: - *continuing*—a research field is said to be continuing when the problems identified and solutions obtained from one year to another are of an incremental nature. It likely corresponds to the repeated hypothesis-testing picture of the progress of science proposed by Karl Popper [@Popper1999]. Therefore, in the CN, this would appear as a group of papers that are repeatedly cited together year by year. In the BCN, this shows up as groups of articles from successive years sharing more or less the same reference list. - *dissolving*—a research field is thought to disappear in the following year if the problems are solved or abandoned, and no new significant work is done after this. For the CN, we will find a group of papers that are cited up to a given year, but receive very few new citations afterwards. In the BCN, no new relevant papers are published in the field, hence, the reference chain terminates. - *splitting*—a research field splits in the following year when the community of scientists who used to work on the same problems starts to form two or more sub-communities, which grow more and more distant from one another. In terms of the CN, we will find a group of papers that are almost always cited together up until a given year, breaking up into smaller, disjoint groups of papers that are cited together in the next year. In the BCN, we will find a transition from new papers citing a group of older papers to new papers citing only a part of this reference group. 
- *merging*—multiple research fields are considered to have merged in the following year when previously disjoint communities of scientists find mutual interest in each other’s fields, so that they solve problems in their own domain using methods from another domain. In the CN, we find previously distinct groups of papers that are cited together by papers published after a given year. In the BCN, newly published papers will form a group commonly citing several previously disjoint groups of older papers. The GED method has two main parameters (alpha and beta), which are the levels of inclusion that groups in consecutive years have to cross in order to be considered as matching groups. We have applied the GED method with a wide range of these parameters, from 5% to 100%. The characteristics of the considered networks required us to set the alpha and beta thresholds to very low values, i.e. 30% for the BCN and 10% for the CN; see the SI for more details. In total, we have obtained 479 events for the BCN and 492 events for the CN, which are our observations and the labels in the prediction part of our study. In both networks, the event distribution was imbalanced, with the continuing event dominating over all other types, see A1 and B1. Future events prediction {#future-events-prediction .unnumbered} ------------------------ The machine learning approach to prediction requires dividing the data into two parts: the training data set and the test data set. The training data is used to train a classifier, which can then label events in the test data. The predicted values are compared with the true event labels and the prediction performance is calculated. More than 450 observations were used to train the classifiers. Each observation contained 77 features (preselected from the initial 100) divided into three categories: microscopic features (related to nodes in the group, e.g. node degree), mesoscopic features (related to the entire group, e.g. the group size), and macroscopic features (related to the whole network, e.g. network density). Features calculated for individual nodes are commonly aggregated over all nodes in the group, e.g. the average node degree or betweenness in the group. See the SI for the complete list of features used. To automatically select the best classification algorithm (model) as well as its hyper-parameter settings to maximize the prediction performance, the Auto-WEKA software package [@Kotthoff2017] was utilized. For each network, we ran Auto-WEKA for 48 hours, which allowed us to validate nearly 20,000 configurations per network. The metric being maximized was the F-measure, commonly used for multi-class classification. The overall classification quality was calculated as the average F-measure over all event types, treating them as equally important. The predicted output variable (event labels) had an imbalanced distribution. Commonly, classifiers tend to focus on the dominant event type (class), which is then very well predicted, but at the expense of the minority event types. For the imbalanced BCN data set, the best performance was achieved with the Attribute Selected Classifier (with the SMO as base classifier), which performs feature selection[@Platt1999]. The percentage of correctly classified instances was 80.6%, while the average F-measure was only 0.50 due to the classifier focusing on continuing, which was the most frequently occurring event type, see A. 
For this event, the F-measure value was equal to 0.89, and only 7 events out of 352 were incorrectly classified. The worst classified was the splitting event, whose F-measure was only 0.11. Most of the splitting events were incorrectly classified as continuing (31 out of 33 events). The second worst was merging, with an F-measure of 0.35. Again, the majority of the merging events were wrongly classified as continuing events: 38 out of 56. Interestingly, splitting and merging events were never mistaken for each other. For the imbalanced CN data set, the best performance was achieved with a lazy classifier, which uses locally weighted learning [@Christopher1997]. The percentage of correctly classified instances was 73.3%, while the average F-measure was only 0.53, again due to the classifier concentrating on the dominating continuing event type, see B. The F-measure value for the continuing event was only 0.83; however, as many as 50 continuing events (out of 337) were wrongly classified as dissolving. As with the BCN, many splitting and merging events were incorrectly classified as continuing: 17 out of 22 events, and 24 out of 46 events, with F-measures equal to 0.30 and 0.42, respectively. ![The prediction quality of classification results. The F-measure values for the imbalanced BCN (A) and CN (B) data sets, as well as the balanced BCN (C) and CN (D) data sets. The distribution of classes in the training sets is provided for each data set: A1, B1, C1, D1, respectively. For the imbalanced data sets, the classifier focused on the dominating continuing event. Balancing the data sets increased the overall prediction quality by over 20%.[]{data-label="fig:Fig2"}](figures/Figure2.png){width="\textwidth"} By balancing the imbalanced training data sets (i.e. by equally sampling them), we force the classifiers to pay more attention to the features rather than to the number of occurrences of the majority event type. As a result of balancing the data sets, the previously minor event types (dissolving, merging, and splitting) were predicted much better, but with a significant drop in the performance of the continuing event classification. More importantly, by balancing the data sets we increased the overall prediction quality by over 20%. For the balanced BCN data set, the best performance was achieved by means of the boosting-based classifier AdaBoost with the Bayes Net as the base model. The percentage of correctly classified instances was 62.0% and the average F-measure was 0.61. The biggest sources of errors were merging events wrongly classified as continuing and dissolving, as well as continuing events wrongly classified as splitting. The best classified event was dissolving (only 4 mistakes in 27 classifications, overall score 0.79), followed by the splitting event (6 mistakes in 27 classifications, overall F-measure 0.70). For the balanced CN data set, the Attribute Selected Classifier (with the PART as base classifier) provided the best results—the percentage of correctly classified instances was 69.32%, while the average F-measure was 0.69[@Frank1998]. The dissolving, merging, and splitting events were classified very well, with F-measure values equal to 0.79, 0.82, and 0.75, respectively. Most of the continuing events were wrongly classified as splitting (13 out of 22), which resulted in a lower F-measure value of 0.40. It is interesting to note that the prediction results for the CN are slightly better than for the BCN. 
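The balancing procedure can be reproduced schematically with the sketch below, which undersamples every class to the size of the rarest one and compares macro-averaged F-measures before and after; the random forest and the toy feature table are stand-ins for the Auto-WEKA models and the real data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def balance_by_undersampling(df, label_col="event", seed=0):
    """Equally sample every class down to the size of the rarest one."""
    n_min = df[label_col].value_counts().min()
    return df.groupby(label_col).sample(n=n_min, random_state=seed)

def macro_f1(train, test, label_col="event", seed=0):
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(train.drop(columns=label_col), train[label_col])
    pred = clf.predict(test.drop(columns=label_col))
    return f1_score(test[label_col], pred, average="macro")

# Toy imbalanced data standing in for the real feature tables.
rng = np.random.default_rng(0)
y = rng.choice(["continuing", "dissolving", "merging", "splitting"],
               size=600, p=[0.7, 0.1, 0.12, 0.08])
data = pd.DataFrame({"f1": rng.normal(size=600) + (y == "merging"),
                     "f2": rng.normal(size=600) - (y == "splitting"),
                     "event": y})
train, test = data.iloc[:450], data.iloc[450:]
print("imbalanced:", round(macro_f1(train, test), 2))
print("balanced:  ", round(macro_f1(balance_by_undersampling(train), test), 2))
```

Undersampling is only one of several possible balancing strategies; oversampling the minority classes or class-weighted losses would serve the same illustrative purpose.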
A possible explanation for the CN's slight advantage is that for the CN we used a richer similarity measure containing information about member importance. Thus the event tracking, and therefore the ground truth, could be more accurate. Overall, the prediction quality expressed by the average F-measure was very good for the imbalanced as well as for the balanced data sets, as the baseline results obtained with the ZeroR classifier were much worse: an F-measure of 0.21 for both the BCN and CN imbalanced data sets, 0.18 for the balanced BCN, and 0.14 for the balanced CN. For each data set a different classifier turned out to be the best; however, most models were wrapped in boosting or meta classifiers. Predictive feature ranking {#predictive-feature-ranking .unnumbered} -------------------------- The feature selection technique is used in machine learning to find the most informative features, to avoid classifier overfitting, to eliminate (or at least to reduce) the noise in the data, as well as to provide some explanation of the phenomena[@Yang1998]. By repeating the feature selection 1000 times, we obtained 1000 sets of selected features. Next, we calculated how many times each feature had been selected, thus receiving a ranking of the most often selected features. For the BCN, the context-based features dominated the ranking; in particular, the numbers of papers from Physical Review E, Physical Review B and Physical Review A were highly ranked, see A. Besides the context, the network features based on degree, betweenness, size and closeness measures were the most informative, which tells us that structural properties are as important as context awareness. For the CN data set, the most often selected feature was also a context-based one, the number of papers published in Reviews of Modern Physics. It is followed by closeness- and degree-based features in the ranking, see B. For both networks, macroscopic features were ranked rather low, which suggests that the general network profile is not very important, perhaps because of the smooth changes in the entire network. Surprisingly, the dynamic features, e.g. those related to the average age of references (for the BCN) and the age of articles (for the CN), did not show informative value and were ranked very low for both networks. The rankings were validated on the additional two years of data available (2010-2012). The prediction was performed twice: (i) using all features, and (ii) using the top 10 ranked features only. Selecting only the top 10 features boosted the quality of the prediction by 11% for the CN and by 2% for the BCN, which underlines the value of the feature selection process. ![Feature ranking. The most frequently selected features in 1000 iterations for the BCN (A) and CN (B) data sets. The context-based features (number of papers published in a given journal) turned out to be the most informative, followed by the microscopic structural measures, especially closeness, degree and betweenness.[]{data-label="fig:Fig3"}](figures/Figure3.png){width="\textwidth"} Changes to the Betweenness Distributions Associated with Merging and Splitting Events in BCN {#changes-to-the-betweenness-distributions-associated-with-merging-and-splitting-events-in-bcn .unnumbered} -------------------------------------------------------------------------------------------- Having the list of the best predictive features, we can analyse some of them in more depth to look for early warning signals. 
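Since aggregated node betweenness plays a central role in what follows, here is a small sketch of how such TC-level betweenness features (sum, average, max, min) could be computed with networkx. Treating the inverse coupling strength as the path length is an assumption of this sketch, not necessarily the convention adopted in our analysis, and the toy network is illustrative only.

```python
import networkx as nx

def tc_betweenness_features(G, clusters):
    """Aggregate node betweenness per topical cluster (sum / average / max / min).

    `G` is a weighted BCN for one year; `clusters` maps a TC label to its nodes.
    The inverse coupling strength is used as the path length (an assumption)."""
    dist = {(u, v): 1.0 / d["weight"] for u, v, d in G.edges(data=True)}
    nx.set_edge_attributes(G, dist, "distance")
    btw = nx.betweenness_centrality(G, weight="distance", normalized=True)
    feats = {}
    for label, nodes in clusters.items():
        vals = [btw[n] for n in nodes]
        feats[label] = {"sum_betweenness": sum(vals),
                        "avg_betweenness": sum(vals) / len(vals),
                        "max_betweenness": max(vals),
                        "min_betweenness": min(vals)}
    return feats

# Toy BCN with two loosely connected clusters.
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 3), ("b", "c", 2), ("a", "c", 1),
                           ("d", "e", 4), ("e", "f", 2), ("c", "d", 1)])
print(tc_betweenness_features(G, {"TC1": ["a", "b", "c"], "TC2": ["d", "e", "f"]}))
```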
Basically, we believe that scientific knowledge evolves slowly, and this slow evolution drives the evolution of citation patterns. Therefore, there must be specific changes in citation patterns that precede merging and splitting events. Besides the number of PRE papers in a TC, the sum\_network\_betweenness is also a strongly predictive feature, see A. This suggests that we should look at the betweenness of papers in the BCN more carefully. The betweenness of a node denotes what percentage of the shortest paths between all pairs of nodes in the network pass through that node. Values of nodes’ betweenness can be aggregated (sum, average, max, min) over all nodes in the TC, as listed in . However, in this section we only focus on the distribution of the original node betweenness. Naively, when we consider the part of the BCN adjacency matrix corresponding to two TCs that ultimately merged, we expect to find few links between the TCs at first. But as the number of links between the TCs increases over time, the modularity-maximizing Louvain method will eventually merge the two TCs into a single TC. This is shown schematically in , where in general betweenness will increase on average with time as the two TCs merge. [0.24]{} ![image](figures/csa_1-1.png){width="\textwidth"}   [0.24]{} ![image](figures/csa_1-2.png){width="\textwidth"}   [0.24]{} ![image](figures/csa_1-3.png){width="\textwidth"}   [0.24]{} ![image](figures/csa_1-4.png){width="\textwidth"} In reality, there are always links between TCs, and the numbers and strengths of these links fluctuate over time. To develop a more quantitative description of the merging events outlined in , as well as splitting and continuing events, we focus on five events going from 1999 to 2000, shown in . ![image](figures/events.pdf){width="\textwidth"}

  **TC in 1999**     **event**   **TC in 2000**
  ------------------ ----------- ------------------
  1999.01            split       2000.02, 2000.03
  1999.01, 1999.02   merge       2000.03
  1999.04            continue    2000.06
  1999.11, 1999.12   merge       2000.15
  1999.13            continue    2000.16

  : The five evolution events from 1999 to 2000 in the BCN alluvial diagram that we will study quantitatively. The naming convention for a TC is that the four digits before ‘.’ give the year of the TC and the two digits after ‘.’ give its position in the diagram, starting with 00 for the bottom TC; the one just above the bottom is 01, and so on. In the left panel, we highlight the related TCs.[]{data-label="tab:events"}

### $\mathbf{1999.01} + \mathbf{1999.02} \rightarrow \mathbf{2000.03}$ {#mathbf1999.01-mathbf1999.02-rightarrow-mathbf2000.03 .unnumbered}

Let us consider the part of the BCN associated with these TCs. For example, for 1999.01 and 1999.02, we can see from (A) that connections within 1999.01 and 1999.02 are very dense, but there are also some links between the two TCs. In fact, we find 164 out of 1849 papers in 1999.01 with non-zero bibliographic coupling to 144 papers in 1999.02 (344 papers). [0.45]{} ![image](figures/csa_2.png){width="\textwidth"}   [0.45]{} ![image](figures/csa_10.png){width="\textwidth"} The natural question we then ask is: are the betweenness values of the 164 papers in 1999.01 that are coupled to 1999.02 larger than, equal to, or smaller than the betweenness values of the remaining 1685 papers in 1999.01 not coupled to 1999.02? Alternatively, if we think of the 164 papers as randomly sampled from the 1849 papers in 1999.01, are we sampling these 164 betweenness values in an unbiased fashion? 
To distinguish the different parts of the TC, we call all papers in 1999.01 which have coupling with papers in 1999.02 as $1999.01a$, and the rest of papers as $1999.01b$. For more detail analysis, we will divide $1999.01a$ and $1999.01b$ into $1999.01a\alpha$, $1999.01a\beta$, $1999.01b\alpha$, $1999.01b\beta$. $1999.01a\alpha$ consist of 17 papers in $1999.01a$ that do not have references in common with papers in $1999.01b$, $1999.01a\beta$ consist of 147 papers in $1999.01a$ that have references in common with papers in $1999.01b$, $1999.01b\alpha$ are 907 papers in $1999.01b$ that have references in common with papers in $1999.01a$ and $1999.01b\beta$ represents 778 papers in $1999.01b$ that do not have references in common with papers in $1999.01a$. ------------------ ---- ---- ---- 25 50 75 1999.01 $1999.01a$ $1999.01a\alpha$ $1999.01a\beta$ $1999.01b$ $1999.01b\alpha$ $1999.01b\beta$ $1999.02$ $1999.02a$ $1999.02b$ 1999.11 $1999.11a$ $1999.11b$ 1999.12 $1999.12a$ $1999.12b$ ------------------ ---- ---- ---- : The 25th, 50th and 75th percentiles of the betweenness of 1849 papers in 1999.01, the 164 papers in $1999.01a$, the 17 papers in $1999.01a\alpha$, the 147 papers in $1999.01a\beta$, the 1685 papers in $1999.01b$, the 907 papers in $1999.01b\alpha$, the 778 papers in $1999.01b\beta$; the 344 papers in 1999.02, the 144 papers in $1999.02a$, and the 200 papers in $1999.02b$; the 1014 papers in 1999.11, the 299 papers in $1999.11a$, the 715 papers in $1999.11b$ and the 988 papers in 1999.12, the 347 papers in $1999.12a$, the 641 papers in $1999.12b$.[]{data-label="tab:percentile"} In , we show the 25th, 50th and 75th percentiles of the papers in these smaller groups, compared to those of 1849 papers in 1999.01 and 344 papers in 1999.02. As we can see, the 25th, 50th, 75th percentile betweenness in the connecting parts ($1999.01a$ and $1999.02a$) are all higher than the 25th, 50th, 75th percentile betweenness in the non-connecting parts ($1999.01b$ and $1999.02b$). More importantly, these percentile betweenness are higher than the 25th, 50th, 75th percentile betweenness of the TCs 1999.01 and 1999.02 themselves. To test how significant these quartiles are in $1999.01a$, we randomly sampled 164 betweenness values from 1999.01 times, and measured the quartiles of these samples. When we draw random samples from a TC, the 25th percentile, the 50th percentile, and the 75th percentile, depends on the size of the TC. There is more variability in these quartiles in smaller samples than they are in larger samples. Therefore, in the test for statistical significance, the observed quartile has to be tested against different null model quartiles for samples of different sizes. To do this, we draw samples with a range of sizes from the same set of betweenness, and for a given quartile (25%, 50%, or 75%), fit the minimum quartile value against sample size to a cubic spline, and the maximum quartile value against sample size to a different cubic spline. With these two cubic splines, we can then check whether the observed quartile value for a sample of size $n$ is more than or less than the null model minimum or maximum using cubic spline interpolation. From the histograms shown in (A), we see that the betweenness quartiles of $1999.01a$ are statistically larger than random samples of the same size from 1999.01, at the level of $p < \num{d-6}$, which means the papers in $1999.01a$ have significantly larger betweenness than other papers in 1999.01. 
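The significance test used above can be sketched as a direct Monte Carlo comparison: draw many random same-size subsets of the TC's betweenness values and ask how often their quartiles reach the observed ones. The sketch below returns explicit one-sided p-values instead of the cubic-spline interpolation of the null-model extrema described in the text, and the toy numbers are illustrative only.

```python
import numpy as np

def quartile_test(subset, population, n_draws=10_000, seed=0):
    """One-sided Monte Carlo test: are the 25th/50th/75th percentiles of `subset`
    unusually large compared with random same-size samples from `population`?"""
    rng = np.random.default_rng(seed)
    population = np.asarray(population, dtype=float)
    k = len(subset)
    obs = np.percentile(subset, [25, 50, 75])
    null = np.array([np.percentile(rng.choice(population, size=k, replace=False),
                                   [25, 50, 75]) for _ in range(n_draws)])
    # p-value per quartile: fraction of random samples reaching the observed value
    pvals = (null >= obs).mean(axis=0)
    return dict(zip((25, 50, 75), pvals))

# Toy example: a "coupling part" whose betweenness values are shifted upwards.
rng = np.random.default_rng(2)
tc = rng.lognormal(mean=-6.0, sigma=1.0, size=1849)   # betweenness of a whole TC
coupled = rng.choice(tc, size=164, replace=False) * 1.5
print(quartile_test(coupled, tc, n_draws=2000))
```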
[0.3]{} ![image](figures/csa_3.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_4.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_5.png){width="\textwidth"} \ [0.3]{} ![image](figures/csa_6.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_7.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_8.png){width="\textwidth"} \ [0.3]{} ![image](figures/csa_9.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_11.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_12.png){width="\textwidth"} \ [0.3]{} ![image](figures/csa_13.png){width="\textwidth"}   [0.3]{} ![image](figures/csa_14.png){width="\textwidth"} We also checked the statistical significance of the larger betweenness values of $1999.02a$, against random samples of the same length (144) from 1999.02. From (B), we can derive that the quartiles of $1999.02a$ are only a little larger than the tails of the quartile histograms of the random samples, but their statistical significance is still at the level of $p < \num{d-6}$. ### $\mathbf{1999.01} \rightarrow \mathbf{2000.02} + \mathbf{2000.03}$ {#mathbf1999.01-rightarrow-mathbf2000.02-mathbf2000.03 .unnumbered} When a TC splits into two in the next year, we expect the links between two parts $a$ and $b$ in the TC to have thinned out to the point that the modularity $Q$ of the whole is lower than the modularities $Q_a$ and $Q_b$ of the two parts. However, in general, we would not know how to separate the TC into the two parts $a$ and $b$. Fortunately, for the $1999.01 \rightarrow 2000.02 + 2000.03$ splitting event, we also know the part $1999.01a$, which merged with $1999.02a$, became 2000.03. Therefore, we might naively expect $1999.01b$ to be the part that split from $1999.01$ to become 2000.02. If we test the quartiles of $1999.01b$, against random samples of the same size from 1999.01, we find the histograms shown in (C). As we can see, the betweenness quartiles of $1999.01b$ are quite a bit lower than the typical values in 1999.01, but this difference is statistically not as significant as the quartiles of $1999.01a$. Thinking about this problem more deeply, we realized that while papers in $1999.01b$ have no references in common with 1999.02, some of them do share common references with $1999.01a$. Let us call these sets of papers $1999.01a\alpha$ (papers do not have references in common with papers in $1999.01b$), $1999.01a\beta$ (papers have references in common with papers in $1999.01b$), $1999.01b\alpha$(papers have references in common with papers in $1999.01a$), and $1999.01b\beta$ (papers that do not have references in common with papers in $1999.01a$). In (D), we learn from the histograms that the betweenness quartiles of $1999.01b\alpha$ are indistinguishable with random samples of the same size from 1999.01. On the other hand, from the histograms in (E), we find out that while the lower betweenness quartile of $1999.01b\beta$ is indistinguishable with the random samples of the same size from 1999.01, its median and upper quartile are both on the low sides of the random sample distributions. This suggests a split of 1999.01 to (1999.01a + $1999.01b\alpha$) + $1999.01b\beta$. Just to be safe, we also checked the betweenness quartiles of $1999.01a\alpha$ and $1999.01a\beta$, against random samples of the same sizes from 1999.01. As we can see from (F) and (G), the lower quartiles and medians are lower than those obtained from random samples, but the upper quartiles are decidedly higher. 
However, the difference between $1999.01a\alpha$ and $1999.01a\beta$ is not as obvious as difference between $1999.01b\alpha$ and $1999.01b\beta$, one possible reason is the smaller sample size (17, 147 vs. 907, 778). Again, these results are consistent with the picture that the rise in betweenness in $1999.01a$ is driving the merging with $1999.02a$, while the fall in betweenness in $1999.01b\beta$ is driving a splitting inside 1999.01. ### $\mathbf{1999.11} + \mathbf{1999.12} \rightarrow \mathbf{2000.15}$ {#mathbf1999.11-mathbf1999.12-rightarrow-mathbf2000.15 .unnumbered} Although a small part split off from each of 1999.11 and 1999.12, the main event associated with the two TCs was a symmetric merging. Looking again into the relevant parts of the BCN, we found 299 out of 1014 papers in 1999.11 coupled to 347 out of 988 papers in 1999.12, and we call them $1999.11a$ and $1999.12a$, respectively. As we can see from the histograms in (H) and (J), the betweenness quartiles in $1999.11a$ and $1999.12a$ are significantly higher than one would expect from random samples of 1999.11 and 1999.12. Simultaneously, the betweenness quartiles in $1999.11b$ and $1999.12b$ are significantly lower than in random samples of 1999.11 and 1999.12 (see (I) and (K)). Therefore, what we are seeing here might be the early warning signals of merging, as well as that of the asymmetric splitting. ### $\mathbf{1999.04} \rightarrow \mathbf{2000.06}$ and $\mathbf{1999.13} \rightarrow \mathbf{2000.16}$ {#mathbf1999.04-rightarrow-mathbf2000.06-and-mathbf1999.13-rightarrow-mathbf2000.16 .unnumbered} So far we have learnt that a decrease in betweenness within a TC signals a possible split, whereas an increase in betweenness of the part of the TC coupled to another TC signals a merger between the two TCs. For this story to be consistent, we must not see these signals in the continuing events $1999.04 \rightarrow 2000.06$ and $1999.13 \rightarrow 2000.16$. However, if we go through the full BCN, we find that 370 out of 389 papers in 1999.04 and 308 out of 319 papers in 1999.13 are coupled to papers outside of these TCs, which suggests the possibility of merging or splitting. --------- ----- ---- ---- ---- ----- ---- ---- ---- 25 50 75 25 50 75 1999.00 12 1 - - 1999.01 56 6 1999.02 6 2 - 1999.03 25 0 - - - 1999.04 - - - - 8 1999.05 179 4 1999.06 110 40 1999.07 29 44 1999.08 63 17 1999.09 49 99 1999.10 53 254 1999.11 89 71 1999.12 53 39 1999.13 9 - - - - 1999.14 62 210 1999.15 17 176 b 88 27 --------- ----- ---- ---- ---- ----- ---- ---- ---- : The distributions of betweennesses of papers in 1999.04 and 1999.13 that share common references with the other TCs in 1999 (1999.00 to 1999.15). Four columns below ‘1999.04’ and ‘1999.13’ denote: the first column shows how many papers have common references with the other TCs, while the second, third, and fourth column show the lower, median, and upper quartile values of betweennesses of these papers, respectively. For example, there are 25 papers in 1999.04 that share common references with papers in 1999.03, and the betweennesses of these papers have a lower quartile value of , a median value of , and an upper quartile value of . Similarly, there are 254 papers in 1999.13 that share common references with papers in 1999.10, and the betweennesses of these papers have a lower quartile value of , a median value of , and an upper quartile value of . 
The bottom row ‘b’ represents 1999.04b and 1999.13b respectively, i.e. the papers in 1999.04 and 1999.13 that have no references in common with papers in other TCs. A betweenness value in red means that it is larger than the maximum of the corresponding quartile distribution of random samples, and a betweenness value in blue denotes that it is smaller than the minimum of the corresponding random samples.[]{data-label="tab:continue_percentile"} However, as we can conclude from , while the lower betweenness quartiles of the coupling parts of 1999.04 and 1999.13 with other TCs may be significantly larger than those of random samples of the two TCs, the highest betweenness quartiles are never significantly larger. Therefore, at the same level of confidence that we have set for the precursors of merging between 1999.01 and 1999.02, as well as between 1999.11 and 1999.12, we have to say that there are no significant precursors for 1999.04 and 1999.13 to merge with other TCs. What about splitting then? A TC is likely to split into two if at least one of its two parts has reduced betweenness. We see in that the betweenness values in the coupling parts of 1999.04 and 1999.13 are not significantly lower than those of random samples. Therefore, we look at the non-coupling part, i.e. papers in 1999.04 and 1999.13 which have no references in common with papers in other TCs, but which may have common references with other papers in the same TC. We call these non-coupling parts $1999.04b$ and $1999.13b$, respectively (the bottom row in ). Only the top betweenness quartile of $1999.04b$ falls below that of random samples from 1999.04 in . Therefore, the early warning for a splitting event in the next year is not strong enough. For $1999.13b$, on the other hand, all three betweenness quartiles fall below those of random samples from 1999.13, even after we have accounted for the small size of $1999.13b$. This suggests that the probability of a splitting event next year is high, but 1999.13 continued on to 2000.16, which thereafter continued to 2001 without merging or splitting. This might be because additional conditions, such as the TC being sufficiently large, must be satisfied before a splitting can occur. Discussion {#discussion .unnumbered} ========== During the past two decades, researchers have made a lot of effort to understand the system of science. Many problems have been solved; however, the understanding of the interactions between different fields is still limited. Investigating the temporal networks (BCN, CN) and their community structures, we are able to measure and quantify the complex interactions between different fields, particularly in physics, over time. Naturally, we would like to have predictive power based on this picture. However, the correlation between network structure and evolution events is nonlinear and complex. Therefore we turn to machine learning techniques, which have shown great power in solving predictive problems that can hardly be solved using traditional statistical methods. To our knowledge, this is the first study that utilizes both machine learning and network science approaches to predict the future of science at the community level. To be able to identify changes in TCs, we needed to define the time windows used for network creation and community detection. The natural choice for bibliographical data was to use single years, since the publishing process may last many months. Obviously, another granularity, such as multiple years (e.g. 2 or 5 years), may be considered. In our approach, i.e. 
both for the BCN and the CN, every citation has the same importance. However, there are other concepts, such as fractional counting of citations [@Leydesdorff2010], which assumes that the impact of each citation is inversely proportional to the number of references in the citing document. Additionally, citations can be weighted differently depending on, e.g., the quality of the journal. For the CN, we calculated the similarity between groups in consecutive time windows in two ways: (i) using the plain relative overlap measure, and (ii) using the inclusion measure based on Social Position. The idea was to enrich the evolution data with structural information about the relations between nodes. It turned out that both measures provided similar labelling, but evolution tracking with the Social Position information produced slightly better initial predictions. Therefore, the study was continued only for the inclusion measure; see the SI for more information. We decided to analyse in more depth only one feature describing the structural profile of TCs, namely node betweenness. This was primarily due to the limited amount of resources and the complexity of the analyses. The entire process required much human assistance and could not have been easily automated. In our experiments, we utilized either the raw, imbalanced data sets or artificially flattened (balanced) ones. However, depending on the purpose of the study, we can bias the analysis towards classes we are more interested in, e.g. splitting. This can be achieved either by appropriate balancing (sampling for the learning set) or by reformulating the problem as a binary question: is a split expected (true) or not (false)? As of now, the betweenness analysis is still limited to several case studies; in the future, a more rigorous framework will be desirable. The idea of analysing science by discovering knowledge changes is general and can be applied to all bibliographical data containing citations. We focus solely on APS journals; however, papers indexed by PubMed, Web of Science or Google Scholar may also be studied. Methods {#methods .unnumbered} ======= The entire analytical process consists of several steps that are primarily defined by the Group Evolution Prediction (GEP) framework. First, the bibliographic coupling network (BCN) and co-citation network (CN) are extracted from the references placed in the papers from a given time window, see , and this is carried out separately for each period. As a result, we get a time series of BCNs/CNs. Next, paper groups called topical clusters (TCs) are extracted using the Louvain clustering method, independently for each BCN/CN in the time series. Each group is described by a set of predictive features. Having TCs for consecutive periods, we were able to identify changes in TC evolution using the Group Evolution Discovery (GED) method, which appropriately labels the TC changes, see below. Independently, the feature ranking and its validation were performed to find the most valuable TC measures. Based on this ranking, a structural measure, node betweenness, was selected for more in-depth study as an early signal of splitting or merging. GEP method {#gep-method .unnumbered} ---------- The Group Evolution Prediction (GEP) method is the first generic approach for the prediction of the evolution of groups[@Saganowski2017]; in our case, groups correspond to TCs. 
The GEP process consists of six main steps: (1) time window definition, (2) temporal network creation, (3) group detection, (4) group evolution tracking, (5) evolution chain identification and feature calculation, and (6) classification using machine learning techniques. Thanks to its adaptable character, we were able to apply it differently to the BCN and the CN. For the group (TC) detection in both networks, we applied the Louvain method [@Blondel2008]. The group evolution tracking was performed with the GED method (see below), but we used a different similarity measure for each network (BCN and CN), see below. The set of features describing a group in a given time window was adjusted to our networks, as some of the features defined in the GEP method were not applicable in our case. We also introduced some new, dedicated measures appropriate for bibliographical data, see SI for the complete list. Finally, we applied the Auto-WEKA tool to find the best predictive model and its parameters from a wide range of possible solutions. The commonly known average F-measure was used as a prediction performance measure.

Bibliographic coupling network (BCN) and co-citation network (CN) {#bibliographic-coupling-network-bcn-and-co-citation-network-cn .unnumbered}
-----------------------------------------------------------------

In the BCN and CN, nodes represent papers and undirected but weighted edges denote the bibliographic coupling strengths and co-citation strengths, respectively. That is, if two papers share $w$ common references, the BCN edge between them has a weight of $w$. For example, papers 1 and 2 in share three citations: A, B, and C, whereas papers 3 and 4 commonly cite only one paper, E. On the other hand, if two papers are cited together by $w'$ papers, the edge between them in the CN receives weight $w'$. Papers A and B are cited together by two other papers, 1 and 2, but papers B and C by three, i.e. additionally by paper 3. Both BCN and CN are temporal networks, in which the nodes are all papers published within a specific time window (BCN) or papers cited within a given time window (CN). We assume that a reasonable time window for bibliographical data is one year, which facilitates the analysis of changes in scientific knowledge, i.e. changes in topical clusters year by year. For the BCN, only the giant component, which in most cases contains 99% of the whole BCN, is considered for the TC detection and evolution analysis. For the CN, we do not use all papers cited in the given time window because most of them are cited only a small number of times, and thus they have little influence on the broader knowledge evolution. Therefore, we rank all available $N$ papers $\{p_1,p_2,\ldots,p_N\}$ in descending order of the number of times they are cited in this time window (year): $\{f_1,f_2,\ldots,f_N\}$, $f_1 \geq f_2 \geq \ldots \geq f_N$. Next, we choose the top $n$ papers $\{p_1,p_2,\ldots,p_n\}$ that together gather $\frac{1}{4}$ of all citations, i.e. such that $n < N$ is the smallest integer satisfying $\sum_{i=1}^{n}f_i \geq \frac{1}{4}\sum_{j=1}^{N}f_j$. The data we used in this paper is the APS data set, consisting of about half a million publications between 1893 and 2013 and six million citation relations among them [@APS_dataset]. ![The process of building a Bibliographical Coupling Network (BCN) and Co-citation Network (CN) from the citation bipartite network for a given period (year $t$).
Both BCN and CN are undirected and weighted; the weights denote the number of shared citations (BCN) or co-citing papers (CN). Separate topical clusters are extracted for BCN ($C_1$, $C_2$) and CN ($C_3$, $C_4$). Nodes with numbers are papers from the given period being considered and nodes with letters are their references.[]{data-label="fig:Fig_method_1"}](figures/BCN_and_CN.png){width="\linewidth"}

Intimacy indices {#intimacy-indices .unnumbered}
----------------

To analyse the evolution of TCs, we need to match them across consecutive years. The set of cited papers overlaps to a large extent year by year, so for the CN we can use the regular approach proposed together with the GED method, see below and Brodka *et al.*[@Brodka2013a]. For the BCN, however, there is no overlap at all between papers published in successive years, because every paper can be published only once and in only one year. Even if we do not have corresponding papers in TCs from two BCNs, i.e. two years, the papers’ references overlap one another. Therefore, we can measure the similarity of their reference pools to reflect their inheritance. For that purpose, we introduced the *forward intimacy index* and *backward intimacy index* in Liu *et al.*[@Liu2017a]. The idea behind the intimacy indices is that the references related to a particular topic change gradually. The *forward intimacy index* $I^{f}_{mn}$ and the *backward intimacy index* $I^{b}_{mn}$ between TCs $C_m^t$ in year $t$ and $C_n^{t+1}$ in year $t+1$ are defined as follows: $$\label{eq:intimate_index} \begin{aligned} I^f_{mn} &= \sum_{i} \frac{N\left(R_i, \mathcal{R}_n^{t+1}\right)}{N\left(R_i, \mathcal{R}^{t+1}\right)} \frac{N\left(R_i, \mathcal{R}_m^t\right)}{L\left(\mathcal{R}_m^t\right)}, \\ I^b_{mn} &= \sum_{i} \frac{N\left(R_i, \mathcal{R}_m^t\right)}{N\left(R_i, \mathcal{R}^{t}\right)} \frac{N\left(R_i, \mathcal{R}_n^{t+1}\right)}{L\left(\mathcal{R}_n^{t+1}\right)}. \end{aligned}$$ Here the TCs at $t$ and $t+1$ are $\mathcal{C}^t = \left\{C_1^t, ...,C_m^t,..., C_u^t\right\}$ and $\mathcal{C}^{t+1} = \left\{C_1^{t+1}, ...,C_n^{t+1},..., C_v^{t+1}\right\}$, and we denote the references cited by papers in $C^t_m$ and $C^{t+1}_n$ as [$\mathcal{R}_m^t = \mathcal{R}(C_m^t) = \left[R_{m1}, ..., R_{mp}\right]$ and $\mathcal{R}_n^{t+1} = \mathcal{R}(C_n^{t+1}) = \left[R_{n1}, ..., R_{nq}\right]$]{}; and $\mathcal{R}^t = \left\{\mathcal{R}_1^t, ...,\mathcal{R}_m^t,...\right\}$. $N(element, list)$ is the number of times $element$ occurs in $list$, and $L(list)$ is the length of $list$. For more details and examples of intimacy indices, please refer to Liu *et al.* [@Liu2017a].

GED method {#ged-method .unnumbered}
----------

The Group Evolution Discovery (GED) method [@Brodka2013a] was used for tracking group evolution, both for historical cases (to learn the classifier) and for testing cases (to validate the classification results). The GED method makes use of the similarity between groups in consecutive years, as well as their sizes, to label one of six event types: continuing, dissolving, merging, splitting, growing, shrinking. However, we have adapted the GED method to label only four types of events: continuing, dissolving, merging, splitting, as these are the most important to us. The other two (growing and shrinking) are covered by continuing. In general, the GED method allows us to use various metrics as a similarity measure between groups. Therefore, the intimacy indices defined in were used for the BCN to match similar groups in the consecutive time windows.
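To make the definition of the intimacy indices above concrete, the following is a minimal Python sketch; it is not the authors' implementation, and the function and argument names are ours. Reference lists are treated as plain Python lists with repetitions, so that $N(\cdot,\cdot)$ and $L(\cdot)$ reduce to counting, and $\mathcal{R}^{t}$ is simply the concatenation of the reference lists of all TCs in year $t$.

```python
from collections import Counter

def intimacy_indices(refs_m_t, refs_n_t1, all_refs_t, all_refs_t1):
    """Forward/backward intimacy indices between TC C_m^t and C_n^{t+1}.

    Each argument is a list of references (with repetitions), e.g.
    refs_m_t contains all references cited by papers in C_m^t and
    all_refs_t contains all references cited by any TC in year t.
    """
    cnt_m_t = Counter(refs_m_t)
    cnt_n_t1 = Counter(refs_n_t1)
    cnt_all_t = Counter(all_refs_t)
    cnt_all_t1 = Counter(all_refs_t1)
    L_m_t, L_n_t1 = len(refs_m_t), len(refs_n_t1)

    # I^f_{mn}: sum over references of C_m^t (terms with zero count vanish)
    forward = sum(
        (cnt_n_t1[r] / cnt_all_t1[r]) * (c / L_m_t)
        for r, c in cnt_m_t.items() if cnt_all_t1[r] > 0
    )
    # I^b_{mn}: sum over references of C_n^{t+1}
    backward = sum(
        (cnt_m_t[r] / cnt_all_t[r]) * (c / L_n_t1)
        for r, c in cnt_n_t1.items() if cnt_all_t[r] > 0
    )
    return forward, backward
```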
However, the original GED inclusion measures were used for the CN. This means that the similarity between two groups from two successive time windows is reflected by the inclusion measure, which is calculated for two scenarios: inclusion $I(C_n^t, C_m^{t+1})$ of a group $C_n^t$ from time window $t$ in another group $C_m^{t+1}$ from time window $t+1$ (forward, ), and inclusion $I(C_m^{t+1}, C_n^t)$ of this second group $C_m^{t+1}$ from $t+1$ in the first group $C_n^t$ from $t$ (backward, ). The inclusion measure makes use of the Social Position $SP(p)$, which is a kind of weighted PageRank. It denotes the importance of paper $p$, in terms of being cited, among all other papers[@Brodka2009]. The inclusions for the CN are defined as follows: $$\label{eq:forward_inclusion} I(C_n^t, C_m^{t+1})=\overbrace{\frac{\Vert C_n^t \cap C_m^{t+1} \Vert}{\Vert C_n^t \Vert}}^{\text{group quantity}} \cdot \underbrace{\frac{\sum\limits_{p \in (C_n^t \cap C_m^{t+1})} SP(p)}{\sum\limits_{p \in (C_n^t)} SP(p)}}_{\text{group quality}} \cdot 100\%,$$ $$\label{eq:backward_inclusion} I(C_m^{t+1}, C_n^t) = \overbrace{\frac{\Vert C_m^{t+1} \cap C_n^t \Vert}{\Vert C_m^{t+1} \Vert}}^{\text{group quantity}} \cdot \underbrace{\frac{\sum\limits_{p \in (C_m^{t+1} \cap C_n^t)} SP(p)}{\sum\limits_{p \in (C_m^{t+1})} SP(p)}}_{\text{group quality}} \cdot 100\%.$$ If both inclusions (CN) or both intimacy indices (BCN) are greater than the percentage thresholds alpha and beta (the only parameters in this method), the method labels the event as continuing. If at least one inclusion or one intimacy index exceeds its threshold, a splitting or merging event is considered, and the proper event is assigned depending on the number of similar groups in $t$ and $t+1$. If both inclusions or both intimacy indices are below the thresholds, i.e. the group has no corresponding group in the next time window, the dissolving event is assigned.

Feature Ranking {#feature-ranking .unnumbered}
---------------

Rankings of the most prominent features were obtained by repeating the feature selection 1000 times using a basic evolutionary algorithm [@Yang1998], as proposed in Saganowski *et al.*[@Saganowski2017]. The rankings were obtained for a 30-year span (1981-2010). Next, only the top 10 features were selected to describe TCs in two additional years (2010-2012) and to predict TC evolution. The results revealed the superiority of feature selection over the raw approach employing all features.

Acknowledgements {#acknowledgements .unnumbered}
================

W.L. thanks the Nanyang Technological University for supporting him through a research scholarship. S.A.C. acknowledges support from the Singapore Ministry of Education Academic Research Fund Tier 2, under grant number MOE2017-T2-2-075.
S.S. and P.K. received partial support from the National Science Centre, Poland, project no. 2016/21/B/ST6/01463, from the European Union’s Marie Skłodowska-Curie Program under grant agreement no. 691152, and from the Polish Ministry of Science and Higher Education under grant agreement no. 3628/H2020/2016/2.

Author contributions statement {#author-contributions-statement .unnumbered}
==============================

W.L., S.S., S.A.C. and P.K. conceived the study. W.L., S.S. and S.A.C. designed and performed the research. W.L., S.S. and S.A.C. wrote the manuscript. W.L., S.S., S.A.C. and P.K. reviewed and approved the manuscript.

Additional information {#additional-information .unnumbered}
======================

**Competing financial interests:** The authors declare no competing financial interests.

**Supplementary Information**

The alpha and beta thresholds for event labelling {#the-alpha-and-beta-thresholds-for-event-labelling .unnumbered}
=================================================

Two groups in consecutive time windows are considered similar if at least one of their inclusion measures is greater than the alpha or beta parameter. In other words, the alpha and beta parameters are thresholds which have to be satisfied to assign an event between two groups. The theoretical range of values for alpha and beta is between 0% and 100%. However, the most common values are selected from the range from 30% to 70%, depending on the density of the network and the nodes’ fluctuation year by year. In general, the selection of parameters should reflect the needs of researchers. For example, one may choose a very high value (e.g. 80%) in order to preserve only very similar groups. In another case, it might be necessary to set a very low value, e.g. 10%, if the network is sparse or the fluctuation is high. In our study, we ran the GED method with the alpha and beta parameters varying from 5% to 100%, to see how the number of events varies. Our goal was to have at least one event assigned to each TC. As the splitting and merging events involve several groups, we aimed to have on average slightly more than one event per TC. With this assumption, we selected 30% for both the alpha and beta parameters in the case of BCN, and 10% in the case of CN. These values produced in total 479 events for 430 groups for BCN, and 492 events for 457 groups for CN.

Correlation between overlap measure and inclusion measure {#correlation-between-overlap-measure-and-inclusion-measure .unnumbered}
=========================================================

For the BCN, we use the forward and backward intimacy indices to measure the closeness between TCs in consecutive time windows (years). For the CN, we considered two types of measure: (i) a simple overlap measure of two groups (the relative fraction of common members), and (ii) an overlap of two groups enriched with information about the importance of the common members. The latter is suggested by the GED method authors, who named their similarity measure the inclusion measure. One way to evaluate the importance of TC members is to use node centrality measures to rank them within the group. In our work, we are using the Social Position measure [@Brodka2009] (as suggested in the GED method), an idea based on the PageRank algorithm [@Page1999]. Saganowski *et al.*[@Saganowski2012] found that using a richer similarity measure allows us to track group evolution more reliably.
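To illustrate how the inclusion measures and the alpha/beta thresholds described above combine into event labels, here is a rough Python sketch. networkx's PageRank is used only as a stand-in for the Social Position measure (an assumption of this sketch, not the definition of Brodka *et al.*[@Brodka2009]), and the split/merge branch is left to the subsequent counting of similar groups, as in the GED method.

```python
import networkx as nx

def inclusion(G, group_a, group_b):
    """GED-style inclusion I(A, B): overlap of A and B weighted by node importance in A.

    PageRank on the co-citation graph G is used here as a stand-in for the
    Social Position measure (an assumption made for this sketch).
    """
    importance = nx.pagerank(G)                       # paper -> importance score
    common = group_a & group_b                        # groups are sets of nodes of G
    quantity = len(common) / len(group_a)
    quality = sum(importance[p] for p in common) / sum(importance[p] for p in group_a)
    return 100.0 * quantity * quality

def label_event(G_t, G_t1, C_t, C_t1, alpha=10.0, beta=10.0):
    """Label the relation between one TC at time t and one TC at t+1 (CN thresholds: 10%)."""
    fwd = inclusion(G_t, C_t, C_t1)     # I(C^t, C^{t+1})
    bwd = inclusion(G_t1, C_t1, C_t)    # I(C^{t+1}, C^t)
    if fwd >= alpha and bwd >= beta:
        return "continuing"
    if fwd >= alpha or bwd >= beta:
        # resolved as splitting or merging by counting similar groups in t and t+1
        return "splitting_or_merging"
    return "dissolving"                 # no corresponding group found for this pair
```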
To better understand the difference between the simple overlap measure and the inclusion measure, we compared the values obtained with both measures in . It turned out that the inclusion measure is on average 20% lower than the simple overlap measure, and the corresponding values, i.e. 30% for the simple overlap and 10% for the inclusion measure, produce roughly the same number of evolution events. However, the more complex version of the similarity measure (i.e. the inclusion measure) provided slightly better initial prediction results. Therefore, we finally utilized the inclusion measure in our calculations for the CN.

[0.45]{} ![image](figures/FigureS2-1.pdf){width="\textwidth"}   [0.45]{} ![image](figures/FigureS2-2.pdf){width="\textwidth"}

Alluvial diagram for CN {#alluvial-diagram-for-cn .unnumbered}
=======================

Like the bibliographic coupling network (BCN), the co-citation network (CN) can also be visualized in the form of an alluvial diagram. The groups in a CN represent the papers from the past that are coherent and related to a certain topic that stimulates the present research lines.

The list of features used in the study {#the-list-of-features-used-in-the-study .unnumbered}
======================================

As we mentioned in **Future events prediction**, each observation contained 77 features (preselected from the initial 100). The full list of 100 features is shown in . Many features in this list were proposed for directed social networks and are therefore inapplicable to our undirected BCN and CN. The symbol $+$ indicates that a feature was used in the BCN prediction, while the symbol $\ast$ indicates that it was used in the CN prediction.

& sum\_group\_degree\_in & The sum of indegree[@Freeman1978] of nodes belonging to the community calculated within the community. Indegree is a node measure defining the number of connections directed to the node\ & avg\_group\_degree\_in & The average value of indegree of nodes belonging to the community calculated within the community\ & min\_group\_degree\_in & The minimum value of indegree of nodes belonging to the community calculated within the community\ & max\_group\_degree\_in & The maximum value of indegree of nodes belonging to the community calculated within the community\ & sum\_group\_degree\_out & The sum of outdegree[@Freeman1978] of nodes belonging to the community calculated within the community. Outdegree is a node measure determining the number of connections outgoing from the node\ & avg\_group\_degree\_out & The average value of outdegree of nodes belonging to the community calculated within the community\ & min\_group\_degree\_out & The minimum value of outdegree of nodes belonging to the community calculated within the community\ & max\_group\_degree\_out & The maximum value of outdegree of nodes belonging to the community calculated within the community\ & sum\_group\_degree\_total$+\ast$ & The sum of total degree of nodes belonging to the community calculated within the community.
Total degree is the sum of indegree and outdegree\ & avg\_group\_degree\_total$+\ast$ & The average value of total degree of nodes belonging to the community calculated within the community\ & min\_group\_degree\_total$+\ast$ & The minimum value of total degree of nodes belonging to the community calculated within the community\ & max\_group\_degree\_total$+\ast$ & The maximum value of total degree of nodes belonging to the community calculated within the community\ & sum\_group\_betweenness$+\ast$ & The sum of betweenness[@Freeman1978] of nodes belonging to the community calculated within the community. Betweenness is a node measure describing the number of the shortest paths from all nodes to all others that pass through that node\ & avg\_group\_betweenness$+\ast$ & The average value of betweenness of nodes belonging to the community calculated within the community\ & min\_group\_betweenness$+\ast$ & The minimum value of betweenness of nodes belonging to the community calculated within the community\ & max\_group\_betweenness$+\ast$ & The maximum value of betweenness of nodes belonging to the community calculated within the community\ & sum\_group\_closeness$+\ast$ & The sum of closeness[@Freeman1978] of nodes belonging to the community calculated within the community. Closeness is a node measure defined as the inverse of the farness, which in turn, is the sum of distances to all other nodes\ & avg\_group\_closeness$+\ast$ & The average value of closeness of nodes belonging to the community calculated within the community\ & min\_group\_closeness$+\ast$ & The minimum value of c of nodes belonging to the community calculated within the community\ & max\_group\_closeness$+\ast$ & The maximum value of closeness of nodes belonging to the community calculated within the community\ & sum\_group\_eigenvector$+\ast$ & The sum of eigenvector[@Bonacich1972] of nodes belonging to the community calculated within the community. Eigenvector is a node measure indicating the influence of a node in the network\ & avg\_group\_eigenvector$+\ast$ & The average value of eigenvector of nodes belonging to the community calculated within the community\ & min\_group\_eigenvector$+\ast$ & The minimum value of eigenvector of nodes belonging to the community calculated within the community\ & max\_group\_eigenvector$+\ast$ & The maximum value of eigenvector of nodes belonging to the community calculated within the community\ & avg\_group\_eccentricity$+\ast$ & The average value of eccentricity[@Harary1969] of nodes belonging to the community calculated within the community. 
Eccentricity of a node is its shortest path distance from the farthest other node in the graph\ & min\_group\_eccentricity$+\ast$ & The minimum value of eccentricity of nodes belonging to the community calculated within the community\ & max\_group\_eccentricity$+\ast$ & The maximum value of eccentricity of nodes belonging to the community calculated within the community\ & sum\_network\_degree\_in & The sum of indegree of nodes belonging to the community calculated within the network\ & avg\_network\_degree\_in & The average value of indegree of nodes belonging to the community calculated within the network\ & min\_network\_degree\_in & The minimum value of indegree of nodes belonging to the community calculated within the network\ & max\_network\_degree\_in & The maximum value of indegree of nodes belonging to the community calculated within the network\ & sum\_network\_degree\_out & The sum of outdegree of nodes belonging to the community calculated within the network\ & avg\_network\_degree\_out & The average value of outdegree of nodes belonging to the community calculated within the network\ & min\_network\_degree\_out & The minimum value of outdegree of nodes belonging to the community calculated within the network\ & max\_network\_degree\_out & The maximum value of outdegree of nodes belonging to the community calculated within the network\ & sum\_network\_degree\_total$+\ast$ & The sum of total degree of nodes belonging to the community calculated within the network\ & avg\_network\_degree\_total$+\ast$ & The average value of total degree of nodes belonging to the community calculated within the network\ & min\_network\_degree\_total$+\ast$ & The minimum value of total degree of nodes belonging to the community calculated within the network\ & max\_network\_degree\_total$+\ast$ & The maximum value of total degree of nodes belonging to the community calculated within the network\ & sum\_network\_betweenness $+\ast$& The sum of betweenness of nodes belonging to the community calculated within the network\ & avg\_network\_betweenness$+\ast$ & The average value of betweenness of nodes belonging to the community calculated within the network\ & min\_network\_betweenness$+\ast$ & The minimum value of betweenness of nodes belonging to the community calculated within the network\ & max\_network\_betweenness$+\ast$ & The maximum value of betweenness of nodes belonging to the community calculated within the network\ & sum\_network\_closeness$+\ast$ & The sum of closeness of nodes belonging to the community calculated within the network\ & avg\_network\_closeness$+\ast$ & The average value of closeness of nodes belonging to the community calculated within the network\ & min\_network\_closeness$+\ast$ & The minimum value of closeness of nodes belonging to the community calculated within the network\ & max\_network\_closeness$+\ast$ & The maximum value of closeness of nodes belonging to the community calculated within the network\ & sum\_network\_eigenvector$+\ast$ & The sum of eigenvector of nodes belonging to the community calculated within the network\ & avg\_network\_eigenvector$+\ast$ & The average value of eigenvector of nodes belonging to the community calculated within the network\ & min\_network\_eigenvector$+\ast$ & The minimum value of eigenvector of nodes belonging to the community calculated within the network\ & max\_network\_eigenvector$+\ast$ & The maximum value of eigenvector of nodes belonging to the community calculated within the network\ & 
avg\_group\_coefficient[@Wasserman1994]$+\ast$ & The average of the local clustering coefficients of all the nodes in the community\ & avg\_network\_coefficient[@Wasserman1994]$+\ast$ & The average of the local clustering coefficients of all the nodes in the network\ & group\_size$+\ast$ & The number of nodes in the group\ & group\_density[@Wasserman1994]$+\ast$ & The number of connections between nodes in the group in relation to all possible connections between them\ & group\_cohesion[@White2001]$+\ast$ & The vertex connectivity of the community\ & group\_coefficient\_global[@Wasserman1994]$+\ast$ & The ratio of the triangles and the connected triples in the community\ & group\_reciprocity[@Newman2010] & A fraction of edges that are reciprocated within the community\ & group\_leadership[@Freeman1978]$+\ast$ & A measure describing centralization in the community (the largest value is for a star network)\ & neighborhood\_out & The number of nodes outside the community that have incoming connection from the nodes inside the community divided by the number of nodes in the community\ & neighborhood\_in & The number of nodes outside the community that have outgoing connection to the nodes inside the community divided by the number of nodes in the community\ & neighborhood\_all$+\ast$ & The number of nodes outside the community that are connected to the nodes inside the community divided by the number of nodes in the community\ & group\_adhesion[@White2001]$+\ast$ & The minimum number of edges needed to be removed to obtain a community which is not strongly connected\ & alpha[@Brodka2013a] & The GED inclusion measure of group $G_i$ from time window $T_n$ in group $G_j$ from $T_{n+1}$\ & beta[@Brodka2013a] & The GED inclusion measure of group $G_j$ from time window $T_{n+1}$ in group $G_i$ from $T_n$\ & network\_ratio\_size$+\ast$ & The ratio of *group\_size* to *network\_size*\ & network\_ratio\_density$+\ast$ & The ratio of *group\_density* to *network\_density*\ & network\_ratio\_cohesion$+\ast$ & The ratio of *group\_cohesion* to *network\_cohesion*\ & network\_ratio\_coefficient\_global$+\ast$ & The ratio of *group\_coefficient\_global* to *network\_coefficient\_global*\ & network\_ratio\_coefficient\_average$+\ast$ & The ratio of *group\_clustering\_coefficient* to *network\_clustering\_coefficient*\ & network\_ratio\_reciprocity & The ratio of *group\_reciprocity* to *network\_reciprocity*\ & network\_ratio\_leadership$+\ast$ & The ratio of *group\_leadership* to *network\_leadership*\ & network\_ratio\_eccentricity$+\ast$ & The ratio of *avg\_group\_eccentricity* to *network\_avg\_eccentricity*\ & network\_ratio\_adhesion$+\ast$ & The ratio of *group\_adhesion* to *network\_adhesion*\ & **phys\_rev**$\ast$ & **The number of articles belonging to the group that were published in the Physical Review journal**\ & **phys\_rev\_a**$+\ast$ & **The number of articles belonging to the group that were published in the Physical Review A journal**\ & **phys\_rev\_b**$+\ast$ & **The number of articles belonging to the group that were published in the Physical Review B journal**\ & **phys\_rev\_c**$+\ast$ & **The number of articles belonging to the group that were published in the Physical Review C journal**\ & **phys\_rev\_d**$+\ast$ & **The number of articles belonging to the group that were published in the Physical Review D journal**\ & **phys\_rev\_e**$+\ast$ & **The number of articles belonging to the group that were published in the Physical Review E journal**\ & **phys\_rev\_lett**$+\ast$ & 
**The number of articles belonging to the group that were published in the Physical Review Letters journal**\ & **phys\_rev\_stab**$+\ast$ & **The number of articles belonging to the group that were published in the Physical Review STAB journal**\ & **phys\_rev\_stper**$+$ & **The number of articles belonging to the group that were published in the Physical Review STPER journal**\ & **physics**$\ast$ & **The number of articles belonging to the group that were published in the Physics journal**\ & **rev\_mod\_phys**$+\ast$ & **The number of articles belonging to the group that were published in the Review of Modern Physics journal**\ & **sum\_group\_age**$+\ast$ & **The sum of age of articles belonging to the group. In the co-reference network the age of an article is the average age of the articles it references to. In the co-citation network the age of an article is the age of the articles being cited.**\ & **avg\_group\_age**$+\ast$ & **The average age of articles belonging to the group**\ & **min\_group\_age**$+\ast$ & **The minimum age of articles belonging to the group**\ & **max\_group\_age**$+\ast$ & **The maximum age of articles belonging to the group**\ & **network\_ratio\_avg\_group\_age**$+\ast$ & **The ratio of avg\_group\_age to the average age of all articles in the network**\ & time\_window$+\ast$ & The number of time window from which the community instance was obtained\ & network\_size$+\ast$ & The number of nodes in the network\ & network\_density$+\ast$ & The number of connections between nodes in the network in relation to all possible connections between them\ & network\_cohesion$+\ast$ & The vertex connectivity of the network\ & network\_coefficient\_global$+\ast$ & The ratio of the triangles and the connected triples in the network\ & network\_coefficient\_average$+\ast$ & The average of the local clustering coefficients of all the nodes in the network\ & network\_reciprocity & A fraction of edges that are reciprocated within the network\ & network\_leadership$+\ast$ & A measure describing centralization in the network (the largest value is for a star network)\ & network\_avg\_eccentricity$+\ast$ & The average value of eccentricity of nodes within the network.\ & network\_adhesion$+\ast$ & The minimum number of edges needed to be removed to obtain a graph which is not strongly connected\
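As a rough illustration of how a few of the group-level features in the table above could be computed, the snippet below uses networkx (our choice of tooling; the paper does not prescribe one) to derive betweenness, density and clustering statistics for a single TC within an undirected, weighted BCN or CN.

```python
import networkx as nx
import numpy as np

def group_features(G, community):
    """Compute a few of the group-level structural features for one TC.

    G is the full (undirected, weighted) BCN/CN; `community` is a set of its
    nodes. 'group_' features are computed on the induced subgraph, while
    'network_' features are computed on the whole graph and then restricted
    to the community members.
    """
    sub = G.subgraph(community)
    bet_group = nx.betweenness_centrality(sub)   # betweenness within the community
    bet_net = nx.betweenness_centrality(G)       # betweenness within the whole network
    return {
        "group_size": sub.number_of_nodes(),
        "group_density": nx.density(sub),
        "sum_group_betweenness": sum(bet_group.values()),
        "avg_group_betweenness": np.mean(list(bet_group.values())),
        "max_group_betweenness": max(bet_group.values()),
        "avg_network_betweenness": np.mean([bet_net[v] for v in community]),
        "avg_group_coefficient": nx.average_clustering(sub),
    }
```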
---
author:
- 'S. Brett'
- 'I. Bentley'
- 'N. Paul'
- 'R. Surman'
- 'A. Aprahamian'
date: 'Received: date / Revised version: date'
title: 'Sensitivity of the r-process to Nuclear Masses'
---

Basic properties of nuclei, such as their binding energies per nucleon, allow the synthesis of the elements up to approximately iron via fusion reactions in stars, starting from the lightest elements created by the Big Bang. However, the abundances of elements in our solar system contain a substantial number of nuclei well beyond iron [@Arla99; @GA10; @Lo03]. The origins of these nuclei are entangled in complexity, since the heavier elements are thought to be made via both slow and rapid neutron-capture processes (s- and r-processes) [@Sneden2008]. The s-process leads to a network of nuclei near stability, while the r-process allows the production of nuclei with increasing neutron numbers much further from stability, producing neutron-rich nuclei. The astrophysical scenarios in which the s-process can take place have been identified, but a potential site for the r-process is still unresolved [@Arnould2007]. The challenge for astrophysical science today is to understand the conditions that would provide a major abundance of neutrons and lead to successive captures before the nucleus has a chance to decay, while on the nuclear side, the challenge is to determine the physics of nuclei far from stability, where the range and impact of the nuclear force is less well known [@KG99; @Arnould2007]. There have been a number of astrophysical scenarios suggested as possible sites for the r-process. Some of the most promising sites include the neutrino-driven wind from core-collapse supernovae [@WW94], mergers of two neutron stars [@Freiburghaus1999], gamma-ray bursts [@Surman2006], black-hole neutron-star mergers [@Surman2008], relativistic jets associated with failed supernovae [@Fujimoto2006] or magnetohydrodynamic jets from supernovae [@Nishimura2006]. The r-process proceeds via a sequence of neutron captures, photodissociations and $\beta$ decays. Simulations of the r-process therefore require tabulations of $\beta$-decay lifetimes, neutron capture rates and neutron separation energies; photodissociation rates are determined from the capture rates and separation energies by detailed balance [@FCZ67]: $$\lambda_\gamma(Z,A) \propto T^{3/2} \exp\left[-{\frac{S_n(Z,A)}{kT}}\right] \langle \sigma v \rangle_{(Z,A-1)} \label{photo}$$ In the above expression, $T$ is the temperature, $\langle \sigma v \rangle_{(Z,A-1)}$ is the thermally-averaged value of the neutron capture cross section for the neighboring nucleus with one less neutron, and $S_{n}(Z,A)$ is the neutron separation energy, the difference in binding energy between the nuclei $(Z,A)$ and $(Z,A-1)$. Nuclear masses are crucial inputs in theoretical calculations of each of these sets of nuclear data. One way to assess the role of nuclear masses in the r-process is to choose two or more mass models, calculate all of the relevant nuclear data consistently with each mass model, and then run r-process simulations with the different sets of global data. Such comparisons are quite valuable and examples include Refs. [@Wan04; @Far10; @Arc11]. Our approach here is quite different. We instead focus on the sensitivity of the r-process to the *individual* neutron separation energies within a given mass model, as they appear in Eqn.
\[photo\], in an attempt to determine the nuclei that have the greatest impact on the overall r-process abundances and, in turn, identify the most crucial measurements to be made. This is the first time that such an attempt has been made and the results could potentially be of great significance to both nuclear and astrophysical science. The study of radioactive nuclei far from stability approaching the r-process path is one of the global research frontiers for nuclear science today. New facilities are being developed in the USA (CARIBU at ANL, NSCL and FRIB at MSU), in Europe (ISOLDE at CERN), in France (SPIRAL II at GANIL), in Finland (Jyvaskyla), in Germany (FAIR at GSI Darmstadt), in Japan (RIKEN), in China (BRIF,CARIF in CIAE Beijing), and in Canada (ISAC at TRIUMF). The overarching question for this global effort in nuclear science is which measurements need to be made [@Sch08]. This study used a fully dynamical r-process nuclear network code [@Wa94]. Inputs to the simulation code include a seed nucleus, neutron density, temperature and dynamical timescale descriptive of a given astrophysical scenario. In addition, $\beta$ decay rates, neutron capture rates and neutron separation energies are the inputs for the nuclear properties. The simulation processes neutron captures, photodissociations, $\beta$-decays, and $\beta$-delayed neutron emissions from the start of the r-process through freezeout and the subsequent decay toward stability [@Me02]. Fission, while important in some astrophysical scenarios, is not significant for the the conditions used here and so is not included. ![Comparison of the separation energies from Duflo-Zuker [@DZ95], HFB-21 [@Gor10], and the experimental masses from [@Au03] to the FRDM [@MN95] values for the tin isotopes.[]{data-label="fig:MM"}](fig1.eps){width="8.5cm"} All the calculations are done for the same initial astrophysical conditions. The astrophysical scenario used in our simulations was based on the H or high frequency r-process suggested by Qian et al. [@Qi98], with an initial temperature of $T_{9} = 1.5$ and an initial density of $3.4\times 10^{2}$ g/cm$^{3}$. We take the temperature and density to decline exponentially as in [@QiW96] with a dynamical timescale of 0.86 s. While Qian specifies a seed of $^{90}$Se and a neutron to seed ratio ($N_n/N_{seed}$) of 86 [@Qi98], here a lighter seed of $^{70}$Fe is chosen, which results in $N_n/N_{seed}=67$ when the electron fraction is kept consistent with Qian ($Y_{e}=0.190$). The nuclear data inputs include beta decay rates from [@MPK03] and neutron capture rates from [@RT00], both calculated with Finite Range Droplet Model (FRDM) masses. The measured values of $S_n$ come from the Audi Mass Evaluation 2003 [@Au03]. For the remaining nuclei, we used the $S_n$ values resulting from the calculated mass values in the FRDM [@MN95]. We subsequently varied these theoretical $S_n$ for one nucleus at a time by $\pm25\%$. In each case, the resulting r-process abundance curves were generated and compared against the baseline abundances resulting from the unchanged $S_n$ value. The $25\%$ variation of separation energies was chosen somewhat arbitrarily. A comparison of the ratio of separation energies extracted from measured masses or theoretically calculated separation energies with the FRDM calculated values is shown for the Sn isotopes in Figure \[fig:MM\]. This indicates that the $25\%$ value is a reasonable variation estimate far from stability. 
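To see why a $\pm25\%$ variation in $S_n$ can matter so much, note that the photodissociation rate of Eqn. \[photo\] depends exponentially on $S_n/kT$. The short sketch below quantifies this with illustrative numbers of our own choosing; they are not the values used in the simulations.

```python
import numpy as np

k_B = 8.617e-2          # Boltzmann constant in MeV per GK
T9 = 1.0                # temperature in units of 10^9 K (illustrative)
S_n = 3.0               # neutron separation energy in MeV (illustrative)

kT = k_B * T9           # about 0.086 MeV

# lambda_gamma ~ T^{3/2} exp(-S_n/kT) <sigma v>; when S_n alone is varied,
# the rate changes by the factor exp(-delta S_n / kT).
for frac in (+0.25, -0.25):
    factor = np.exp(-frac * S_n / kT)
    print(f"S_n changed by {frac:+.0%}: photodissociation rate scales by {factor:.2e}")
```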
![Final r-process abundances for the baseline H-scenario [@Qi98] with $^{70}$Fe seed (black line) compared to simulations in which the neutron separation energy of $^{138}$Sn is increased (red long-dashed line) or decreased (blue short-dashed line) by $25\%$. The calculated abundances are normalized to the solar r-process abundances of Sneden et al. [@Sneden2008] (points) at $A=130$.[]{data-label="fig:138SnwSolar"}](fig2.eps){width="8.5cm"} An example of the resulting abundance patterns is shown in Figure \[fig:138SnwSolar\], where the baseline pattern is compared to the final abundance patterns produced by simulations in which the separation energy of $^{138}$Sn was increased or decreased by $25\%$. This comparison can be quantified by summing the differences in the final mass fractions: $$\label{eqn:1} F_{\pm}=100\sum_{A} \vert X_{baseline}(A)- X_{\pm\Delta S_n}(A) \vert,$$ where $X(A)=AY(A)$ is the mass fraction of nuclei with mass number $A$ (such that $\sum_{A} X(A)=1$), and the sum over $A$ ranges over the entire abundance curve. This quantity is largest when the curves differ near the peak abundances, giving preference to those regions. The values of $F=(F_{+}+F_{-})/2$ are calculated for 3010 nuclei from $^{58}$Fe to $^{294}$Fm. Figure \[fig:sens\] shows the nuclei whose separation energy variations result in the greatest changes in the resulting r-process abundances. Nuclei that have the greatest impact on the r-process are those neutron-rich nuclei near the closed shells at $Z=28$ and 50, and $N=50$, 82, and 126. ![Comparison of the sensitivity to mass values determined by Equation 2. The separation energies far from stability were generated by the FRDM [@MN95], Duflo-Zuker [@DZ95], and HFB-21 [@Gor10]. The scale is from white to dark red, indicating regions ranging from a small change to a substantial change in the resulting abundances. For reference, stable nuclei have been included as black crosses and the magic numbers have been indicated by thin lines. Superimposed on the sensitivity results are the limits of accessibility by CARIBU [@SavPar05] and the proposed FRIB intensities [@TarHau12]. In both cases, we have plotted the conservative limits of what can be produced and measured in mass measurements.[]{data-label="fig:sens"}](fig3.eps){width="8.7cm"} A natural question to ask is how these results depend on the mass model used. Therefore, similar calculations were performed using four additional mass models, the Duflo-Zuker (DZ) [@DZ95], the Extended Thomas Fermi plus Strutinsky Integral with shell Quenching (ETFSIQ) [@PN96], the Hartree-Fock-Bogoliubov (HFB-21) [@Gor10], and the F-spin [@Ap11] model, in addition to the FRDM. All models take advantage of very different physics ingredients to calculate the masses of nuclei far from stability. Each of the calculations started with the same initial astrophysical conditions and again varied individual separation energies by $\pm25\%$. The results are astounding. In each case, the nuclei with the greatest impact were generally the ones near the major closed shells, independent of the chosen mass model. Figure \[fig:sens\] shows the resulting sensitivity plots from three of the mass models: the FRDM, DZ, and HFB-21 models. Nuclei near the closed shells of N=50, 82, and 126 rise above all the others in impact. The nuclei with the most impact on the r-process abundances cluster around $^{132}$Cd and $^{138}$Sn. In this region, the nuclei are $^{131-134}$Cd, $^{132-137}$In, $^{135-140}$Sn, $^{139,141}$Sb.
There are also specific low mass nuclei such as $^{82}$Cu, $^{85}$Zn, and $^{88}$Zn that are important. ![Shows the mass fractions of $^{136}$Sn (purple), $^{138}$Sn (blue), and $^{140}$Sn (aqua) for the baseline $r$-process simulation (top panel) and the simulation with the separation energy of $^{138}$Sn decreased by 25% (middle panel). The bottom panel compares the neutron abundance for the two simulations (black and red lines, respectively).[]{data-label="fig:mech"}](fig4.eps){width="8.8cm"} In trying to understand these results, we know that there are two ways that an individual neutron separation energy can influence the $r$-process abundance distribution. The first is a long-recognized [@BB57] equilibrium effect, and the second is an early-freezeout photodissociation effect, recently pointed out in [@Sur2009]. In the classic view, the $r$-process takes place in conditions of $(n,\gamma)$-$(\gamma,n)$ equilibrium, where abundances along an isotopic chain are determined by a Saha equation: $$\begin{aligned} \nonumber I_{00}&=& \frac{Y(Z,A+1)}{Y(Z,A)}=\frac{G(Z,A+1)}{2G(Z,A)}\left(\frac{2\pi\hbar^{2}N_{A}}{m_{n}kT}\right)^{3/2}N_{n}\\ && \times\exp\left[\frac{S_{n}(Z,A+1)}{kT}\right] \label{saha}\end{aligned}$$ where the $G$s are the partition functions, $N_{n}$ is the neutron number density, and $m_{n}$ is the nucleon mass. The relative abundances of the different isotopic chains are then determined by the $\beta$-decay lifetimes of the most populated nuclei along each chain. As described in Eqn. \[saha\], any change to an individual separation energy will cause a shift in the abundances along the isotopic chain. This can have a global impact on the final abundance pattern, particularly if the affected nucleus is highly populated and material is shifted to a nucleus with a significantly faster or slower $\beta$-decay lifetime. For example, consider the case of $^{138}$Sn, a nucleus just above the $N=82$ closed shell region. In the baseline simulation, $^{138}$Sn is the most abundant tin isotope, and $^{136}$Sn $^{140}$Sn are much less abundant. Their mass fractions are shown as a function of time in Fig. \[fig:mech\](a); their relative values follow those predicted by Eqn. \[saha\] until about $t\sim 1.2$ s, when equilibrium begins to fail and the nuclei primarily $\beta$-decay to stability. If the simulation is repeated with neutron separation energy of $^{138}$Sn reduced by 25%, we see that the equilibrium abundance of this nucleus is drastically reduced, as expected from Eqn. \[saha\] and shown in Fig. \[fig:mech\](b). Material is instead shifted to $^{136}$Sn, which has a $\beta$-decay lifetime approximately 1.6 times that of $^{138}$Sn (and 5.3 times the lifetime of $^{140}$Sn, which is also depleted by the shift). As a result, more material is stuck in the tin isotopic chain compared to the baseline simulation, and the overall rate at which neutrons are consumed is slowed, as shown in Fig. \[fig:mech\](c). This impacts the availability of neutrons for the whole abundance pattern and results in changes throughout the pattern. The second mechanism, in contrast, operates once $(n,\gamma)$-$(\gamma,n)$ equilibrium begins to fail, and individual neutron capture and photodissociation rates become important. Since the neutron separation energy appears in the exponential in Eqn. \[photo\], photodissociation rates are quite sensitive to this quantity. 
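Both effects ultimately trace back to the exponential dependence on $S_n/kT$ in Eqns. \[photo\] and \[saha\]. The sketch below quantifies the equilibrium mechanism for the $^{138}$Sn example discussed above; the $S_n$ values and the common prefactor are placeholders of our own (not taken from any mass model or from the simulations) and serve only to show how a 25% reduction of $S_n(^{138}\mathrm{Sn})$ shifts material back along the chain.

```python
import numpy as np

kT = 8.617e-2 * 1.0              # MeV, at T9 = 1 (illustrative)
# Placeholder S_n values (MeV) along a toy Sn chain 136-140; the odd-even
# staggering is schematic and NOT taken from any mass model.
S_n = {136: 3.0, 137: 2.0, 138: 2.8, 139: 1.8, 140: 2.6}
log_prefactor = -27.0            # common ln of the N_n-dependent prefactor in Eq. (saha), illustrative

def chain_abundances(s_n):
    """Relative equilibrium abundances along the chain, Eq. (saha), partition functions ignored."""
    log_y = {136: 0.0}
    for A in range(137, 141):
        log_y[A] = log_y[A - 1] + log_prefactor + s_n[A] / kT
    norm = max(log_y.values())
    return {A: np.exp(v - norm) for A, v in log_y.items()}

baseline = chain_abundances(S_n)
S_n_mod = dict(S_n)
S_n_mod[138] = 0.75 * S_n[138]   # S_n(138Sn) reduced by 25%
modified = chain_abundances(S_n_mod)
for A in sorted(baseline):
    print(A, f"baseline {baseline[A]:.2e}", f"reduced S_n {modified[A]:.2e}")
# With these placeholder values, 138Sn dominates the baseline chain, while the
# reduced S_n shifts the equilibrium population back towards 136Sn.
```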
Changes in individual photodissociation rates during freezeout can produce local shifts in abundances, which can translate into global abundance changes if they alter the late-time availability of free neutrons. This mechanism is described carefully in [@Sur2009]. Odd-$N$ nuclei, which tend to be in equilibrium only briefly if at all, are particularly susceptible to these non-equilibrium effects. In conclusion, this study of 3010 nuclei via an r-process simulation tested the sensitivity of the r-process abundance yields to the theoretical mass values of neutron rich nuclei presently unknown in the laboratory from several different mass models, the results are shown here for three of them (FRDM[@MN95], Duflo-Zuker[@DZ95], and HFB-21[@Gor10]). The results are uniform and conclusive in highlighting the importance of nuclei near closed shells. Essentially the same set of nuclei emerge as having the highest impact on the r-process irrespective of the varying physics ingredients of the different mass models. The nuclei with greatest impact on the r-process—neutron rich isotopes of cadmium, indium, tin, and antimony in the $N=82$ region, nickel, copper, zinc, and gallium in the $N=50$ region, and thulium, ytterbium, lutetium, and hafnium in the $N=126$ region—should be of highest priority to measure in the various exotic beam facilities around the world. Table \[tbl:Tab1\] shows the top 25 nuclei with the greatest impact on the r-process for the three models. Since the particular isotopes of these elements that have the greatest impact can shift depending on the astrophysical conditions, a future paper will explore the effects of various astrophysical scenarios in determining the most important nuclei to measure. -------------- ------- -------------- ------- -------------- -------- FRDM DZ HFB-21 [$^A$X]{} $F$ [$^A$X]{} $F$ [$^A$X]{} $F$ $^{ 138}$ Sn 24.59 $^{ 132}$ Cd 36.54 $^{ 140}$ Sn 17.59 $^{ 132}$ Cd 22.37 $^{ 138}$ Sn 26.74 $^{ 134}$ Cd 15.77 $^{ 139}$ Sn 19.64 $^{ 134}$ Cd 25.96 $^{ 80}$ Ni 12.09 $^{ 137}$ Sn 18.06 $^{ 137}$ Sn 23.23 $^{ 86}$ Zn 11.85 $^{ 137}$ Sb 13.69 $^{ 140}$ Sn 21.79 $^{ 85}$ Zn 11.05 $^{ 140}$ Sn 11.12 $^{ 86}$ Zn 21.15 $^{ 197}$ Hf 10.62 $^{ 86}$ Zn 10.24 $^{ 139}$ Sn 17.25 $^{ 137}$ Sn 10.33 $^{ 135}$ Sn 9.40 $^{ 136}$ Sn 16.61 $^{ 132}$ Cd 9.47 $^{ 134}$ Cd 8.27 $^{ 133}$ Cd 14.33 $^{ 84}$ Zn 9.23 $^{ 133}$ Cd 7.72 $^{ 135}$ Sb 13.80 $^{ 141}$ Sn 8.89 $^{ 131}$ Cd 7.25 $^{ 131}$ Cd 13.16 $^{ 142}$ Sn 8.35 $^{ 85}$ Zn 7.08 $^{ 141}$ Sb 12.25 $^{ 136}$ Cd 7.98 $^{ 135}$ In 6.66 $^{ 133}$ In 12.04 $^{ 135}$ Cd 7.76 $^{ 141}$ Sb 6.24 $^{ 85}$ Zn 11.92 $^{ 131}$ Cd 7.63 $^{ 136}$ Sn 6.23 $^{ 135}$ Sn 11.54 $^{ 196}$ Lu 7.17 $^{ 132}$ In 5.92 $^{ 133}$ Sn 11.52 $^{ 133}$ Cd 7.12 $^{ 133}$ Sn 5.46 $^{ 139}$ Sb 10.77 $^{ 137}$ In 6.66 $^{ 137}$ In 4.77 $^{ 135}$ In 10.72 $^{ 139}$ Sn 6.00 $^{ 133}$ In 4.68 $^{ 137}$ Sb 9.72 $^{ 195}$ Yb 5.50 $^{ 142}$ Sb 4.44 $^{ 136}$ Sb 9.56 $^{ 138}$ In 5.43 $^{ 197}$ Hf 4.38 $^{ 143}$ Sb 9.28 $^{ 139}$ In 5.32 $^{ 89}$ Ga 4.33 $^{ 138}$ Sb 8.72 $^{ 79}$ Ni 5.23 $^{ 134}$ In 4.16 $^{ 137}$ In 8.14 $^{ 87}$ Ga 5.16 $^{ 139}$ Sb 4.15 $^{ 134}$ Sb 7.61 $^{ 196}$ Yb 5.03 $^{ 135}$ Sb 4.14 $^{ 134}$ Sn 7.50 $^{ 132}$ In 5.03 -------------- ------- -------------- ------- -------------- -------- : MOST IMPORTANT NEUTRON SEPARATION ENERGIES FOR H-SCENARIO WITH $^{70}$Fe SEED \[tbl:Tab1\] This work was supported by the National Science Foundation through grant number PHY0758100 and the Joint Institute for Nuclear Astrophysics grant number 
PHY0822648. [99]{} C. Arlandini, F. Kappeler, K. Wisshak, Astrophys. J [**525**]{}, 886 (1999). N. Grevesse, M. Asplund, A. Sauval, P. Scott, Astrophysics and Space Science [**328**]{}, 179 (2010). K. Lodders, Astrophys. J [**591**]{}, 1220 (2003). C. Sneden, J.J. Cowan, R. Gallino, Ann. Rev. Astron. Astrophys. [**46**]{}, 241 (2008). M. Arnould, S. Goriely, K. Takahashi, Phys. Rep. [**450**]{}, 97 (2007). K.-L. Kratz, J. Görres, B. Pfeiffer, M. Wiescher. Journal of Radioanalytical and Nuclear Chemistry [**243**]{}, 133 (2000). S.E. Woosley, J.R. Wilson, G.J. Mathews, R.D. Hoffman, B.S. Meyer, Astrophys. J [**433**]{}, 229 (1994). C. Freiburghaus, S. Rosswog, F.-K. Thielemann, Astrophys. J [**525**]{}, L121 (1999). R. Surman, G.C. McLaughlin, W.R. Hix, Astrophys. J [**643**]{}, 1057 (2006). R. Surman, S. Kane, J. Beun, G.C. McLaughlin, W.R. Hix, J. Phys. G [**35**]{}, 014059 (2008). S.-I. Fujimoto, K. Kotake, S. Yamada, M.-A. Hashimoto, K. Sato, Astrophys. J [**644**]{}, 1040 (2006). S. Nishimura, K. Kotake, M.-A. Hashimoto, S. Yamada, N. Nishimura, S. Fujimoto, K. Sato, Astrophys. J [**642**]{}, 410 (2006). W. Fowler, G. Caughlan, B. Zimmerman, Ann. Rev. Astron. Astrophys. [**5**]{}, 525 (1967). S. Wanajo, S. Goriely, M. Samyn, N. Itoh, Astrophys. J [**606**]{}, 1057 (2004). K. Farouqi, K.-L. Kratz, B. Pfeiffer, T. Rauscher, F.-K. Thielemann, J.W. Truran, Astrophys. J [**712**]{}, 1359 (2010). A. Arcones, G. Martinez-Pinedo, Phys. Rev. C [**83**]{}, 045809 (2011). H. Schatz, Physics Today [**61**]{}, 40 (2008). J. Walsh, Ngam.f Fortran code, Clemson University (1994). B.S. Meyer, Phys. Rev. Lett. [**89**]{}, 231101 (2002). J. Duflo, A.P. Zuker, Phys. Rev. C [**52**]{}, R23 (1995). S. Goriely, N. Chamel, J.M. Pearson, Phys. Rev. C [**82**]{}, 035804 (2010). G. Audi, A. H. Wapstra, C. Thibault, Nucl. Phys. A [**729**]{}, 337 (2002). P. Möller, J.R. Nix, W.D. Myers, W.J. Swiatecki, Atomic Data and Nuclear Data Tables [**59**]{}, 185 (1995). Y.-Z. Qian, P. Vogel, G.J. Wasserburg, Astrophys. J [**494**]{}, 285 (1998). Y.-Z. Qian, S.E. Woosley, Astrophys. J [**471**]{}, 331 (1996). P. Möller, B. Pfeiffer, K.-L. Kratz, Phys. Rev. C [**67**]{}, 055802 (2003). T. Rauscher, F.-K. Thielemann, Atomic Data and Nuclear Data Tables [**75**]{}, 1 (2000). G. Savard, R. Pardo, Proposal for the $^{252}$Cf source upgrade to the ATLAS facility, Technical report, ANL (2005). O.B. Tarasov, M. Hausmann, LISE++ development: Abrasion-Fission, Technical report, NSCL, MSUCL1300 (2005). J.M. Pearson, R.C. Nayak, S. Goriely, Phys. Lett. B [**387**]{}, 455 (1996). A. Teymurazyan, A. Aprahamian, I.Bentley, N.Paul, in preparation (2012). E.M. Burbidge, G.R. Burbidge, W.A. Fowler, F. Hoyle, Rev. Mod. Phys. [**29**]{}, 547 (1957). R. Surman, J. Beun, G.C. McLaughlin, W.R. Hix, Phys. Rev. C [**79**]{}, 045809 (2009).
--- abstract: 'This paper is devoted to a comparison of early works of Kato and Yosida on the integration of non-autonomous linear evolution equations $\dot{x} = A(t)x$ in Banach space, where the domain $D$ of $A(t)$ is independent of $t$. Our focus is on the regularity assumed of $t\mapsto A(t)$ and our main objective is to clarify the meaning of the rather involved set of assumptions given in Yosida’s classic and highly influential *Functional Analysis*. We prove Yosida’s assumptions to be equivalent to Kato’s condition that $t\mapsto A(t)x$ is continuously differentiable for each $x\in D$.' author: - | Jochen Schmid and Marcel Griesemer\ Fachbereich Mathematik, Universität Stuttgart, D-70569 Stuttgart, Germany\ [email protected] title: 'Kato’s Theorem on the Integration of Non-Autonomous Linear Evolution Equations' --- Introduction ============ This paper is devoted to a comparison of early works of Kato and Yosida on the integration of non-autonomous, linear evolution equations in Banach space. Explicitly, we consider the abstract initial value problem $$\label{ivp} \dot{x} = A(t)x,\qquad x(s)=y,$$ in the Banach space $X$, where $A(t):D\subset X\to X$ for each $t\in [0,1]$ is a closed linear operator with a dense domain $D$. The initial value $y$ belongs to $D$ and $0\leq s<1$. The importance of this problem is based on the vast range of applications and on the fact that problems of this kind are still the subject of research. Kato in 1953 assumes that $D$ is independent of $t$ and that $A(t)$ for each $t$ is the generator of a contraction semigroup [@Kato53]. In addition, there are some regularity assumptions on $t\mapsto A(t)$, which are now understood to be equivalent to the simple condition that for every $x\in D$ $$\label{C1-A} t\mapsto A(t)x\quad\text{is continuously differentiable}$$ in the norm of $X$. These conditions are sufficient for the existence of a unique evolution system (propagator) $U(t,s)$ such that $t\mapsto U(t,s)y$ for $y\in D$ is a continuously differentiable solution of the initial value problem [@Kato53; @EngelNagel; @Pazy]. In 1956 and 1970, Kato generalized his above-mentioned result to time-dependent domains and to linear operators $A(t)$ generating semigroups that are not necessarily contractive [@Kato56; @Kato70]. The $C^1$-condition first appeared explicitly in [@Kato56]. Meanwhile, Yosida, in the second edition of his classic and influential *Functional Analysis*, had given a simplified presentation of Kato’s work of 1953 with hypotheses that were adjusted accordingly [@YosidaFA2]. Yosida’s regularity conditions appear weaker than Kato’s $C^1$-condition as they involve no derivative of $A(t)$. Yet they are far more complicated. Yosida’s account of Kato’s theorem remained unchanged over the last five editions of his book and it has been adopted by Reed and Simon, and by Blank, Exner and Havlicek [@ReedSimon; @BEH]. Due to the authority and tremendous popularity of the books by Yosida and by Reed and Simon, Yosida’s version of Kato’s theorem in a large scientific community is better known than the refined version of Kato stated above. In this paper we prove that Yosida’s conditions, the above mentioned $C^1$-condition, and Kato’s original conditions introduced in 1953 are all strictly equivalent. Likewise, Yosida’s regularity conditions in the case of locally convex spaces can be simplified [@Yosida1965; @Schmid2012]. 
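For orientation, a simple example (ours, not taken from [@Kato53] or [@YosidaFA2]) of a family satisfying the $C^1$-condition is a boundedly perturbed operator
$$A(t) = A_0 + b(t)\,W, \qquad t\in[0,1],$$
with $A_0: D\subset X\to X$ closed and densely defined, $W$ a bounded operator on $X$, and $b\in C^1([0,1],{{\mathbb{R}}})$. The domain $D$ is independent of $t$, and for every $x\in D$ the map
$$t\mapsto A(t)x = A_0x + b(t)Wx$$
is continuously differentiable with derivative $\dot{b}(t)Wx$, so the $C^1$-condition holds. If $b$ is continuous but not continuously differentiable, the condition fails for every $x\in D$ with $Wx\neq 0$; this concerns only the regularity requirement and says nothing about generation of contraction semigroups.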
The equivalence of Kato’s 1953-condition and the $C^1$-condition is fairly easy to prove; it was known to Kato and it is known to the experts on evolution equations [@EngelNagel; @Pazy]. It is not entirely obvious, however, and we shall provide a proof for the reader’s convenience. The equivalence of Yosida’s conditions and the $C^1$-condition was discovered by one of us in connection with the adiabatic theorem [@Schmid2011]. This equivalence is surprising, in view of the complexity of Yosida’s conditions. Nevertheless the proof is fairly short and the idea is simple: Yosida’s assumptions require the uniform convergence of certain left-sided difference quotients of the map $t\mapsto A(t)x$, $x\in D$. It is not hard to see that this requirement implies continuous differentiability, and this is the core of our proof, see Lemma \[lemma\], below. That the converse holds was known previously to the experts and is straightforward to prove. As far as we know the literature, before our work a direct comparison of the conditions of Kato and Yosida has never been undertaken. Such a comparison is mentioned neither in the monographs [@Krein-book; @Tanabe79; @Pazy; @Goldstein; @EngelNagel] nor in the review articles [@Kato93; @Sch2002]. Of course, Kato’s theorem has been generalized in various directions and for that the reader is referred to Pazy’s book [@Pazy] and to Kato’s *Fermi lectures* from 1985 [@Kato1985]. Equivalence of regularity assumptions ===================================== We recall from the introduction that $A(t):D\subset X\to X$ for each $t\in [0,1]$ is a closed linear operator with a dense, $t$-independent domain $D$. We are interested in the case where $A(t)$, for each $t$ is the generator of a strongly continuous contraction semigroup, but this is not needed for the comparison of regularity assumptions. The bounded invertibility of $1-A(t)$ and $A(t)$, respectively, suffices to state the assumptions and to prove our theorems. We often write $I$ for the interval $[0,1]$. Kato in Theorem 4 of [@Kato53] made the following assumption: \[ass:Kato\] - $B(t,s) =(1-A(t))(1-A(s))^{-1}$ is uniformly bounded on $I\times I$. - $B(t,s)$ is of bounded variation in $t$ in the sense that there is an $N\geq 0$ such that $$\sum_{j=0}^{n-1}\|B(t_{j+1},s) - B(t_j,s)\| \leq N<\infty$$ for every partition $0=t_0<t_1<\cdots <t_n=1$ of $I$, at least for some $s$. - $B(t,s)$ is weakly differentiable in $t$ and $\partial_{t}B(t,s)$ is strongly continuous in $t$, at least for some $s\in I$ Note that the statements (ii) and (iii) hold for all $s\in I$, if they are satisfied for some $s$. This follows from $B(t,s)=B(t,s_0)B(s_0,s)$. In the proof of the Proposition \[thm:Kato\], below, we will see that conditions (i) and (ii) follow from condition (iii), and that (iii) is equivalent to the $C^1$-condition . In 1953, Kato did not seem to be aware of that but from remarks in [@Kato53; @Kato56] it becomes clear that he knew it by 1956. See also Remark 6.2 of [@Kato70] which states that the new result – Theorem 6.1 of [@Kato70] – reduces to Theorem 4 of [@Kato53] in the situation considered there. \[thm:Kato\] Suppose that for each $t\in I$ the linear operator $A(t):D\subset X\to X$ is closed and that $1-A(t)$ has a bounded inverse. Then Assumption \[ass:Kato\] is satisfied if and only if the $C^1$-condition holds. 
From (iii) it follows (first in the weak, then in the strong sense) that $$\label{B-calc} B(t,s)x - B(t',s)x = \int_{t'}^t \partial_{\tau}B(\tau,s)x\, d\tau.$$ This equation shows that $t\mapsto B(t,s)x$ is of class $C^1$, which is equivalent to the $C^1$-condition . Hence (iii) is equivalent to the condition and it remains to derive (i) and (ii) from (iii). By the strong continuity of $\tau\mapsto\partial_{\tau}B(\tau,s)$ and by the principle of uniform boundedness, $$\label{B-univ} \sup_{\tau\in I}\| \partial_{\tau}B(\tau,s)\| <\infty.$$ Combining with , we see that $B(t,s)$ is of bounded variation as a function of $t$, which is statement (ii), and that $t\mapsto B(t,s)$ is continuous in norm. Therefore the inverse $t\mapsto B(t,s)^{-1}=B(s,t)$ is continuous as well and $B(t,s) = B(t,0)B(0,s)$ is uniformly bounded for $t,s\in I$. The following Assumption \[ass:Yosida\] collects the regularity conditions from Yosida’s Theorem XIV.4.1 [@YosidaFA6]. \[ass:Yosida\] - $\{ (s',t') \in I^2: s' \ne t' \} \ni (s,t) \mapsto \frac{1}{t-s} \, C(t,s)x$ is bounded and uniformly continuous for all $x \in X$, where $C(t,s) := A(t) A(s)^{-1} - 1$ - $C(t)x := \lim_{k \to \infty} k \, C(t, t-\frac{1}{k})x$ exists uniformly in $t \in (0,1]$ for all $x \in X$ - $(0,1] \ni t \mapsto C(t)x$ is continuous for all $x \in X$. The continuity assumption (iii) above was added for convenience. It follows from the uniform continuity in (i) and the uniform convergence in (ii). In fact, (i) and (ii) imply that $(0,1] \ni t \mapsto C(t)x$ is *uniformly* continuous and hence can be extended continuously to the left endpoint $0$. The following theorem is our main result. The key ingredient for its proof is Lemma \[lemma\] below. \[thm:main2\] Suppose that for each $t\in I$ the linear operator $A(t):D\subset X\to X$ is closed and that $A(t)$ has a bounded inverse. Then Assumption \[ass:Yosida\] and the $C^1$-condition are equivalent. *Remark.* Note that the bounded invertibility is no restriction. If $A(t)$ for each $t\in I$ is the generator of a contraction semigroup, then so is $A(t)-1$ and moreover, by Hille–Yosida, $A(t)-1$ has a bounded inverse. Assumption \[ass:Yosida\] $\Rightarrow$ : Suppose that conditions (i)-(iii) of Assumption \[ass:Yosida\] are satisfied and let $x \in D$. We show that the map $t \mapsto f(t) = A(t)x$ satisfies the hypotheses of Lemma \[lemma\], below. By definition of $C(t,s)$, $$\label{f-C} f(t) - f(s) = (A(t)A(s)^{-1} - 1)A(s)x = C(t,s)f(s)$$ where, by (i), the norm of $(t-s)^{-1}C(t,s)f(s)$ for fixed $s$ is a bounded function of $t\in [0,1]\backslash\{s\}$. It follows that $f(t)-f(s) \to 0$ as $t\to s$. As a further consequence of (i) we obtain, by the principle of uniform boundedness, that $$\label{C-bound} M:= \sup_{s\neq t}\|(t-s)^{-1}C(t,s)\| <\infty.$$ Setting $s=t-k^{-1}$ in we obtain for fixed $t>0$ and all $k>t^{-1}$ that $$\begin{aligned} \nonumber k \Bigl( f(t) - f\bigl(t-\frac{1}{k}\bigr)\Bigr) =&\ kC\bigl(t,t-\frac{1}{k}\bigr) f\bigl(t-\frac{1}{k}\bigr)\\ =&\ kC\bigl(t,t-\frac{1}{k}\bigr) f(t) +kC\bigl(t,t-\frac{1}{k}\bigr)\Bigl(f\bigl(t-\frac{1}{k}\bigr)-f(t)\Bigr)\label{df}\\ &\longrightarrow C(t)f(t) \qquad (k \to \infty).\nonumber\end{aligned}$$ Here we used part (ii) of Assumption \[ass:Yosida\], the continuity of $f$, and that $\|kC(t,t-k^{-1})\|\leq M$ by . Since $f$ is uniformly continuous on the compact interval $I$, the second term of vanishes uniformly in $t$. It remains to prove uniform convergence for the first term of .
Using the uniform continuity of $f$ again, we may choose a partition $0=t_0<t_1\ldots <t_N=1$ of $[0,1]$ such that $\|f(t) - f(t_i)\| \leq {\varepsilon}/3M$ for all $t\in (t_{i-1},t_i]$ and all $i$. Then, by (ii), we can find $k_{\varepsilon}$ such that for $k\geq k_{\varepsilon}$ the inequality $\|kC\bigl(t, t-\frac{1}{k}\bigr)f(t_i)-C(t)f(t_i)\| <{\varepsilon}/3$ holds for all $t\in [k^{-1},1]$ and all $i=1,\ldots,N$. These two estimates combined imply that $$\sup_{t\in [k^{-1},1]}\bigg\| kC\bigl(t,t-\frac{1}{k}\bigr) f(t) - C(t)f(t)\bigg\| \leq {\varepsilon}\qquad \text{for}\ k\geq k_{\varepsilon},$$ as desired. Finally we note that the limit map $(0,1] \ni t \mapsto C(t)A(t)x$ is continuously extendable to the left endpoint $0$ by the remark following Assumption \[ass:Yosida\]. We have thus verified all hypotheses of Lemma \[lemma\], and this lemma shows that the $C^1$-condition  is satisfied. $\Rightarrow$ Assumption \[ass:Yosida\]: Suppose that is satisfied and let $\dot{A}(t)x$ denote the derivative of $A(t)x$. Then $s\mapsto A(s) A(0)^{-1}$ is strongly continuously differentiable and hence continuous in norm. It follows that the inverse $s \mapsto \bigl( A(s) A(0)^{-1} \bigr)^{-1} = A(0) A(s)^{-1}$ is norm-continuous as well. Thus, by , the map $$\begin{aligned} (s,\tau) \mapsto \dot{A}(\tau) A(s)^{-1} x = \dot{A}(\tau)A(0)^{-1} \, A(0)A(s)^{-1} x\end{aligned}$$ is continuous for every $x \in X$. From this, using the integral representation $$\begin{aligned} \frac{1}{t-s} \, C(t,s)x = \frac{1}{t-s} \, \bigl( A(t) - A(s) \bigr) A(s)^{-1} x = \frac{1}{t-s} \, \int_s^t \dot{A}(\tau) A(s)^{-1} x \, d\tau,\end{aligned}$$ one readily obtains that $\{ s' \ne t' \} \ni (s,t) \mapsto \frac{1}{t-s} \, C(t,s)x$ extends to a continuous map on the whole of $I^2$ from which conditions (i) through (iii) of Assumption \[ass:Yosida\] are obvious. The exposition of Yosida’s proof given in [@Schmid2011] shows that the continuity in part (i) of Assumption \[ass:Yosida\] may be dropped if in part (iii) the requirement is added that the limit $\lim_{t\searrow 0}C(t)x$ exists for all $x\in X$. Our proof of Theorem \[thm:main2\] shows, that this modified version of Assumption \[ass:Yosida\] is still equivalent to the $C^1$-condition . The main ingredient for the proof of Theorem \[thm:main2\] is the following lemma. It is a discretized version of the well-known, elementary fact that a continuous and left-differentiable map with vanishing left derivative is constant (see Lemma III.1.36 in [@Kato-book] or Corollary 1.2, Chapter 2 of [@Pazy]). \[lemma\] Suppose $f: [0,1] \to X$ is continuous and the limit $g(t) := \lim_{k \to \infty} k \, \bigl( f(t) - f(t-\frac{1}{k}) \bigr)$ exists uniformly in $t \in (0,1]$, that is, the limit exists for every $t \in (0,1]$ and $\sup_{t \in [\frac{1}{k},1]} {\mbox{$\left\| k \, \bigl( f(t) - f(t-\frac{1}{k}) \bigr) - g(t) \right\|$}} \longrightarrow 0 \,\,(k \to \infty)$. Then $$\label{mve} {\mbox{$\left\| f(1) - f(t) \right\|$}} \le (1-t) \sup_{\tau \in [t,1]}{\mbox{$\left\| g(\tau) \right\|$}} \quad\text{for all}\ t\in (0,1],$$ and $f$ is continuously differentiable in $(0,1]$ with $f'=g$. If, in addition, the limit $g(0):=\lim_{t \searrow 0} g(t)$ exists, then $f'=g$ on $[0,1]$. The map $g$ is continuous on $(0,1]$ by the continuity of $f$ and the uniform convergence assumption. 
By the density of $(0,1) \cap {{\mathbb{Q}}}$ in $I$ and the continuity of $f$ and $g$ on $(0,1]$ it suffices to show that, for every ${\varepsilon}>0$ and for every $q\in (0,1) \cap {{\mathbb{Q}}}$, the estimate $$\begin{aligned} \label{zwbeh} {\mbox{$\left\| f(1) - f(q) \right\|$}} \le (1-q)(M_q + {\varepsilon})\end{aligned}$$ holds with $M_q:= \sup_{\tau \in [q,1]}{\mbox{$\left\| g(\tau) \right\|$}}<\infty$. Let $q=1-r/s$ with $r,s\in {{\mathbb{N}}}$ and let ${\varepsilon}>0$. For any $n\in {{\mathbb{N}}}$ we may write the difference $f(1) - f(1-r/s)$ as a telescoping sum $$\begin{aligned} \label{teleskopsumme} f(1) - f\Big(1-\frac{r}{s} \Big) = f(1) - f\Big(1- \frac{n r}{n s} \Big) = \sum_{k=0}^{n r-1} \left[f\Big(1-\frac{k}{n s} \Big) - f\Big(1-\frac{k}{n s}-\frac{1}{n s} \Big)\right]\end{aligned}$$ where, by the assumed uniform convergence, we may choose $n$ so large, that $$\begin{aligned} \label{estimate} \sup_{t \in [q, 1] } {\mbox{$\left\| f(t) - f\Big(t-\frac{1}{ns} \Big) \right\|$}} \le (M_q + {\varepsilon}) \, \frac{1}{n s}.\end{aligned}$$ Combining  and  we immediately obtain  and the proof of the estimate  is complete. If the limit $\lim_{t \searrow 0} g(t)$ exists, we can define $h(t) := f(t) - \int_{0}^t g(\tau) \,d\tau$ for $t \in I$. It is straightforward to check that $ k(h(t) - h(t-k^{-1}) \to 0$ uniformly and the established estimate yields the constancy of $h$. The proof of the remaining statement of the lemma is an easy exercise that is left to the reader. [10]{} Ji[ř]{}[í]{} Blank, Pavel Exner, and Miloslav Havl[í]{}[č]{}ek. . Theoretical and Mathematical Physics. Springer, New York, second edition, 2008. Klaus-Jochen Engel and Rainer Nagel. , volume 194 of [*Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, 2000. With contributions by S. Brendle, M. Campiti, T. Hahn, G. Metafune, G. Nickel, D. Pallara, C. Perazzoli, A. Rhandi, S. Romanelli and R. Schnaubelt. Jerome A. Goldstein. . Oxford Mathematical Monographs. The Clarendon Press Oxford University Press, New York, 1985. T. Kato. . Lezioni Fermiane. \[Fermi Lectures\]. Scuola Normale Superiore, Pisa, 1985. Tosio Kato. Integration of the equation of evolution in a [B]{}anach space. , 5:208–234, 1953. Tosio Kato. On linear differential equations in [B]{}anach spaces. , 9:479–486, 1956. Tosio Kato. Linear evolution equations of “hyperbolic” type. , 17:241–258, 1970. Tosio Kato. Abstract evolution equations, linear and quasilinear, revisited. In [*Functional analysis and related topics, 1991 ([K]{}yoto)*]{}, volume 1540 of [*Lecture Notes in Math.*]{}, pages 103–125. Springer, Berlin, 1993. Tosio Kato. . Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition. S. G. Kre[ĭ]{}n. . American Mathematical Society, Providence, R.I., 1971. Translated from the Russian by J. M. Danskin, Translations of Mathematical Monographs, Vol. 29. A. Pazy. , volume 44 of [*Applied Mathematical Sciences*]{}. Springer-Verlag, New York, 1983. Michael Reed and Barry Simon. . Academic Press \[Harcourt Brace Jovanovich Publishers\], New York, 1975. J. Schmid. . , March 2012. Jochen Schmid. Adiabatensätze mit und ohne [S]{}pektrallückenbedingung. Master’s thesis, University of Stuttgart, 2011. arXiv:1112.6338 \[math-ph\]. Roland Schnaubelt. Well-posedness and asymptotic behaviour of non-autonomous linear evolution equations. In [*Evolution equations, semigroups and functional analysis ([M]{}ilano, 2000)*]{}, volume 50 of [*Progr. Nonlinear Differential Equations Appl.*]{}, pages 311–338. 
Birkhäuser, Basel, 2002. Hiroki Tanabe. , volume 6 of [*Monographs and Studies in Mathematics*]{}. Pitman (Advanced Publishing Program), Boston, Mass., 1979. Translated from the Japanese by N. Mugibayashi and H. Haneda. K[ô]{}saku Yosida. Time dependent evolution equations in a locally convex space. , 162:83–86, 1965/1966. K[ô]{}saku Yosida. . Second edition. Die Grundlehren der mathematischen Wissenschaften, Band 123. Springer-Verlag New York Inc., New York, 1968. K[ō]{}saku Yosida. . Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the sixth (1980) edition.
--- abstract: 'In this paper, a novel label fusion method is proposed for brain magnetic resonance image segmentation. This label fusion method is formulated on a graph, which embraces both label priors from atlases and anatomical priors from target image. To represent a pixel in a comprehensive way, three kinds of feature vectors are generated, including intensity, gradient and structural signature. To select candidate atlas nodes for fusion, rather than exact searching, randomized k-d tree with spatial constraint is introduced as an efficient approximation for high-dimensional feature matching. Feature Sensitive Label Prior (FSLP), which takes both the consistency and variety of different features into consideration, is proposed to gather atlas priors. As FSLP is a non-convex problem, one heuristic approach is further designed to solve it efficiently. Moreover, based on the anatomical knowledge, parts of the target pixels are also employed as graph seeds to assist the label fusion process and an iterative strategy is utilized to gradually update the label map. The comprehensive experiments carried out on two publicly available databases give results to demonstrate that the proposed method can obtain better segmentation quality.' author: - 'Siqi Bao and Albert C. S. Chung [^1] [^2] [^3]' bibliography: - 'fslf\_bib.bib' title: 'Feature Sensitive Label Fusion with Random Walker for Atlas-based Image Segmentation' --- Segmentation, Brain, Magnetic Resonance Imaging Introduction ============ The human brain is a complex neural system composing many anatomical structures. To study the functional and structural properties of its subcortical regions, image segmentation is a critical step in quantitative brain image analysis and clinical diagnosis. However, segmenting subcortical structures is difficult because they are small and often exhibit large variations in shape. Moreover, some structural boundaries are subtle or even missing in images. Although manual annotation is a standard procedure for obtaining quality segmentation, it is time-consuming and can suffer from inter- and intra-observer inconsistencies. In recent years, researchers have been focusing on developing automatic atlas-based segmentation methods which can effectively incorporate expert prior knowledge about the relationships between local intensity profiles and tissue labels. And many softwares have become available for brain image segmentation, such as FreeSurfer [@fischl2012freesurfer], BrainSuite [@shattuck2002brainsuite], BrainVoyage [@goebel2006analysis], BrainVisa [@geffroy2011brainvisa] and so on. Atlas-based segmentation involves three main components, image registration between atlases and a target image, label propagation, and label fusion, as summarized in Fig. \[fig14\]. To register images of intra-subject generated by different modalities, global transformation methods can be used, such as rigid or affine transformation. As for the registration of inter-subject or longitude analysis of intra-subject, global transformation is insufficient to estimate an accurate deformation field due to the high anatomical variabilities among these images. Local transformation, represented by non-rigid registration, has been proposed to deal with this problem. In non-rigid registration, the deformation field can be estimated using control points on the grid, with a combination of B-splines [@schnabel2001generic] or cosine basis functions [@ashburner1999nonlinear]. 
To further improve the quality of anatomical or matching correspondences between two images, symmetric diffeomorphism [@avants2008symmetric] moves both images simultaneously along a geodesic path until they meet at the middle of the normalization domain; the whole deformation field can then be obtained by uniting the two halves of the geodesic path. In the evaluation of 14 nonlinear deformation algorithms [@klein2009evaluation], ANTs, which is based on symmetric diffeomorphism, is selected as one of the best methods. With a large number of target images to be labeled, however, pairwise non-rigid registration can become prohibitively time-consuming. ![Overview of the main components in atlas-based segmentation.[]{data-label="fig14"}](fig14){width="0.82\linewidth"} ![image](fig11){width="0.92\linewidth"} After image registration, the label maps can be propagated from atlases to the target image and multiple tissue labels can be collected for each image position, making label fusion a crucial final aggregation step for the reliable labeling of target images. A generative model for image segmentation based on label fusion is proposed in [@sabuncu2010generative] and different label fusion strategies are discussed. Majority voting is commonly used, while its accuracy can be adversely affected if the atlases are dissimilar. In voting using global weights, the similarity between each atlas and the target image is calculated and used as a weight during the label fusion process. Recently, more label fusion methods based on patches [@coupe2011patch; @liao2013sparse; @rousseau2011supervised; @wang2013multi; @tong2013segmentation; @ta2014optimized] have been proposed; such methods were first introduced for image de-noising [@coupe2008optimized] and have recently become more prevalent in medical image segmentation. Generally, there are three stages in patch-based label fusion. First, it is necessary to determine which kind of feature to adopt as the pixel representation. The conventional way is to collect the pixel values inside the surrounding patch to formulate an intensity feature vector. To better reveal image changes, the gradient magnitude is another commonly used feature. However, relying on these two kinds of features alone is not adequate to obtain quality segmentation, as they can only capture local and low-level properties. Some advanced approaches have been proposed to extract high-level features to compensate for these local limitations. In [@bai2015multi], contextual information, which estimates the relative relations between intensity values, is appended to form an augmented feature vector for cardiac image segmentation. With the feature representation established, the second stage is to distinguish candidate pixels or patches for voting. In [@rousseau2011supervised], to label a centre pixel in the target image, all surrounding small patches from the atlases are utilized for weighted voting. To avoid the adverse effects of dissimilar patches, an extension has been proposed in [@coupe2011patch] which involves first ranking the small patches based on structural similarity, followed by combining the selected ones in the final labeling. Another patch selection method based on sparse representation was proposed by Liao et al. [@liao2013sparse], which selects patch-based signatures with sparse logistic regression and the LASSO interface [@liu2009slep].
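As an illustration of this pre-selection step, the sketch below ranks atlas patches with a simple structural-similarity-style score built from patch means and standard deviations and keeps only the best candidates; the score and the threshold are illustrative stand-ins rather than the exact criteria of the cited methods.

```python
import numpy as np

def patch_similarity(p, q, eps=1e-8):
    """Structural-similarity-style score computed from patch means and standard deviations."""
    mu_p, mu_q = p.mean(), q.mean()
    sd_p, sd_q = p.std(), q.std()
    luminance = (2 * mu_p * mu_q + eps) / (mu_p ** 2 + mu_q ** 2 + eps)
    contrast = (2 * sd_p * sd_q + eps) / (sd_p ** 2 + sd_q ** 2 + eps)
    return luminance * contrast

def preselect_patches(target_patch, atlas_patches, threshold=0.95):
    """Return the indices of atlas patches similar enough to take part in the voting."""
    scores = np.array([patch_similarity(target_patch, q) for q in atlas_patches])
    return np.flatnonzero(scores > threshold)
```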
The third stage is to fuse the labels of the candidate atlas nodes, and the fusion strategies fall into two main categories: weighted voting and image patch reconstruction. A common approach is to first estimate the similarity between two patches by embedding their sum of squared differences in a Gaussian function and then to use the similarity value as the weight for voting [@coupe2011patch; @rousseau2011supervised]. Besides the independent impact of each atlas patch on the target pixel, Joint Label Fusion [@wang2013multi] also takes the error correlation among atlas patches into consideration and tries to find the optimal weights for voting. For the second category, to reconstruct a target patch, the linear combination coefficients of the atlas patches need to be optimized first and the label of the centre pixel can then be assigned to the class with the minimum reconstruction error [@tong2013segmentation]. Moreover, as shown in [@coupe2011patch; @rousseau2011supervised], patch-based label fusion methods do not necessarily depend on time-consuming non-rigid registration. However, given the poor contrast conditions in brain Magnetic Resonance (MR) images and the similar histogram profiles among adjacent structures, label fusion with only affine transformation as preprocessing becomes more challenging. To compensate for the quality loss caused by affine transformation, a more elaborate label fusion process is therefore needed for brain MR image segmentation. Under the assumption that distinct features can assist the segmentation in a complementary way, in this paper, Feature Sensitive Label Prior (FSLP) is designed to capture label priors from atlases; its process differs from conventional label fusion at every stage. As suggested in the segmentation of cardiac MR images, embracing more features besides intensity, such as contextual information, can help improve the segmentation quality [@bai2015multi]. For pixel representation, besides the conventional intensity and gradient features, a structural signature is introduced to extract the high-level properties of each subcortical structure based on Convolutional Neural Networks. During candidate node selection, rather than exact searching within a confined scope, the randomized k-d tree with a spatial constraint is adopted as an efficient approximation for high-dimensional data matching. In the fusion stage, feature sensitivity is taken into account to balance the variety and consistency among the features. As FSLP is a non-convex problem, one heuristic method is further proposed to solve it by alternately dealing with two convex problems. The **motivations** to introduce FSLP are two-fold. On the one hand, the contributions of distinct features are expected to be consistent during label fusion, i.e., they can reach an agreement when labeling a pixel. On the other hand, the impact of different features can change according to image conditions. For flat regions away from structural boundaries, intensity and gradient are supposed to be more essential. As for the complex regions near tissue borders, the structural signature should play a more significant role. The experimental result with our method also justifies this initial motivation, as shown in Fig. \[fig11\]. The sub-figures (a) and (b) are a cropped target intensity image and its corresponding label map of the Hippocampus. In (c), for the pixels where the atlases cannot reach an agreement, the optimal feature coefficients estimated in FSLP are displayed as three channels of RGB.
Three representative examples are selected to explain the dominant features in each pattern in (d). The color image in (c) demonstrates that the role of the structural signature is more essential around tissue protrusions. For the other, relatively flat regions, intensity and gradient matter more, which also justifies our motivation to introduce feature sensitivity for label fusion. In addition to the FSLP from atlases, anatomical priors from the target image are also utilized to assist our graph-based label fusion process. Based on anatomical knowledge, labeling a pixel which is deep inside or outside a subcortical structure is easier, while labeling one located around the boundary is challenging. As such, rather than updating labels for all target pixels, those far away from the structural border are selected as graph seeds and their influence can be propagated to other pixels through the image lattice. Unlike the graph-based labeling constructed with both atlas and target nodes [@koch2015multi], we further infer an equivalent but more concise graph to encode the FSLP and anatomical priors, which relies only on target nodes. The objective energy function on the graph is formulated with Random Walker and can be solved as a discrete Dirichlet problem. To evaluate the proposed method, experiments have been carried out on two image databases and the results demonstrate that our approach can obtain better performance as compared with other state-of-the-art methods. Note that the preliminary version of this work has been published in the 19th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2016. In this paper, 1) we extend our previous work by generating multiple features and introducing the randomized k-d tree with a spatial constraint for efficient high-dimensional feature matching; 2) additional mathematical proofs and solutions, together with illustrative examples, are given in this work; 3) intensive experiments have been carried out to evaluate each component of our proposed method and comprehensive evaluations have been done against the state-of-the-art methods. Methodology =========== In this paper, to obtain a more discriminative representation, three kinds of features are extracted and candidate nodes are selected for each pixel, which will be explained in Section \[FeaG\] and Section \[FeaM\]. Given the demands of consistency and variety among distinct feature vectors during label fusion, a novel method, FSLP, is proposed in Section \[FeaSLP\] to deal with this dilemma by collecting priors from atlases with feature sensitivity. Moreover, pixels from the target image are also selected based on anatomical knowledge, acting as an anatomical prior. The whole label fusion process is modeled on an undirected graph and formulated under the framework of Random Walker, which is summarized in Section \[labelrw\]. Feature Generation {#FeaG} ------------------ In medical images, the conventional features utilized to represent a pixel are the intensity values or gradient magnitudes in its surrounding cube. However, these features are limited to local information and are susceptible to adverse impacts from similar histogram profiles among tissues. As each subcortical structure in the brain has its own shape characteristics and structural properties, such high-level features can be used to formulate a more discriminative representation.
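As a concrete illustration of these conventional low-level features, the sketch below extracts the two patch-based vectors for a voxel (hypothetical Python; a $5\times5\times5$ cube is assumed, matching the 125-dimensional intensity and gradient vectors listed in Table \[tab2\]):

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude(volume):
    """Voxel-wise gradient magnitude of a 3D volume."""
    gx, gy, gz = (sobel(volume, axis=a, output=float) for a in (0, 1, 2))
    return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

def cube_feature(volume, center, radius=2):
    """Flatten the (2*radius+1)^3 cube around `center` (which must lie at least
    `radius` voxels from the volume border) into a feature vector."""
    x, y, z = center
    cube = volume[x - radius:x + radius + 1,
                  y - radius:y + radius + 1,
                  z - radius:z + radius + 1]
    return cube.ravel().astype(np.float32)

# The low-level part of the representation of a voxel v is the concatenation of
# its intensity cube and its gradient-magnitude cube:
#   f_low = np.concatenate([cube_feature(intensity, v), cube_feature(grad_mag, v)])
```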
Recently it has been shown that the feature extraction ability of Convolutional Neural Networks (CNN) [@lecun1998gradient] surpasses hand-crafted features such as SIFT [@lowe2004distinctive], and CNN has brought significant improvements in image classification [@krizhevsky2012imagenet], semantic segmentation [@long2015fully], acoustic analysis [@sercu2015very] and so on. As such, in this paper, we propose to encode the high-level properties of brain MR images with a feature vector extracted automatically using a CNN. CNN is inspired by the biological visual mechanism, where neurons in a higher layer operate on a subregion of the neurons in the lower layer. In a CNN, there are two basic components: convolution and pooling layers, as illustrated in Fig. \[fig1\]. To estimate the convolutional response $a_1$ or $a_2$ in layer $l$, the pixels within the subregion of the images in the previous layer (Red Region, namely the receptive field) are chosen as input. The convolution step consists of a linear operation and a non-linear activation, which can be formulated as follows: $$\label{conv1} a=f(Wx+b),$$ where $a$ is the convolutional response, $f(\cdot)$ refers to the non-linear activation function, $x$ is the flattened input from the receptive field, $W$ is the weight vector and $b$ is the bias associated with the convolutional kernel. ![Illustration of convolution and pooling layers. With three images from layer $l-1$ as input, two feature maps are generated in layer $l$, each corresponding to one pair of $W$ and $b$, as stated in Equation (\[conv1\]).[]{data-label="fig1"}](fig1){width="\linewidth"} It is notable that each feature map in the convolution layer is assigned a specific pair of $W$ and $b$. For the Purple pixel in the first feature map, its value can be estimated with $f(W_1x+b_1)$, and for the Green pixel in the second feature map, its value should be $f(W_2x+b_2)$. The number of feature maps in each layer can be preset during the design of the network architecture, while the parameters $W$ and $b$ need to be learned through training. For the non-linear activation $f(\cdot)$, the conventional way is to employ a sigmoid or tanh function. However, both can encounter the saturation problem and kill the gradients during backpropagation. Recently, non-saturated functions have become prevalent, such as the Rectified Linear Unit (ReLU) [@nair2010rectified], leaky ReLU [@maas2013rectifier] and some other variants. The experiment in [@krizhevsky2012imagenet] demonstrates that ReLU can accelerate training by up to 6 times compared with the tanh function. As such, in this paper, ReLU is chosen as the activation function and Equation (\[conv1\]) can then be rewritten as: $$\label{conv2} a=\max(0,Wx+b).$$ In a CNN, the convolution and pooling layers are usually interwoven. The feature maps generated in a convolution layer can be regarded as input for the next pooling layer. As shown in Fig. \[fig1\], the patch in the $i$-th feature map of layer $l$ is pooled into the corresponding pixel of the $i$-th feature map of layer $l+1$. The pooling strategy used here can be either maximum or average pooling. From the example in Fig. \[fig1\], it can also be noticed that pooling only shrinks the feature maps, while leaving their number unchanged. In this paper, to capture the high-level properties of subcortical structures, a CNN is utilized to extract the structural signature from brain MR images. Fig. \[fig5\] illustrates the architecture of the employed network.
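Before turning to the details of the employed network, the convolution step in Equation (\[conv2\]) and the average pooling step can be sketched numerically as follows (hypothetical Python with illustrative shapes; this is a sketch of the two basic operations, not the trained network itself):

```python
import numpy as np

def conv_response(receptive_field, W, b):
    """One convolutional response in the ReLU form a = max(0, Wx + b)."""
    x = receptive_field.ravel()          # flattened input from the receptive field
    return np.maximum(0.0, W @ x + b)

def average_pool(feature_map, size=2):
    """Non-overlapping average pooling that shrinks a 2D feature map by `size` per axis."""
    h, w = feature_map.shape
    fm = feature_map[:h - h % size, :w - w % size]
    return fm.reshape(h // size, size, w // size, size).mean(axis=(1, 3))
```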
There are seven layers in the network, including six alternating convolution (C1, C2 and C3) and average pooling (omitted as dashed lines) layers, and one output layer. The input to the network is a 2D patch with the size of $20\times 20$ pixels and the two nodes in the output layer refer to the probability of each class. The feature vector to the output layer is extracted and regarded as the structural signature. Detailed parameter settings of the network can be found in Table \[tab1\]. To train the above network, each database is separated into two parts randomly in the experiments, with equal number of images as training (atlas) and testing (target) data sets. For the atlas pixels within region of interest (ROI), their surrounding patches (with the size of $20\times 20$ pixels) are extracted as training data, together with their corresponding labels. With the well trained network, we can obtain the structural signature for each pixel, by using its surrounding patch as input and extracting the feature vector before the output layer. ![CNN architecture. Red: 2D input patch; Blue: convolution layers; Green: output layer; Orange: feature vector to the output layer.[]{data-label="fig5"}](fig2){width="\linewidth"} ![image](tab1){width="\linewidth"} \[tab1\] ![image](tab2){width="0.85\linewidth"} \[tab2\] As demonstrated in [@wang2014mitosis], the performance of mitosis detection can be further improved by combining the discriminative CNN features with conventional handcrafted features, like morphology or color information. The method [@bai2015multi] also suggests that embracing high-level and low-level features yields better results in label fusion for cardiac image segmentation. Under the assumption that distinct features can assist the segmentation in a complementary way, in this paper, we extract multiple features from brain MR images and will consider the feature sensitivity during label fusion. For each pixel, besides the structural signature, the intensity values and gradient magnitudes in the surrounding cube are also assembled as feature vectors. In total, three kinds of feature vectors are generated in the proposed method, which is summarized in Table \[tab2\]. ![Illustration of Feature Matching. For a considered pixel (Gray Square) in the target image, similar pixels are searched from the atlases using each kind of feature vector. Red, Purple and Green Squares represent similar pixels found with intensity, gradient and structural signature respectively. Dashed Gray Square is the spatial constraint and those outside similar pixels will not be involved as candidate nodes.[]{data-label="fig6"}](fig6){width="0.8\linewidth"} Feature Matching {#FeaM} ---------------- After feature generation, the second stage in label fusion is to select candidate nodes from atlases. For each pixel in the target image, similar pixels can be selected from atlases using each kind of feature vector. In fact, it is the nearest neighbor (NN) problem to find similar points in real $d$-dimensional space from $N$ samples. As shown in Table \[tab2\], the dimension $d$ of our generated features (intensity, gradient and structural signature) has a value of 125, 125 and 18 respectively. Using the brute force approach to check each sample in a sequential order, the computation complexity can be $O(dN^2)$. Given the expensive computational cost of exact searching, approximate nearest neighbor (ANN) has been introduced to accelerate the searching speed. 
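As a sketch of this matching step, the snippet below builds a k-d tree over the atlas feature vectors and filters the returned neighbours with a spatial constraint (hypothetical Python; scipy's exact k-d tree is used only as a stand-in for the approximate, randomized structures discussed next, and the defaults of $k=32$ and a $9\times9\times9$ window mirror the settings used later in the experiments):

```python
import numpy as np
from scipy.spatial import cKDTree   # exact search; a stand-in for an approximate index

def build_index(atlas_features):
    """atlas_features: (N, d) array holding one kind of feature vector from all atlases."""
    return cKDTree(atlas_features)

def match_with_spatial_constraint(tree, atlas_coords, target_feature, target_coord,
                                  k=32, half_window=4):
    """Query the k nearest atlas vectors and keep only those whose voxel position lies
    inside a (2*half_window+1)^3 window around the target voxel."""
    _, idx = tree.query(target_feature, k=k)
    idx = np.atleast_1d(idx)
    offsets = np.abs(atlas_coords[idx] - np.asarray(target_coord))
    return idx[np.all(offsets <= half_window, axis=1)]
```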
In [@arya1998optimal], the ($1+\epsilon$)-approximation to the $k$ nearest neighbors can be obtained in $O(kd\log N)$ time. However, the performance of this algorithm degrades rapidly as the dimension increases and it does not apply well to high-dimensional data; the matching results can already become poor when $d$ is as high as 20. To tackle this issue, several advanced ANN approaches have been proposed for high-dimensional data, such as the randomized k-d tree [@silpa2008optimised], locality sensitive hashing [@andoni2006near] and so on. The comprehensive experiments in [@muja2014scalable] demonstrate that the randomized k-d tree and the priority search k-means tree [@muja2009fast] obtain the best results. Therefore, the feature matching component of the proposed method is built on the randomized k-d tree provided by the Fast Library for Approximate Nearest Neighbors (FLANN) toolbox [@muja2014scalable]. As shown in Fig. \[fig6\], similar pixels are selected from the atlases using the randomized k-d tree with each kind of feature vector. Considering the poor contrast conditions in brain MR images and the similar histogram profiles among adjacent tissues, the atlas pixels selected with the randomized k-d tree can belong to other structures and mislead the subsequent fusion procedure. As such, a spatial constraint is enforced in the proposed method to filter out the pixels which are too far away from the considered target pixel. Atlas pixels which do not meet the spatial constraint (i.e., outside the Dashed Gray Square) tend to be deceptively similar pixels and are therefore not included in the pool of candidate nodes. Feature Sensitive Label Prior {#FeaSLP} ----------------------------- In this paper, a novel method named Feature Sensitive Label Prior (FSLP) is proposed to capture the label prior from atlases by seeking the optimal linear combination of atlas nodes to reconstruct the feature vector of the target pixel, as illustrated in Fig. \[fig3\](a). For each considered pixel from the target image, its three kinds of features are extracted and concatenated together to formulate one augmented vector $y$. Its similar pixels selected from the atlases with Feature Matching are assembled as the dictionary $A$. Given that the confidence and significance of different features can vary considerably, the feature coefficient $\alpha_i$ is introduced to balance their influences. The optimal weight to reconstruct $y$ with dictionary $A$ is stored in the vector $\beta$. The formulation of FSLP is given as follows: ![**(a)** FSLP illustration. $y$ is the feature vector for the target pixel, concatenating intensity (Red), gradient (Purple) and structural signature (Green). $\alpha_i$ is the feature coefficient and $A$ is a dictionary constructed with atlas feature vectors. $\beta$ is the vector storing reconstruction weights. **(b)** Feature sensitive matrix $W_\alpha$. Each diagonal sub-matrix (Red $W_1$, Purple $W_2$, Green $W_3$) corresponds to one kind of feature vector.[]{data-label="fig3"}](fig5){width="\linewidth"} $$\label{fslp} \begin{split} \min_{\alpha, \beta}~~& \frac{1}{2}|W_\alpha (y-A\beta)|_2^2+\lambda |\alpha|_2^2,\\ s.t.~~ & \sum_i \alpha_i=1,~\alpha_i\geq 0. \end{split}$$ $W_\alpha$ is the feature sensitive matrix, with its definition illustrated in Fig. \[fig3\](b). $W_\alpha$ is split into three subregions (Red $W_1$, Purple $W_2$ and Green $W_3$), each corresponding to one kind of feature vector.
The diagonal elements in the sub-matrix $W_j$ are defined as: $$\label{wa} \forall w_{ii} \in W_j, w_{ii}=\frac{\alpha_j}{\sqrt{n_j}},$$ where $n_j$ is the length of the j-th feature vector. Through the division between the coefficient $\alpha_j$ and $\sqrt{n_j}$, the normalization on various features is enforced in the feature sensitive matrix. In Equation , with the regularization term on the coefficient vector $\alpha$, it guarantees that no feature dominates the whole optimization procedure. By solving Equation , optimal feature coefficient $\alpha$ and reconstruction weight $\beta$ can be obtained and label prior can be then estimated with grouped reconstruction error. However, Equation is one non-convex problem, which may have multiple local optima and can be difficult to solve. The details of the proof are given in the Appendix. To solve Equation efficiently, we also propose one solution for it in the following.  \ **Problem Solution** As discussed above, when optimizing $\alpha$ and $\beta$ simultaneously, Equation is not a convex problem. To solve this non-convex problem efficiently, one heuristic approach is proposed in this paper by seeking optimal solutions for $\alpha$ and $\beta$ alternately. The first step is to fix $\alpha$ and Equation turns into one least square problem: $$\label{fsp_a} \min_{\beta}~~ |W_\alpha (y-A\beta)|_2^2.$$ This optimization problem is convex and its solution is $\hat{\beta}=(W_\alpha A)\backslash(W_\alpha y)$. With updated $\beta$, the second step is to fix it and Equation is then simplified to one quadratic programming problem: $$\label{fsp_b} \min_{\alpha}~~ \frac{1}{2}\alpha^T \Lambda \alpha,~~~~s.t. \sum_i \alpha_i=1,~\alpha_i\geq 0,$$ where $\Lambda=\left[\begin{array}{lll} \frac{\sum f_{1j}^2}{n_1}+\lambda & ~~~~~0 & ~~~~~0\\ ~~~~~0 & \frac{\sum f_{2j}^2}{n_2}+\lambda & ~~~~~0\\ ~~~~~0 & ~~~~~0 & \frac{\sum f_{3j}^2}{n_3}+\lambda \end{array} \right],$ $f=y-A\beta=\left[\begin{array}{l} f_1\\f_2\\f_3 \end{array} \right].$ The newly introduced variables $f_1$, $f_2$ and $f_3$ are vectors related to three kinds of features, with length of $n_1$, $n_2$ and $n_3$ respectively. Equation is also a convex problem and can be solved efficiently. The proposed heuristic algorithm iterates the above two steps until either one of the following two conditions are met: the change of $\alpha$ is below a threshold or iterations exceed the predefined number. With $\alpha$ and $\beta$ acquired, the reconstruction error using each class can be estimated as follows: $$\label{rec} e_F=|W_\alpha (y- A\beta_F)|^2,~e_B=|W_\alpha (y- A\beta_B)|^2,$$ where $F$ and $B$ refers to the foreground and background respectively. $\beta_F$ and $\beta_B$ refers to the weights for the foreground and background atlas nodes respectively. With the estimated reconstruction error, FSLP is encoded as edge weight on the graph during label fusion, which will be explained in next subsection. Label Fusion with Random Walker {#labelrw} ------------------------------- Besides the FSLP gathered from atlases, the anatomical knowledge from target images is also encoded in the proposed label fusion method. As mentioned in the Introduction section, to label pixels which locate deep inside or outside a subcortical structure is relatively easier as compared with those around structural boundary. Thanks to the location advantage, even with a rough initial label map generated by affine transformation, the labels of these pixels (far away from the object boundary) can be treated as confident results. 
This kind of confidence can be propagated to less confident pixels (near boundary) through image lattice, which is regarded as anatomical prior in our method. In this paper, label fusion is formulated on an undirected graph $G=(V,E)$, where $V$ refers to a set of nodes consisting of foreground seeds $V_F$, background seeds $V_B$ and candidate nodes $V_C$. As both label and anatomical priors are employed in the proposed framework, two kinds of foreground seeds are included in $V_F$: $V_{F_a}$ from atlases and $V_{F_T}$ from the target image, similarly for $V_B$. As for $V_C$, it represents the set of nodes whose labels need to be determined during label fusion and these candidate nodes are selected from the target image. $E\subseteq V\times V$ is the set of edges $e_{ij}$ connecting nodes $v_i$ and $v_j$, with $w_{ij}$ as edge weight. Since the number and location of nodes are critical to the efficiency of segmentation algorithms, the strategy for node selection needs to be deployed carefully. As the prior from atlases has been encoded to FSLP, $V_{F_a}$ and $V_{B_a}$ can be represented with two terminal nodes and the consideration of node selection can be limited to the target image. As discussed in our previous work [@bao2014label], segmentation errors mainly lie around structural boundaries and those pixels which are far from the border can have higher label confidences. As such, in this paper, node selection is performed based on the Signed Distance Map (SDM), as illustrated in Fig. \[fig8\]. With multiple label maps provided by a set of atlases, these maps are first fused with majority voting to produce the initial label map for the target image. Then its corresponding SDM can be estimated by calculating the Euclidean distance between a pixel and its nearest neighbour on the object boundary, with positive or negative value for outside (background) or inside (foreground) respectively. Using SDM and pre-defined distance threshold $d_T$, the target seeds and candidate nodes can be identified, as displayed in Fig. \[fig8\](d). ![Node selection. **(a)** 2D slice of target intensity image; **(b)** Initial label map fused with majority voting; **(c)** Signed Distance Map of **b**; **(d)** Red (inner) layer: target foreground seeds $-(d_T+\varepsilon)\leq d_i \leq -d_T$; Black (outer) layer: target background seeds $d_T\leq d_i \leq (d_T+\varepsilon)$; Blue (middle) layer: candidate nodes $-d_T< d_i < d_T$.[]{data-label="fig8"}](fig8){width="\linewidth"} ![Graph construction for label fusion. **(a)** Orange and Purple nodes: atlas seeds. Red and Black nodes: target foreground and background seeds. $v_i$ is one candidate node and $v_j$ is one of its neighbours, with $w_{ij}$ as edge weight. FSLP is encoded to $w_{iF_a}$ and $w_{iB_a}$. **(b)** An equal graph of **a**. []{data-label="fig12"}](fig12){width="\linewidth"} With seeds and candidate nodes settled, the graph for label fusion can be constructed with edge connections, as shown in Fig. \[fig12\](a). The Orange and Purple nodes represent the atlas seeds $V_{F_a}$ and $V_{B_a}$. Red and Black nodes refer to the foreground $V_{F_T}$ and background $V_{B_T}$ seeds selected from the target image. The influences of target seeds can be propagated to candidate nodes through image lattice. 
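A minimal sketch of this node-selection step is given below (hypothetical Python; the signed distance map is approximated with two Euclidean distance transforms, and the defaults $d_T=2$, $\varepsilon=1$ follow the parameter settings reported in the experiments):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_nodes(initial_label_map, d_T=2.0, eps=1.0):
    """Split target voxels into seeds and candidate nodes using the signed distance map.

    initial_label_map : binary map fused by majority voting (1 = inside the structure).
    Returns boolean masks for foreground seeds, background seeds and candidate nodes.
    """
    inside = initial_label_map.astype(bool)
    # positive outside the structure, negative inside (an SDM up to half-voxel accuracy)
    sdm = distance_transform_edt(~inside) - distance_transform_edt(inside)
    fg_seeds = (sdm >= -(d_T + eps)) & (sdm <= -d_T)
    bg_seeds = (sdm >= d_T) & (sdm <= d_T + eps)
    candidates = (sdm > -d_T) & (sdm < d_T)
    return fg_seeds, bg_seeds, candidates
```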
The affinity between nodes with lattice connection is defined using classical Gaussian function as follows: $$\label{wij} \forall~v_j \in \mathcal{N}(v_i),~~~~w_{ij}=\exp(-\delta (I_T(v_i)-I_T(v_j))^2),$$ where $v_i$ is one candidate node, $\mathcal{N}(v_i)$ refers to its 6-nearest neighbours in 3D image, $I_T(\cdot)$ is the pixel intensity value in the target image and $\delta$ is one tuning parameter. The FSLP is encoded as the edge weight between $v_i$ and atlas seeds, with the following definition: $$w_{iF_a}=\frac{e_B}{e_F+e_B},~~~w_{iB_a}=\frac{e_F}{e_F+e_B}.$$ Given that $V_{F_a}$ and $V_{F_T}$ are all foreground seeds, the edge weights between them are supposed to be infinity. In this case, setting up an edge between $v_i$ and $V_{F_a}$ is equal to appending an edge for $v_i$ with any target foreground seed, as illustrated in Fig. \[fig12\]. In other words, the function of the atlas seeds can be replaced and FSLP can be assigned to the edges of $w_{iF_T}$ and $w_{iB_T}$ instead. In this way, the graph for label fusion can be constructed only with target nodes and the graph complexity can be greatly reduced. For graph-based image segmentation, the general energy function [@lezoray2012image] can be defined as follows: $$\label{gen} \begin{split} & E(x) =E_{unary}(x)+E_{binary}(x),\\ & =\sum _{v_i}~(w_{iF}^q|x_i-1|^p+w_{iB}^q|x_i-0|^p)+\sum _{e_{ij}}~w_{ij}^q|x_i-x_j|^p, \raisetag{2.4\baselineskip} \end{split}$$ where $x_i$ stands for the probability that node $v_i$ belongs to the foreground, with $x_F=1$ and $x_B=0$. The first unary term considers the data fidelity of each node independently and the second binary term takes the impact between connected nodes into account. By minimizing the above energy function, the optimal solution for $x$ can be obtained and the label of each node can be updated accordingly: $L(v_i)=1$ if $x_i\geq 0.5$ and $L(v_i)=0$ otherwise. As pointed out in [@couprie2011power], by assigning different values to $p$ and $q$, Equation can be adapted to several popular image segmentation models, including Graph Cuts, Random Walker, Power Watershed, and so on. However, as Graph Cuts prefers a surface with minimum energy, it can suffer from surface shrink [@vicente2008graph]. In brain MR images, as a result of poor contrast conditions around structural boundaries, the shrinkage problem can be more serious. With Power Watershed, due to the fact that edge weights dominate the optimization procedure ($q$ set to infinity), the generated boundary can be rough [@couprie2011power]. As such, to obtain a smooth and quality segmentation result, we choose to employ Random Walker (RW), with $p$ and $q$ set to $2$. Then the minimization problem discussed above can be reformulated as follows: $$\label{obj} \begin{split} \min_{x}~~& \sum _{v_i}~[w_{iF_T}^2(x_i-1)^2+w_{iB_T}^2x_i^2]+\sum _{e_{ij}}~w_{ij}^2(x_i-x_j)^2,\\ s.t.~~~ & x_{F_T}=1,~x_{B_T}=0. \raisetag{1.2\baselineskip} \end{split}$$ This equation can be viewed as a discrete Dirichlet problem and solved by using the Laplace equation with Dirichlet conditions through Graph Analysis Toolbox [@grady2003graph]. Considering RW is sensitive to seed positions [@Sinop2007], the foreground and background seeds need to be chosen carefully. As mentioned above, one fundamental step for node selection is the initial label map, whose quality depends on the choice of registration methods, for example, non-rigid or affine transformation. 
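For concreteness, minimizing Equation (\[obj\]) for the candidate nodes amounts to solving one sparse linear system; the sketch below uses scipy sparse linear algebra in place of the Graph Analysis Toolbox employed in the paper (node indexing, weight arrays and the seed encoding are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def random_walker_fusion(edges, w_lattice, w_F, w_B, seed_values):
    """Solve the quadratic Random Walker energy with unary FSLP terms.

    edges       : (E, 2) integer array of 6-neighbourhood lattice edges between target nodes
    w_lattice   : (E,) lattice weights w_ij
    w_F, w_B    : (n,) FSLP-derived unary weights (only meaningful on candidate nodes)
    seed_values : (n,) array with 1.0 on foreground seeds, 0.0 on background seeds and
                  np.nan on candidate nodes whose probability x_i is unknown
    Returns the foreground probability x for every node.
    """
    n = len(seed_values)
    i, j = edges[:, 0], edges[:, 1]
    w2 = w_lattice ** 2
    W = sp.coo_matrix((np.r_[w2, w2], (np.r_[i, j], np.r_[j, i])), shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W            # graph Laplacian

    uns = np.flatnonzero(np.isnan(seed_values))                    # candidate nodes
    sdd = np.flatnonzero(~np.isnan(seed_values))                   # seeded nodes
    A = L[uns][:, uns] + sp.diags(w_F[uns] ** 2 + w_B[uns] ** 2)
    b = -L[uns][:, sdd] @ seed_values[sdd] + w_F[uns] ** 2
    x = seed_values.copy()
    x[uns] = spsolve(A.tocsc(), b)                                 # discrete Dirichlet problem
    return x
```

Thresholding the returned probabilities at $0.5$ then yields the updated binary label map for one structure, as described above.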
To increase the robustness of the proposed label fusion approach to registration procedure, an iterative RW scheme is introduced to update the label map and gradually improve the quality of node selection. The experimental results shown in Fig. \[fig9\] also demonstrate that the segmentation accuracy can benefit from this iterative strategy and tends to be stable after several iterations. ![Overview of the proposed feature sensitive label fusion.[]{data-label="fig7"}](fig7){width="\linewidth"} The overview of the proposed method is summarized in Fig. \[fig7\]. With atlas intensity $I_A$ and label maps $I_T$, affine transformation is first carried out and an initial label map for the target image $L_{T_{init}}$ is obtained with majority voting. Node selection can be performed based on the SDM of the initial label map and the graph for label fusion can be constructed with these target nodes. With intensity values, gradient magnitude and structural signature as augmented feature vector, candidate nodes are selected from atlases and the atlas prior is gathered in the form of FSLP. With the label prior from atlases and anatomical prior from the target itself, label fusion is formulated on a graph with Random Walker and the label map $L_T$ is updated gradually through iterations until stable. Experiments =========== Databases and Preprocessing --------------------------- To evaluate the performance of the proposed method, experiments have been carried out on two publicly available MR brain image databases – IBSRand LPBA40[@shattuck2008construction]. The IBSR v2.0 database, consisting of 18 T1-weighted images with 84 manually labeled structures, is provided by the Center for Morphometries Analysis at Massachusetts General Hospital, U.S.A.. Three kinds of voxel resolutions ($mm^3$) are utilized in the IBSR database: $0.97\times 0.97\times 1.5$, $1.0\times 1.0\times 1.5$ and $0.84\times 0.84\times 1.5$. 18 healthy subjects, including 14 males and 4 females, took part in the image acquisition, with ages ranging between 7 and 71. All 18 images have been normalized to Talairach orientation and the bias field has been corrected. The LPBA40 database, consisting of 40 images with 56 manually labeled structures and skull-stripped, is provided by the UCLA Laboratory of Neuro Imaging, U.S.A.. 40 human volunteers, including 20 males and 20 females, took part in the image acquisition, with ages ranging between 19 and 40. The 40 skull-stripped volumes have been rigidly registered to the MNI305 atlas and the intensity inhomogeneity has been corrected. Detailed description of these two databases is presented in Table \[dataset\]. ![image](tab3){width="\linewidth"} \[dataset\] Given the significance of subcortical structures in clinical diagnosis, surgical planning and therapeutic assessment, in this paper, we focus on the extraction of subcortical structures from brain MR images. There are six subcortical structures labeled in IBSR database, including Amygdala, Caudate, Hippocampus, Pallidum, Putamen and Thalamus. As for the LPBA40 database, three subcortical structures are delineated: Caudate, Hippocampus and Putamen. Each of the subcortical structure has two sub-parts, located in the left and right hemispheres respectively. In the experiments, each database was separated into two parts randomly, with equal number of images as training (atlas) and testing (target) data sets. Considering the intensity inconsistency among images, histogram matching was first conducted with the Insight Toolkit. 
Then pair-wise registrations between each target image and all atlases were performed based on affine transformation, using FLIRT [@jenkinson2002improved] provided by the FSL toolbox [@jenkinson2012fsl]. With multiple label maps generated with the various atlases, majority voting was applied to generate the initial label map, and these results were also employed as the baseline during comparison. Segmentation Results -------------------- In the Methodology section, the label fusion method with Random Walker was designed for binary segmentation. In the experiments, however, there are several subcortical structures of interest in one brain volume and some of them can be adjacent to each other. Directly applying binary segmentation to each structure independently may cause some inconsistencies around the neighboring areas. As such, it is necessary to extend the binary segmentation to multi-class segmentation in a refined way. Distinct from other graph-based approaches (like Graph Cuts or Markov Random Fields), Random Walker produces a probability map rather than a discrete label map, indicating the probability that each pixel belongs to the foreground. After applying Random Walker to each structure, we can obtain a vector $(p_{i1}, p_{i2},\cdots,p_{iK})$ for each pixel $v_i$, where $K$ is the total number of subcortical structures. $p_{ij}$ represents the probability that $v_i$ belongs to the $j$-th subcortical structure. As for the background probability, $p_{i0}$ is assigned as $1-\max(p_{i1}, p_{i2},\cdots,p_{iK})$. Then the probability distribution over the $K+1$ classes, including the background and the multiple structures, can be estimated with the softmax function (normalized exponential function). The category with the largest probability is assigned as the final label for each pixel. The Dice Coefficient (DC) is utilized to evaluate the quality of label fusion. In the proposed framework, the iterative strategy is exploited to update the target label map $L_T$ gradually. To test the effect of the iterations, experiments have been conducted on the LPBA40 database with the available subcortical structures, and the segmentation results at each iteration are recorded and displayed in Fig. \[fig9\]. It can be observed that the segmentation accuracy increases with the number of iterations and tends to remain stable after three iterations. ![Segmentation results by our method at each iteration on LPBA40 database, measured with DC.[]{data-label="fig9"}](fig9){width="\linewidth"} Another set of experiments has been carried out to check how the number of candidate nodes influences the segmentation quality. As discussed in Feature Matching, with each kind of feature vector, a set of similar pixels can be collected from the atlases with the randomized k-d tree, and the pool of candidate nodes can be further determined with the spatial constraint. In Fig. \[fig10\], the horizontal axis refers to the number of similar pixels selected with each kind of feature. In the Upper subfigure, the Blue curve displays the count of candidate nodes with the three features, which indicates that, with the spatial constraint, only a portion of the similar pixels is kept in the candidate node pool. The Green curve shows the percentage of candidate nodes among the similar pixels; the percentage decreases as the number of similar pixels increases, which can be caused by disturbances from adjacent tissues with similar profiles.
In the Bottom subfigure, the segmentation accuracy measured with DC is displayed and the peak of the performance lies around 32 similar pixels. Fig. \[fig10\] demonstrates that segmentation quality is not proportional to the number of candidate nodes and the setting of 32 similar pixels gives the best performance. ![The effects of the setting of similar pixel amount on candidate nodes and segmentation quality with LPBA40 database. (a) Blue curve, amount of candidate nodes; Green curve, the percentage of candidate nodes among similar pixels. (b) Segmentation result measured with DC.[]{data-label="fig10"}](fig10){width="0.96\linewidth"} ![Parameters selection for compared methods, measured with DC.[]{data-label="compare"}](fig13){width="0.85\linewidth"} Based on the preliminary test on LPBA40, for the experiments on IBSR, the iteration was set to 3 and the number of similar atlas nodes was set to 32. The input patch size for various features follows Table \[tab2\] and the spatial constraint during Feature Matching was set to $9\times 9\times 9$. In FSLP estimation, rather than choosing a fixed $\lambda$ value in Equation , it was set to be adaptive $\lambda=\frac{1}{3}(\frac{\sum f_{1j}^2}{n_1}+\frac{\sum f_{2j}^2}{n_2}+\frac{\sum f_{3j}^2}{n_3})$ in each iteration. The settings of the rest parameters are listed as follows: signed distance threshold $d_T=2$ and $\varepsilon=1$ for node selection, and the tuning parameter used in Equation was set to $\delta=5$. ![image](tab4){width="0.98\linewidth"} \[reibsr\] ![image](fig18){width="\linewidth"} There are several existing softwares which support the automatic segmentation function for brain MR images, for example, BrainSuite [@shattuck2002brainsuite] or FreeSurfer [@fischl2012freesurfer]. Therefore, we decided to utilize BrainSuite, one of the available softwares, to label images in the IBSR and LPBA40 databases as a reference during evaluation. BrainSuite first runs surface/volume registration using the high-resolution ($0.5mm\times 0.5mm \times 0.8mm$) BCI-DNI\_brain atlas and then warps the label map from the atlas to the target image. Besides the reference BrainSuite and the baseline majority voting (MV), the comparison with the conventional patch-based label fusion (PBL) [@coupe2011patch] has been made for evaluation. Considering multiple features employed in the proposed method, it was also compared with the state-of-the-art method – patch-based label fusion with augmented features (PBAF) [@bai2015multi]. In addition to intensity information, the spatial and context features are also appended for label fusion in PBAF. The implementations of PBL and PBAF provided by [@bai2015multi] were used in the experiments. To keep consistent in the evaluation, the patch size for PBL and PBAF was set to $5\times 5\times 5$ and the size of search volume was set to $9\times 9\times 9$. For PBL, the key parameter is the Gaussian kernel value and it was tested from $\{1,10,100,1000,10000,100000\}$ on two databases, measured with DC. From Fig. \[compare\](a), it can be observed that $10$ and $10000$ gives the best performance on IBSR and LPBA40 respectively. For PBAF, the parameter setting of pre-selected atlas nodes amount was tested from $\{2,16,32,64,128,256,512\}$ on two databases. Fig. \[compare\](b) indicates that $128$ can obtain the best result and the accuracy starts to decrease a little after the peak. 
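For reference, the multi-class label assignment described above and the DC measure used throughout these comparisons can be sketched as follows (hypothetical Python; array layouts are illustrative):

```python
import numpy as np

def fuse_multiclass(prob_maps):
    """Combine per-structure Random Walker probability maps into one label map.

    prob_maps : (K, X, Y, Z) array, prob_maps[k] = probability of structure k+1.
    Returns an integer label map with 0 for background and 1..K for the structures.
    """
    p_bg = 1.0 - prob_maps.max(axis=0)
    stacked = np.concatenate([p_bg[None], prob_maps], axis=0)
    soft = np.exp(stacked) / np.exp(stacked).sum(axis=0, keepdims=True)   # softmax over K+1 classes
    return soft.argmax(axis=0)

def dice_coefficient(segmentation, ground_truth):
    """Dice Coefficient between a binary segmentation and the ground truth."""
    seg, gt = segmentation.astype(bool), ground_truth.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())
```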
The quantitative segmentation results on the two databases generated with our method and the compared methods are listed in Table \[reibsr\], with the highest DC values written in Red. For the six subcortical structures delineated in the IBSR database, the accuracies on the left and right sub-parts are listed respectively, separated by a hyphen. The segmentation quality for the available subcortical structures on the LPBA40 database is also reported in this table. Although BrainSuite utilizes a high-resolution atlas, the patch-based methods (PBL, PBAF and our method), which rely on the low-resolution atlases inside the database, obtain much better performance. When compared with the baseline MV, our approach achieves considerable increases of 16.7% and 9.3% on the two databases respectively. The proposed method can still outperform the preeminent label fusion method PBAF by 3.2% and 0.8%. In Fig. \[revis\], we also present some visual results for 2D slices selected from the 3D brain MR image volumes. Each row shows the original intensity slice, its corresponding ground truth, and the segmentation results generated with the compared methods and our method. The figures in the upper 3 rows are selected from the IBSR database and those in the bottom 2 rows are from the LPBA40 database. The first column (a) displays the 2D intensity slices from the brain MR images, with the ground truth shown in column (b) for reference. The segmentation results generated with MV, PBL, PBAF and our method are shown in columns (c) to (f). It can be observed that our method obtains better segmentation quality. As compared with MV, the shapes of the subcortical structures labeled by our method are closer to the ground truth. In the labeling results of the patch-based methods PBL and PBAF, some structures have isolated segments and the structural boundaries are relatively rough as compared with those of our method. Further Discussion ------------------ ![Comparison of the segmentation results on IBSR database using structural signature (extracted by CNN) alone, multiple features with equal weights (EW) and Feature Sensitive Label Prior (FSLP), measured with DC.[]{data-label="ew_fslp"}](fig19){width="\linewidth"} There is an underlying assumption behind the proposed Feature Sensitive Label Prior (FSLP): distinct features can assist the segmentation in a complementary way. To test the effect of utilizing multiple features, we compare the preliminary FSLP results with label fusion using the structural signature alone, as displayed in Fig. \[ew\_fslp\]. In FSLP, besides the discriminative feature (the structural signature extracted by CNN), the less discriminative features (intensity and gradient) are also employed during label fusion. To further evaluate our feature sensitivity strategy, we also compare with label fusion using fixed uniform feature coefficients, i.e., $\alpha_1=\alpha_2=\alpha_3$, and the results with equal weights (EW) are included in Fig. \[ew\_fslp\]. These results demonstrate that embracing distinct features yields better performance and that the feature sensitivity strategy consistently improves the segmentation quality. It is worth noting that FSLP is a general method to capture the label prior from multiple features and its usage is not limited to these three kinds of features. Other features, such as Local Binary Pattern (LBP) [@ojala2002multiresolution] or Histogram of Oriented Gradients (HoG) [@dalal2005histograms], can also be encoded in FSLP to improve the performance.
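A minimal sketch of how the diagonal of the feature-sensitive matrix in Equation (\[wa\]) extends to an arbitrary collection of feature blocks (for instance with an LBP or HoG descriptor appended to the augmented vector) is given below; names and lengths are illustrative:

```python
import numpy as np

def feature_sensitive_weights(alphas, lengths):
    """Diagonal of W_alpha for an arbitrary number of feature types: each block of
    length n_j receives the constant weight alpha_j / sqrt(n_j).

    alphas  : per-feature coefficients summing to one
    lengths : length n_j of each feature block, e.g. [125, 125, 18] plus any extra descriptor
    """
    return np.concatenate([np.full(n, a / np.sqrt(n)) for a, n in zip(alphas, lengths)])

# The weighted residual of the FSLP data term is then simply
#   r = feature_sensitive_weights(alphas, lengths) * (y - A @ beta)
# and its squared norm is the quantity minimised over alpha and beta.
```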
For the computation cost of the proposed method, there are four main components to be considered: feature generation, feature matching, FSLP and label fusion with Random Walker. During feature generation, three kinds of features are generated: intensity, gradient and structural signature. The complexity of intensity and gradient extraction is $O(dN)$, where $d$ is the feature length. As for the structural signature, it only needs one forward pass through the CNN network to obtain the feature vector. As discussed in Section \[FeaM\], the feature matching process is carried out with the efficient randomized k-d tree algorithm. The FSLP is a non-convex problem and one heuristic approach is designed for it by alternately solving two convex problems. Given that the value of objective function will decrease strictly during each iteration and this non-singular function is lower-bounded by a finite value, the heuristic approach will converge after several iterations [@kushner2012stochastic]. As both two convex problems (least square and quadratic programming) can be solved efficiently, the process to estimate FSLP can be finished in a short time. The last step is the label fusion with Random Walker, which is a discrete Dirichlet problem and can be solved efficiently using the Laplacian equation with the Dirichlet conditions. In total, the running time for labeling one sub-cortical structure in one target image is around 1.5 minutes using the proposed label fusion method (on a 3.1GHz, Quad-Core CPU with 8GB RAM machine), as compared with 10 minutes using PBAF. Besides the volume overlap measurement Dice Coefficient (DC), we also evaluate the segmentation quality on IBSR database with one distance measurement – Hausdorff Distance (HD). The segmentation results of six subcortical structures are shown in Fig. \[ibsrhd\], measured with HD. Although PBL can obtain higher DC values than MV, its performance is a little worse when measured with HD. This phenomenon may be caused by the lack of label consistency within the subcortical structures, as many holes and outliers exist in the labeled region (as shown in Fig. \[revis\](d)). The results measured with HD demonstrate that our method can still obtain competitive performance as compared with the state-of-the-art methods. ![Segmentation quality on IBSR database, measured with HD.[]{data-label="ibsrhd"}](fig17){width="\linewidth"} ![Comparison of the labeling result generated with FSLP and the complete proposed method on IBSR database, measured with DC.[]{data-label="lref"}](fig15){width="\linewidth"} ![image](fig16){width="0.9\linewidth"} In the proposed method, we collect FSLP from atlases to capture the relationships between local intensity profiles and tissue labels, and utilize anatomical priors from target image to assist graph-based label fusion. To check the effects of two priors in detail, the preliminary segmentation with FSLP is estimated and compared with the result after label fusion. Based on Equation , the intermediate labeling result by FSLP can be generated by assigning labels to pixels with minimum reconstruction error. The comparison has been carried out on IBSR database, with quantitative results measured with DC. Fig. \[lref\] indicates that embracing anatomical priors during label fusion can bring consistent improvements for the labeling of each subcortical structure. Some visual segmentation results for each subcortical structure on IBSR database are displayed in Fig. \[lrefv\]. 
Each row presents the 3D labeled volumes of the ground truth (for reference), the intermediate labeling by FSLP, and the final segmentation after graph-based label fusion with anatomical priors. As shown in column (b), the labeling result of FSLP also suffers from the hole and outlier problems (red circles) of conventional patch-based methods. By introducing anatomical priors as graph seeds and lattice connections to enforce label consistency, the structural boundary becomes smoother and the segmentation quality is improved significantly, although some defects remain, as shown in column (c). The graph-based label fusion with Random Walker is an essential component of the proposed method, and it could also be utilized in the future to improve the labeling results of other conventional patch-based methods. Conclusion ========== In this paper, a novel framework for atlas-based image segmentation is proposed. It can effectively encode both the label priors from the atlases and the anatomical prior from the target image. Three kinds of features are employed to represent a pixel, including conventional intensity values and gradient magnitudes, together with the newly designed structural signature. Besides the FSLP from the atlases, the anatomical prior from the target itself is also employed for the final label estimation. The label fusion process is formulated on a graph with Random Walker, with the priors encoded as edge weights. Although atlas seeds are involved in the graph construction, an equivalent but simpler graph can be inferred that relies only on nodes from the target image. An iterative strategy is employed to update the target label map gradually. The proposed framework has been compared with other state-of-the-art methods for comprehensive evaluation, and the experimental results indicate that it consistently obtains better label fusion quality on two publicly available databases.\ **Proof of Non-convexity** To demonstrate the non-convexity of Equation , a simple instance is first proven to be non-convex, from which it can be derived that the original problem is also non-convex. Consider the simplified scenario in which the length of each feature vector is 1; the formulation of $W_\alpha$ then becomes $W_\alpha=\left[\begin{array}{lll} \alpha_1 & 0 & 0\\ 0 & \alpha_2 & 0\\ 0 & 0 & \alpha_3 \end{array} \right].$ By defining a new variable $f=y-A\beta=\left[\begin{array}{l} f_1\\f_2\\f_3 \end{array} \right]$, the initial problem turns into: $$\label{pro} \begin{split} \min~~& E=\alpha_1^2f_1^2+\alpha_2^2f_2^2+\alpha_3^2f_3^2+\lambda (\alpha_1^2+\alpha_2^2+\alpha_3^2),\\ s.t.~~ & \sum_i \alpha_i=1,~\alpha_i\geq 0. \end{split}$$ Let us consider the special case $f_1=f_2=f_3=f_*$ and use $\eta^2$ to represent $\alpha_1^2+\alpha_2^2+\alpha_3^2$ for simplification. Equation can be rewritten as follows: $$\label{relaxfsp} \begin{split} \min~~& E=\eta^2f_*^2+\lambda \eta^2,\\ s.t.~~ & 0 \leq \eta^2 \leq 1. \end{split}$$ Given the constraints $\sum_i \alpha_i=1$ and $\alpha_i\geq 0$, the range of $\eta^2$ is contained in $0 \leq \eta^2 \leq 1$. If the special case shown in Equation is non-convex, it can be inferred that Equation is also non-convex. For this special case, it is much easier to determine whether it is convex or not. A twice-differentiable problem is convex if and only if its Hessian matrix is positive semidefinite [@boyd2004convex].
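As a quick numerical sanity check of this non-convexity claim (a sketch that is not part of the original proof), a single midpoint-convexity violation of the simplified objective suffices; the concrete values below are arbitrary illustrative choices.

```python
# Midpoint convexity test for E(eta, f) = eta^2 * f^2 + lam * eta^2 on the
# strip 0 <= eta <= 1.  One violating pair already shows that the simplified
# problem (and hence the original one) is not jointly convex.
import numpy as np

def E(eta, f, lam):
    return eta**2 * f**2 + lam * eta**2

lam = 0.1
p = np.array([1.0, 0.0])   # (eta, f_*)
q = np.array([0.0, 2.0])
mid = 0.5 * (p + q)

lhs = E(*mid, lam)                      # value at the midpoint
rhs = 0.5 * (E(*p, lam) + E(*q, lam))   # average of the endpoint values
print(lhs, rhs, lhs > rhs)              # lhs > rhs, so convexity is violated
```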
The Hessian matrix of Equation can be calculated as follows: $$\label{hes} H=\left[\begin{array}{ll} ~\frac{\partial^2E}{\partial \eta^2} & \frac{\partial^2E}{\partial \eta \partial f_*}\\ \frac{\partial^2E}{\partial f_* \partial \eta} & ~\frac{\partial^2E}{\partial f_*^2} \end{array} \right] =\left[\begin{array}{ll} 2f_*^2+2\lambda & 4f_*\eta\\ ~~4f_*\eta & ~2\eta^2 \end{array} \right]$$ If $H$ is positive semidefinite, all of its eigenvalues have to be non-negative. The determinant of $H-\gamma I$ is $\det(H-\gamma I)=\gamma ^2-2(f_*^2+\eta^2+\lambda)\gamma+4\lambda \eta^2-12f_*^2\eta^2$. The eigenvalues of $H$ are $\gamma=(f_*^2+\eta^2+\lambda)\pm \sqrt{(f_*^2+\eta^2+\lambda)^2-(4\lambda \eta^2-12f_*^2\eta^2)}$. As it is not guaranteed that $\lambda > 3f_*^2$, a negative eigenvalue can appear. In this case, the Hessian matrix is not positive semidefinite and the special case shown in Equation is non-convex. It can be inferred that the original Equation is also a non-convex problem.  \ [^1]: This work was supported in part by the Hong Kong Research Grants Council under Grant 16203115. [^2]: S. Bao is with the Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong (e-mail: [email protected]). [^3]: A. Chung is with the Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong (e-mail: [email protected]).
--- abstract: | Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics. In recent years, many different drift theorems, including additive, multiplicative and variable drift, have been developed, applied and partly generalized or adapted to particular processes. A comprehensive overview article was missing. We provide not only such an overview but also present a universal drift theorem that generalizes virtually all existing drift theorems found in the literature. On the one hand, the new theorem bounds the expected first hitting time of optimal states in the underlying stochastic process. On the other hand, it also allows for general upper and lower tail bounds on the hitting time, which were not known before except for the special case of upper bounds in multiplicative drift scenarios. As a proof of concept, the new tail bounds are applied to prove very precise sharp-concentration results on the running time of the (1+1) EA on OneMax, general linear functions and LeadingOnes. Moreover, user-friendly specializations of the general drift theorem are given. author: - | Per Kristian Lehre\ School of Computer Science\ University of Nottingham\ Nottingham, NG8 1BB\ United Kingdom - | Carsten Witt\ DTU Compute\ Technical University of Denmark\ 2800 Kgs. Lyngby\ Denmark bibliography: - 'drift.bib' title: General Drift Analysis with Tail Bounds --- Introduction ============ Runtime analysis is a rather recent and increasingly popular approach in the theory of randomized search heuristics. Typically, the aim is to analyze the (random) time until one goal of optimization (optimum found, good approximation found, etc.) is reached. This is equivalent to deriving the first hitting time for a set of states of an underlying (discrete-time) stochastic process. Drift analysis has turned out to be one of the most powerful techniques for runtime analysis. In a nutshell, drift is the expected progress of the underlying process from one time step to another. An expression for the drift is turned into an expected first hitting time via a drift theorem. An appealing property of such a theorem is that a local property (the one-step drift) is translated into a global property (the first hitting time). [@SasakiHajek1988] introduced drift analysis to the analysis of randomized search heuristics (more precisely, of simulated annealing), and [@HeYao01] were the first to apply drift analysis to evolutionary algorithms. The latter paper presents a drift theorem that is nowadays called *additive drift*. Since then, numerous variants of drift theorems have been proposed, including upper and lower bounds in the scenario of *multiplicative drift* [@DJWMultiplicativeAlgorithmica; @LehreWittAlgorithmica12], variable drift [@Johannsen10; @MitavskiyVariable; @DFWVariable; @RoweSudholtChoice] and generalizations thereof, e.g., variable drift without monotonicity conditions [@DoerrHotaKoetzingGECCO12; @FeldmannKoetzingFOGA13]. Moreover, considerable progress was made in the development of so-called distance functions used to model the process analyzed by drift analysis [@DoerrGoldbergAdaptive; @WittCPC13]. The powerful drift theorems available so far allow for the analysis of randomized search heuristics, in particular evolutionary algorithms and ant colony optimization, on example problems and on problems from combinatorial optimization.
See also the text books by [@AugerDoerrBook], [@NeumannWittBook] and [@JansenBook] for detailed expositions of the state of the art in the runtime analysis of randomized search heuristics. At present, the exciting and powerful research done in drift analysis is scattered over the literature. Existing formulations of similar theorems may share many details but deviate in minor conditions. Notation is not always consistent. Several existing variants of drift theorems contain assumptions that might be convenient to formulate, e.g., Markovian properties and discrete or finite search spaces; however, it was not always clear which assumptions were really needed and whether the drift theorem was general enough. This is one reason why additional effort was spent on removing the assumption of discrete search spaces from multiplicative and variable drift theorems [@FeldmannKoetzingFOGA13] – an effort that, as we will show, was not really required. Our work makes two main contributions to the area of drift analysis. The first one is represented by a “universal” formulation of a drift theorem that strives for as much generality as possible. We can provably identify all of the existing drift theorems mentioned above as special cases. While doing this, we propose a consistent notation and remove unnecessary assumptions such as discrete search spaces and Markov processes. In fact, we even identify another famous technique for the runtime analysis of randomized search heuristics, namely *fitness levels* [@SudholtTEC13], as a special case of our general theorem. **Caveat.** When we say “all” existing drift theorems, we exclude a specific but important scenario from our considerations. Our paper only considers the case that the drift is directed towards the target of optimization. The opposite case, i.e., scenarios where the process moves away from the target, is covered by the lower bounds from the so-called *simplified/negative drift* theorem [@OlivetoW11], which states rather different conditions and implications. The conditions and generality of the latter theorem were scrutinized in a recent erratum [@OlivetoWittErratumDriftArxiv]. The second contribution is represented by tail bounds, also called deviation bounds or concentration inequalities, on the hitting time. Roughly speaking, conditions are provided under which it is unlikely that the actual hitting time is above or below its expected value by a certain amount. Such tail bounds were not known before in drift analysis, except for the special case of upper tail bounds in multiplicative drift [@DoerrGoldbergAdaptive]. In particular, our drift theorem is the first to prove lower tails. We use these tail bounds in order to prove very sharp concentration bounds on the running time of the (1+1) EA on OneMax, general linear functions and LeadingOnes. Up to minor details, the following is shown for the running time $T$ of the (1+1) EA on OneMax (and the same holds on all linear functions): the probability that $T$ deviates (from above or below) from its expectation by an additive term of $rn$ is $e^{-\Omega(r)}$ for any constant $r>0$. For LeadingOnes, a deviation by $rn^{3/2}$ from the expected value is proved to have probability $e^{-\Omega(r)}$. Such sharp-concentration results are extremely useful from a practical point of view since they reveal that the process is “almost deterministic”, such that very precise predictions of its actual running time can be made.
Moreover, the concentration inequalities allow a change of perspective that tells what progress can be achieved within a certain time budget; see the recent line of work on fixed-budget computations [@JansenZarges2012; @DJWZGECCO13]. This paper is structured as follows. Section \[sec:prel\] introduces notation and basics of drift analysis. Section \[sec:tail-variable\] presents the general drift theorem, its proof and suggestions for user-friendly corollaries. Afterwards, specializations are discussed. Section \[sec:specialcasesvariable\] shows how the general drift theorem is related to known variable drift theorems, and Section \[sec:specialcasesother\] specializes our general theorem into existing multiplicative drift theorems. The fitness level technique, both for lower and upper bounds, is identified as a special case in Section \[sec:fitness\]. Section \[sec:applyingtail\] is devoted to the tail bounds contained in the general drift theorem. It is shown how they can directly be applied to prove sharp-concentration results on the running time of the (1+1) EA on OneMax and general linear functions. Moreover, a more user-friendly special case of the theorem with tail bounds is proved and used to show sharp-concentration results for LeadingOnes. We finish with some conclusions. Preliminaries {#sec:prel} ============= #### Stochastic process. Throughout this paper, we analyze time-discrete stochastic processes represented by a sequence of non-negative random variables $(X_t)_{t\ge 0}$. For example, $X_t$ could represent the number of zero- or one-bits of an EA at generation $t$, a certain distance value of a population-based EA from an optimal population, etc. In particular, $X_t$ might aggregate several different random variables realized by a search heuristic at time $t$ into a single one. We do not care whether the state space is discrete (e.g., all non-negative integers or even a finite subset thereof) or continuous. In discrete search spaces, the random variables $X_t$ will have a discrete support; however, this is not important for the formulation of the forthcoming theorems. #### First hitting time. We adopt the convention that the process should pass below some threshold $a\ge 0$ (“minimizes” its state) and define the first hitting time $T_a:=\min\{t\mid X_t\le a\}$. If the actual process seeks to maximize its state, typically a straightforward mapping allows us to stick to the convention of minimization. In a special case, we are interested in the hitting time $T_0$ of state $0$; for example, when a (1+1) EA is run on OneMax and we are interested in the first point of time where the number of zero-bits becomes zero. Note that $T_a$ is a stopping time and that we tacitly assume that the stochastic process is adapted to its natural filtration $\filt:=(X_0,\dots,X_t)$, i.e., the information available up to time $t$. #### Drift. The expected one-step change $\delta_t:=\E{X_t - X_{t+1} \mid \filt}$ for $t\ge 0$ is called drift. Note that $\delta_t$ in general is a random variable since the outcomes of $X_0,\dots,X_t$ are random. Suppose we manage to bound $\delta_t$ from below by some $\delta^*>0$ for all possible outcomes of $\delta_t$, where $t<T_0$. Then we know that the process decreases its state (“progresses towards $0$”) in expectation by at least $\delta^*$ in every step, and the additive drift theorem (see Theorem \[theo:additive\] below) will provide a bound on $T_0$ that only depends on $X_0$ and $\delta^*$. In fact, the very natural-looking result $\E{T_0\mid X_0} \le X_0/\delta^*$ will be obtained.
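To make these definitions concrete, the following sketch estimates the one-step drift and the hitting time $T_0$ empirically for a toy process, namely the number of zero-bits under standard bit mutation with elitist acceptance; the process and all parameters are assumptions of this illustration, not part of the paper.

```python
# Illustration of the definitions above: X_t = number of zero-bits of a simple
# elitist mutate-and-select process on n bits.  We estimate the one-step drift
# E[X_t - X_{t+1} | X_t = i] and the hitting time T_0 empirically.
import random

def one_step(i, n):
    """One elitist step: flip each bit with prob. 1/n, keep the better state."""
    flipped_zeros = sum(random.random() < 1 / n for _ in range(i))
    flipped_ones = sum(random.random() < 1 / n for _ in range(n - i))
    child = i - flipped_zeros + flipped_ones
    return min(i, child)              # elitism: smaller number of zeros wins

def empirical_drift(i, n, samples=50_000):
    return sum(i - one_step(i, n) for _ in range(samples)) / samples

def hitting_time(n):
    x, t = n // 2, 0
    while x > 0:
        x, t = one_step(x, n), t + 1
    return t

n = 100
print(empirical_drift(50, n))         # roughly (1 - 1/n)^(n-50) * 50/n
print(sum(hitting_time(n) for _ in range(50)) / 50)
```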
However, bounds on the drift might be more complicated. For example, a bound on $\delta_t$ might depend on $X_t$ or on states at even earlier points of time, e.g., if the progress decreases as the current state decreases. This is often the case in applications to evolutionary algorithms. It is not so often the case that the whole “history” is needed. Simple evolutionary algorithms and other randomized search heuristics are Markov processes such that simply $\delta_t=\E{X_t-X_{t+1}\mid X_t}$. With respect to Markov processes on discrete search spaces, drift conditions traditionally use conditional expectations such as $\E{X_t - X_{t+1} \mid X_t=i}$ and bound these for arbitrary $i>0$ instead of directly bounding the random variable $\E{X_t - X_{t+1} \mid X_t}$ on $X_t>0$. #### Caveat. As pointed out, the drift $\delta_t$ in general is a random variable and should not be confused with the “expected drift” $\E{\delta_t}=\E{E(X_t - X_{t+1} \mid \filt)}$, which is rarely available since it averages over the whole history of the stochastic process. Drift is based on the inspection of the progress from one step to another, taking into account every possible history. This one-step inspection often makes it easy to come up with bounds on $\delta_t$. Drift theorems could also be formulated based on expected drift; however, this might be tedious to compute. See [@Jagerskupper11] for one of the rare analyses of “expected drift”, which we will not get into in this paper. We now present the first formal drift theorem dealing with additive drift. It is based on a formulation by [@HeYao01], from which we removed some unnecessary assumptions, more precisely the discrete search space and the Markov property. We only demand a bounded state space. \[theo:additive\] Let $(X_t)_{t\ge 0}$ be a stochastic process over some bounded state space $S\subseteq \R_0^+$. Assume that $\E{T_0 \mid X_0}<\infty$. Then: 1. If $E(X_t-X_{t+1} \mid \filt; X_t > 0) \ge \delta_{\mathrm{u}}$ then $\E{T_0\mid X_0} \le \frac{X_0}{\delta_{\mathrm{u}}}$. 2. If $E(X_t-X_{t+1} \mid \filt) \le \delta_{\mathrm{\ell}}$ then $\E{T_0\mid X_0} \ge \frac{X_0}{\delta_{\ell}}$. By applying the law of total expectation, Statement $(i)$ implies $\E{T_0} \le \frac{\E{X_0}}{\delta_{\mathrm{u}}}$ and analogously for Statement $(ii)$. For the sake of completeness, we also provide a simple proof using martingale theory, inspired by [@Lehre12DriftTutorial]. This proof is simpler than the original one by [@HeYao01]. [Theorem \[theo:additive\]]{} We prove only the upper bound since the lower bound is proven symmetrically. We define $Y_t=X_t + t\delta_{\mathrm{u}}$. Note that as long as $t<T_0$, $Y_t$ is a supermartingale with respect to $X_0,\dots,X_t$, more precisely by induction $$\begin{aligned} \E{Y_{t+1} \mid X_0,\dots,X_t} & \;=\; \E{X_{t+1} + (t+1)\delta_{\mathrm{u}} \mid X_0,\dots,X_t} \\ & \; \le \; X_{t} -\delta_{\mathrm{u}} + (t+1) \delta_{\mathrm{u}} \;=\; Y_t,\end{aligned}$$ where the inequality uses the drift condition. Since the state space is bounded and $\E{T_0\mid X_0}<\infty$, we can apply the optional stopping theorem and get $0+\E{T_0\mid X_0} \delta_{\mathrm{u}} =\E{Y_{T_0}\mid X_0} \le Y_0 = X_0$. Rearranging terms, the theorem follows. Summing up, additive drift is concerned with the very simple scenario that there is progress of at least $\delta_{\mathrm{u}}$ from all non-optimal states towards the optimum in $(i)$ and progress of at most $\delta_{\mathrm{\ell}}$ in $(ii)$.
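The additive drift bound can be checked on a toy process whose drift is exactly $p$ in every non-optimal state, so that both statements of the theorem are tight; the sketch below is such an illustration with arbitrarily chosen parameters.

```python
# Toy illustration of additive drift: from every non-optimal state the process
# decreases by 1 with probability p and stays put otherwise, so the drift is
# exactly p and Theorem [theo:additive] predicts E[T_0 | X_0] = X_0 / p.
import random

def hitting_time(x0, p):
    x, t = x0, 0
    while x > 0:
        if random.random() < p:
            x -= 1
        t += 1
    return t

p, x0, runs = 0.3, 50, 2000
avg = sum(hitting_time(x0, p) for _ in range(runs)) / runs
print(avg, x0 / p)   # empirical mean vs. the additive drift bound X_0 / p
```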
Since the $\delta$-values are not allowed to depend on $X_t$, one has to use the worst-case drift over all $X_t$. This might lead to very bad bounds on the first hitting time, which is why more general theorems (as mentioned in the introduction) were developed. It is interesting to note that these more general theorems are often proved based on Theorem \[theo:additive\] above by using an appropriate mapping from the original state space to a new one. Informally, the mapping “smoothes out” position-dependent drift into an (almost) position-independent drift. We will use the same approach in the following. General Drift Theorem {#sec:tail-variable} ===================== In this section, we present our general drift theorem. As pointed out in the introduction, we strive for a very general statement, which is partly at the expense of simplicity. More user-friendly specializations will be proved in the following sections. Nevertheless, the underlying idea of the complicated-looking general theorem is the same as in all drift theorems. We look into the one-step drift $\E{X_t-X_{t+1} \mid \filt}$ and assume we have an (upper or lower) bound $h(X_t)$ on the drift, which (possibly heavily) depends on $X_t$. Based on $h$, a new function $g$ is defined with the aim of “smoothing out” the dependency, and the drift with respect to $g$ (formally, $\E{g(X_t)-g(X_{t+1})\mid \filt}$) is analyzed. Statements $(i)$ and $(ii)$ of the following Theorem \[theo:main\] provide bounds on $\E{T_a}$ based on the drift with respect to $g$. In fact, $g$ is defined in a very similar way as in existing variable-drift theorems, such that Statements $(i)$ and $(ii)$ can be understood as generalized variable drift theorems for upper and lower bounds on the expected hitting time, respectively. Statement $(ii)$ is also valid (but useless) if the expected hitting time is infinite. Sections \[sec:specialcasesvariable\]–\[sec:fitness\] study specializations of these first two statements into existing variable and multiplicative drift theorems. Statements $(iii)$ and $(iv)$ are concerned with tail bounds on the hitting time. Here, moment-generating functions of the drift with respect to $g$ come into play; formally, $E(e^{-\lambda (g(X_t)-g(X_{t+1}))}\mid \filt)$ is bounded. Again for the sake of generality, bounds on the moment-generating function may depend on the current state $X_t$, as captured by the bounds $\beta_{\mathrm{u}}(X_t)$ and $\beta_{\mathrm{\ell}}(X_t)$. We will see an example in Section \[sec:applyingtail\] where the mapping $g$ smoothes out the position-dependent drift into a (nearly) position-independent drift, while the moment-generating function of the drift with respect to $g$ still heavily depends on the current position $X_t$. \[theo:main\] Let $(X_t)_{t\ge 0}$ be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin\ge 0$. Let $h\colon [\xmin,\xmax]\to\R^+$ be an integrable function and define $g\colon \{0\}\cup [\xmin,\xmax]\to \R^{\ge 0}$ by $g(x) := \frac{\xmin}{h(\xmin)} + \int_{\xmin}^x \frac{1}{h(y)} \,\mathrm{d}y$ for $x\ge \xmin$ and $g(0):=0$. Let $T_a=\min\{t\mid X_t\le a\}$ for $a\in \{0\}\cup [\xmin,\xmax]$. Then: 1. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \ge h(X_t)$ and $E(g(X_t)-g(X_{t+1}) \mid \filt; X_t\ge \xmin)\ge \alpha_{\mathrm{u}}$ for some $\alpha_{\mathrm{u}}>0$ then $E(T_0\mid X_0) \le \frac{g(X_0)}{\alpha_{\mathrm{u}}}$. 2.
If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \le h(X_t)$ and $E(g(X_t)-g(X_{t+1}) \mid \filt; X_t\ge \xmin)\le \alpha_{\mathrm{\ell}}$ for some $\alpha_{\mathrm{\ell}}>0$ then $E(T_0\mid X_0) \ge \frac{g(X_0)}{\alpha_{\mathrm{\ell}}}$. 3. If $E(X_t-X_{t+1} \mid \filt; X_t > a) \ge h(X_t)$ and there exist $\lambda>0$ and a function $\beta_{\mathrm{u}}\colon (a,\xmax]\to\R^+$ such that $E(e^{-\lambda (g(X_t)-g(X_{t+1}))}\mid \filt; X_t >a) \le \beta_{\mathrm{u}}(X_t)$ then $\Prob(T_a \ge t^* \mid X_0) < \left(\prod_{r=0}^{t^*-1} \beta_{\mathrm{u}}(X_r)\right) \cdot e^{\lambda (g(X_0)-g(a))}$ for $t^* > 0$. 4. If $E(X_t-X_{t+1} \mid \filt; X_t > a) \le h(X_t)$ and there exist $\lambda>0$ and a function $\beta_{\mathrm{\ell}}\colon (a,\xmax]\to\R^+$ such that $E(e^{\lambda (g(X_t)-g(X_{t+1}))}\mid \filt; X_t > a) \le \beta_{\mathrm{\ell}}(X_t)$ then $\Prob(T_a < t^* \mid X_0>a) \le \left( \sum_{s=1}^{t^*-1} \prod_{r=0}^{s-1} \beta_{\mathrm{\ell}}(X_r)\right) \cdot e^{-\lambda (g(X_0)-g(a))}$ for $t^* > 0$. If additionally the set of states $S\cap \{x\mid x\le a\}$ is absorbing, then $\Prob(T_a < t^* \mid X_0>a) \le \left(\prod_{r=0}^{t^*-1} \beta_{\mathrm{\ell}}(X_r)\right) \cdot e^{-\lambda (g(X_0)-g(a))}$. #### Special cases of $(iii)$ and $(iv)$. If $E(e^{-\lambda (g(X_t)-g(X_{t+1}))}\mid \filt; X_t > a) \le \beta_{\mathrm{u}}$ for some position-independent $\beta_{\mathrm{u}}$, then Statement $(iii)$ boils down to $\Prob(T_a \ge t^* \mid X_0) < \beta_{\mathrm{u}}^{t^*} \cdot e^{\lambda (g(X_0)-g(a))}$; similarly for Statement $(iv)$. #### On $\pmb{\xmin}$. Some specializations of Theorem \[theo:main\] require a “gap” in the state space between optimal and non-optimal states, modelled by $\xmin>0$. One example is multiplicative drift; see Theorem \[theo:multiplicative-drift\] in Section \[sec:specialcasesother\]. Another example is the process defined by $X_0\sim \text{Unif}[0,1]$ and $X_t=0$ for $t>0$. Its first hitting time of state $0$ cannot be derived by drift arguments since the lower bound on the drift towards the optimum within the interval $[0,1]$ has limit $0$. [Theorem \[theo:main\]]{} The first two items follow from the classical additive drift theorem (Theorem \[theo:additive\]). To prove the third one, we use ideas implicit in [@HajekDrift] and argue $$\begin{aligned} \Prob(T_a\ge t^* \mid X_0) & \;\le\; \Prob(g(X_{t^*})> g(a)\mid X_0) \; =\; \Prob(e^{\lambda g(X_{t^*})} > e^{\lambda g(a)}\mid X_0) \\ & \;<\; E(e^{\lambda g(X_{t^*}) - \lambda g(a)}\mid X_0),\end{aligned}$$ where the first inequality uses that $g(x)$ is non-decreasing, the equality that $x\mapsto e^x$ is a bijection, and the last inequality is Markov’s inequality. Now, $$\begin{aligned} E(e^{\lambda g(X_{t^*})}\mid X_0) & = E(e^{\lambda g(X_{t^*-1})} \cdot E(e^{-\lambda (g(X_{t^*-1})-g(X_{t^*}))}\mid X_0,\dots,X_{t^*-1}) \mid X_0)\\ & = e^{\lambda g(X_0)} \cdot \prod_{r=0}^{t^*-1} E(e^{-\lambda (g(X_{r})-g(X_{r+1}))}\mid X_0,\dots,X_r),\end{aligned}$$ where the last equality follows inductively (note that this does not assume independence of the $g(X_{r})-g(X_{r+1})$). Using the prerequisite from the third item, we get $$E(e^{\lambda g(X_{t^*})}\mid X_0) \le e^{\lambda g(X_0)} \prod_{r=0}^{t^*-1} \beta_{\mathrm{u}}(X_r),$$ altogether $$\Prob(T_a\ge t^*\mid X_0) < e^{\lambda (g(X_0)-g(a))} \prod_{r=0}^{t^*-1} \beta_{\mathrm{u}}(X_r),$$ which proves the third item. The fourth item is proved similarly to the third one.
By a union bound, $$\Prob(T_a<t^*\mid X_0>a) \le \sum_{s=1}^{t^*-1} \Prob(g(X_s) \le g(a)\mid X_0)$$ for $t^*>0$. Moreover, $$\Prob(g(X_s) \le g(a)\mid X_0) = \Prob(e^{-\lambda g(X_s)} \ge e^{-\lambda g(a)}\mid X_0) \le E(e^{-\lambda g(X_s)+\lambda g(a) }\mid X_0)$$ using again Markov’s inequality. By the prerequisites, we get $$E(e^{-\lambda g(X_s) }\mid X_0) \le e^{-\lambda g(X_0)} \prod_{r=0}^{s-1} \beta_{\mathrm{\ell}}(X_r).$$ Altogether, $$\Prob(T_a<t^*) \le \sum_{s=1}^{t^*-1} e^{-\lambda (g(X_0)-g(a))} \prod_{r=0}^{s-1} \beta_{\mathrm{\ell}}(X_r).$$ If furthermore $S\cap \{x\mid x\le a\}$ is absorbing then $T_a<t^*$ is equivalent to $X_{t^*}\le a$. In this case, $$\Prob(T_a<t^*\mid X_0) \le \Prob(g(X_{t^*}) \le g(a)\mid X_0) \le e^{-\lambda (g(X_0)-g(a))} \prod_{r=0}^{t^*-1} \beta_{\mathrm{\ell}}(X_r).$$ Our drift theorem is very general and therefore complicated. In order to apply it, specializations might be welcome, based on assumptions that typically are satisfied. The rest of this section discusses such simplifications; however, we do not yet apply them in this paper. By making some additional assumptions on the function $h$, we get the following special cases. \[lemma:mgf-concavity\] Let $\lambda>0$, and let $h$ be any real-valued, differentiable function. Define $g(x) := \int_{\xmin}^x 1/h(y) \,\mathrm{d}y.$ - If $h'(x)\geq \lambda$ then $f_1(x)=e^{\lambda g(x) }$ is concave. - If $h'(x)\leq \lambda$ then $f_1(x)=e^{\lambda g(x) }$ is convex. - If $h'(x)\geq -\lambda$ then $f_2(x)=e^{-\lambda g(x)}$ is convex. - If $h'(x)\leq -\lambda$ then $f_2(x)=e^{-\lambda g(x)}$ is concave. The second derivative of $f_1$ is $$\begin{aligned} f_1''(x) = \frac{\lambda e^{\lambda g(x)}}{h(x)^2}\cdot (\lambda-h'(x)), \end{aligned}$$ where the first factor is positive. If $h'(x)\geq \lambda$, then $f_1''(x)\leq 0$, and $f_1$ is concave. If $h'(x)\leq \lambda$, then $f_1''(x)\geq 0$, and $f_1$ is convex. Similarly, the second derivative of $f_2$ is $$\begin{aligned} f_2''(x) = \frac{\lambda e^{-\lambda g(x)}}{h(x)^2}\cdot (\lambda+h'(x)), \end{aligned}$$ where the first factor is positive. If $h'(x)\leq -\lambda$, then $f_2''(x)\leq 0$, and $f_2$ is concave. If $h'(x)\geq -\lambda$, then $f_2''(x)\geq 0$, and $f_2$ is convex. Let $(X_t)_{t\ge 0}$ be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin\ge 0$. Let $h\colon [\xmin,\xmax]\to\R^+$ be a differentiable function. Then the following statements hold for the first hitting time $T:=\min\{t\mid X_t=0\}$. 1. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \ge h(X_t)$ and $h'(x)\geq 0$, then $$\begin{aligned} E(T\mid X_0) \le \frac{\xmin}{h(\xmin)} + \int_{\xmin}^{X_0} \frac{1}{h(y)} \,\mathrm{d}y. \end{aligned}$$ 2. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \le h(X_t)$ and $h'(x)\leq 0$, then $$\begin{aligned} E(T\mid X_0) \ge \frac{\xmin}{h(\xmin)} + \int_{\xmin}^{X_0} \frac{1}{h(y)} \,\mathrm{d}y. \end{aligned}$$ 3. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \ge h(X_t)$ and $h'(x)\geq \lambda$ for some $\lambda>0$, then $$\begin{aligned} \Prob(T\ge t \mid X_0) < \exp\left(-\lambda \left(t-\frac{\xmin}{h(\xmin)}-\int_{\xmin}^{X_0} \frac{1}{h(y)}\,\mathrm{d}y\right)\right). \end{aligned}$$ 4. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \le h(X_t)$ and $h'(x)\leq -\lambda$ for some $\lambda>0$, then $$\begin{aligned} \Prob(T < t \mid X_0>0) < \frac{e^{\lambda t}-e^{\lambda}}{e^\lambda-1} \exp\left(-\frac{\lambda\xmin}{h(\xmin)}-\int_{\xmin}^{X_0} \frac{\lambda}{h(y)}\,\mathrm{d}y\right).
\end{aligned}$$ Let $g(x) := \xmin/h(\xmin) + \int_{\xmin}^x 1/h(y) \,\mathrm{d}y$, and note that $g''(x)=-h'(x)/h(x)^2$. For (i), it suffices to show that condition (i) Theorem \[theo:main\] is satisfied for $\alpha_u:=1$. From the assumption $h'(x)\geq 0$, it follows that $g''(x)\leq 0$, hence $g$ is a concave function. Jensen’s inequality therefore implies that $$\begin{aligned} E(g(X_t)-g(X_{t+1}) \mid \filt; X_t\ge \xmin) & \geq g(X_t) - g( E(X_{t+1} \mid \filt; X_t\ge \xmin))\\ & \geq \int_{X_t-h(X_t)}^{X_t}\frac{1}{h(y)}\,\mathrm{d}y\\ & \geq \frac{1}{h(X_t)}\cdot h(X_t) = 1, \end{aligned}$$ where the last inequality holds because $h$ is a non-decreasing function. For (ii), it suffices to show that condition (i) Theorem \[theo:main\] is satisfied for $\alpha_\ell:=1$. From the assumption $h'(x)\leq 0$, it follows that $g''(x)\geq 0$, hence $g$ is a convex function. Jensen’s inequality therefore implies that $$\begin{aligned} E(g(X_t)-g(X_{t+1}) \mid \filt; X_t\ge \xmin) & \leq g(X_t) - g( E(X_{t+1} \mid \filt; X_t\ge \xmin))\\ & \leq \int_{X_t-h(X_t)}^{X_t}\frac{1}{h(y)}\,\mathrm{d}y\\ & \leq \frac{1}{h(X_t)}\cdot h(X_t) = 1, \end{aligned}$$ where the last inequality holds because $h$ is a non-increasing function. For (iii), it suffices to show that condition (iii) of Theorem \[theo:main\] is satisfied for $\beta_u := e^{-\lambda}$. By Lemma \[lemma:mgf-concavity\] and Jensen’s inequality, it holds that $$\begin{aligned} E(e^{-\lambda (g(X_t)-g(X_{t+1}))}\mid \filt; X_t\ge \xmin) \leq e^{-\lambda r} \end{aligned}$$ where $$\begin{aligned} r & := g(X_t)-g(\E{X_{t+1}\mid \filt; X_t\ge \xmin})\\ & \geq \int_{X_t-h(X_t)}^{X_t}\frac{1}{h(y)}\,\mathrm{d}y\\ & > \frac{1}{h(X_t)}\cdot h(X_t)=1, \end{aligned}$$ where the last inequality holds because $h$ is strictly monotonically increasing. For (iv) a), it suffices to show that condition (iv) of Theorem \[theo:main\] is satisfied for $\beta_\ell := e^{\lambda}$. By Lemma \[lemma:mgf-concavity\] and Jensen’s inequality, it holds that $$\begin{aligned} E(e^{\lambda (g(X_t)-g(X_{t+1}))}\mid \filt; X_t\ge \xmin) \leq e^{\lambda r} \end{aligned}$$ where $$\begin{aligned} r & := g(X_t)-g(\E{X_{t+1}\mid \filt; X_t\ge \xmin})\\ & \leq \int_{X_t-h(X_t)}^{X_t}\frac{1}{h(y)}\,\mathrm{d}y\\ & < \frac{1}{h(X_t)}\cdot h(X_t)=1, \end{aligned}$$ where the last inequality holds because $h$ is strictly monotonically decreasing. Variable Drift as Special Case {#sec:specialcasesvariable} ============================== The purpose of this section is to show that known variants of variable drift theorems can be derived from our general Theorem \[theo:main\]. Classical Variable Drift and Fitness Levels ------------------------------------------- A clean form of a variable drift theorem, generalizing previous formulations by [@Johannsen10] and [@MitavskiyVariable], was recently presented by [@RoweSudholtChoice]. We restate their theorem in our notation and carry out two generalizations that are obvious: we allow for a continuous state space instead of demanding a finite one and do not fix $\xmin=1$. \[theo:variable-rowe-sudholt\] Let $(X_t)_{t\ge 0}$, be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin>0$. Let $h(x)$ be an integrable, monotone increasing function on $[\xmin,\xmax]$ such that $E(X_t-X_{t+1} \mid \filt) \ge h(X_t)$ if $X_t\ge \xmin$. 
Then it holds for the first hitting time $T:=\min\{t\mid X_t=0\}$ that $$E(T\mid X_0) \le \frac{\xmin}{h(\xmin)} + \int_{\xmin}^{X_0} \frac{1}{h(x)} \,\mathrm{d}x.$$ Since $h(x)$ is monotone increasing, $1/h(x)$ is decreasing and $g(x)$, defined in Theorem \[theo:main\], is concave. By Jensen’s inequality, we get $$\begin{aligned} & E(g(X_t)-g(X_{t+1}) \mid \filt) \;\ge \; g(X_t) - g(E(X_{t+1}\mid \filt)) \\ & \;=\; \int_{E(X_{t+1}\mid \filt)}^{X_t} \frac{1}{h(x)} \,\mathrm{d}x \;\ge\; \int_{X_t-h(X_t)}^{X_t} \frac{1}{h(x)} \,\mathrm{d}x, \end{aligned}$$ where the equality just expanded $g(x)$. Using that $1/h(x)$ is decreasing, it follows that $$\int^{X_t}_{X_t-h(X_t)} \frac{1}{h(x)} \,\mathrm{d}x \ge \int^{X_t}_{X_t-h(X_t)} \frac{1}{h(X_t)} \,\mathrm{d}x = \frac{h(X_t)}{h(X_t)} = 1.$$ Plugging in $\alpha_{\mathrm{u}}:=1$ in Theorem \[theo:main\] completes the proof. [@RoweSudholtChoice] also pointed out that variable drift theorems in discrete search spaces look very similar to bounds obtained from the fitness level technique (also called the method of $f$-based partitions, first formulated by [@WegenerICALP01]). For the sake of completeness, we present the classical upper bounds by fitness levels for the (1+1) EA here and prove them by drift analysis. Consider the (1+1) EA maximizing some function $f$ and a partition of the search space into non-empty sets $A_1,\dots,A_m$. Assume that the sets form an $f$-based partition, i.e., for $1\le i<j\le m$ and all $x\in A_i$, $y\in A_j$ it holds that $f(x)<f(y)$. Let $p_i$ be a lower bound on the probability that a search point in $A_i$ is mutated into a search point in $A_{i+1}\cup \dots\cup A_m$. Then the expected hitting time of $A_m$ is at most $$\begin{aligned} \sum_{i=1}^{m-1} \frac{1}{p_i}.\end{aligned}$$ At each point of time, the (1+1) EA is in a unique fitness level. Let $Y_t$ be the current fitness level at time $t$. We consider the process defined by $X_t=m-Y_t$. By the definition of fitness levels and the elitist selection of the (1+1) EA, $X_t$ is non-increasing over time. Consider $X_t=k$ for $1\le k\le m-1$. With probability $p_{m-k}$, the $X$-value decreases by at least $1$. Consequently, $\E{X_t-X_{t+1}\mid X_t = k} \ge p_{m-k}$. We define $h(x)=p_{m-\lceil x\rceil}$, $\xmin=1$ and $\xmax=m-1$ and obtain an integrable, monotone increasing function on $[\xmin,\xmax]$. Hence, the upper bound on $E(T\mid X_0)$ from Theorem \[theo:variable-rowe-sudholt\] becomes at most $\frac{1}{p_{1}} + \sum_{i=1}^{m-2} \frac{1}{p_{m-i}}$, which completes the proof. Recently, the fitness-level technique was considerably refined and supplemented by lower bounds [@SudholtTEC13]. We will also identify these extensions as a special case of general drift in Section \[sec:fitness\]. Non-monotone Variable Drift and Lower Bounds by Variable Drift -------------------------------------------------------------- In many applications, a monotone increasing function $h(x)$ bounds the drift from below. For example, the expected progress towards the optimum increases with the distance of the current search point from the optimum. However, @DoerrHotaKoetzingGECCO12 recently found that certain ACO algorithms do not have this property and exhibit a non-monotone drift. To handle this case, they present a generalization of Johannsen’s drift theorem that does not require $h(x)$ to be monotone. The most recent version of this theorem is presented in [@FeldmannKoetzingFOGA13]. Unfortunately, it turned out that the two generalizations suffer from a missing condition relating positive and negative drift to each other.
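The correspondence between the fitness-level bound and the variable-drift integral is easy to verify numerically. The sketch below (with made-up level probabilities $p_i$) evaluates $\xmin/h(\xmin)+\int_{\xmin}^{X_0}1/h(x)\,\mathrm{d}x$ for the step function $h(x)=p_{m-\lceil x\rceil}$ and compares it with $\sum_{i=1}^{m-1} 1/p_i$.

```python
# Numeric check of the fitness-level specialization: with h(x) = p_{m - ceil(x)},
# x_min = 1 and X_0 = m - 1, the variable-drift bound equals sum_i 1/p_i.
# The probabilities p_i below are arbitrary illustrative values.
import math
from scipy.integrate import quad

p = [0.5, 0.3, 0.2, 0.1, 0.05]        # p_1, ..., p_{m-1}
m = len(p) + 1

def h(x):                              # step function h(x) = p_{m - ceil(x)}
    return p[m - math.ceil(x) - 1]     # p is 0-indexed, hence the extra -1

x0 = m - 1                             # start on the worst level
integral, _ = quad(lambda x: 1.0 / h(x), 1, x0, points=list(range(2, m - 1)))
bound = 1.0 / h(1) + integral
print(bound, sum(1.0 / pi for pi in p))   # both values agree: sum_i 1/p_i
```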
Adding the condition and removing an unnecessary assumption (more precisely, the continuity of $h(x)$) the theorem by [@FeldmannKoetzingFOGA13] can be corrected as follows. \[theo:variablenonmonotone\] Let $(X_t)_{t\ge 0}$, be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin>0$. Suppose there exists two functions $h,d\colon [\xmin,\xmax] \to \R^+$, where $h$ is integrable, and a constant $c\ge 1$ such that for all $t\ge 0$ 1. \[it:i\] $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \ge h(X_t)$, 2. \[it:ii\] $\frac{E((X_{t+1} - X_{t}) \cdot \indic{X_{t+1}> X_t}\;\mid\; \filt; X_t\ge \xmin)}{E((X_t - X_{t+1}) \cdot \indic{X_{t+1}<X_t}\;\mid\; \filt; X_t\ge \xmin)} \le \frac{1}{2c^2}$, 3. \[it:iii\] $\lvert X_t-X_{t+1}\rvert\le d(X_t)$ if $X_t\ge \xmin$, 4. \[it:iv\] for all $x,y\ge \xmin$ with $\lvert x-y\rvert \le d(x)$, it holds $h(\min\{x,y\}) \le c h(\max\{x,y\})$. Then it holds for the first hitting time $T:=\min\{t\mid X_t=0\}$ that $$E(T\mid X_0) \le 2c\left(\frac{\xmin}{h(\xmin)} + \int_{\xmin}^{X_0} \frac{1}{h(x)} \,\mathrm{d}x\right).$$ It is worth noting that Theorem \[theo:variable-rowe-sudholt\] is not necessarily a special case of Theorem \[theo:variablenonmonotone\]. Using the definition of $g$ according to Theorem \[theo:main\] and assuming $X_t\ge \xmin$, we compute the drift $$\begin{aligned} & E(g(X_t)-g(X_{t+1}) \mid \filt) \;=\; \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{h(x)} \,\mathrm{d}x\bigm| \filt\right)} \\ & \;=\; \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{h(x)} \,\mathrm{d}x \cdot \indic{X_{t+1}<X_t}\bigm| \filt\right)} - \mathord{E}\mathord{\left(\int_{X_{t}}^{X_{t+1}} \frac{1}{h(x)} \,\mathrm{d}x\cdot\indic{X_{t+1} > X_t}\bigm| \filt\right)}.\end{aligned}$$ Item  from the prerequisites yields $h(x)\le c h(X_t)$ if $X_t-d(X_t)\le x<X_t$ and $h(x)\ge h(X_t)/c$ if $X_t<x\le X_t+d(X_t)$. Using this and $\lvert X_{t}-X_{t+1}\rvert \le d(X_t)$, the drift can be further bounded by $$\begin{aligned} & \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{ch(X_t)} \,\mathrm{d}x \cdot \indic{X_{t+1}<X_t}\bigm| \filt\right)} - \mathord{E}\mathord{\left(\int_{X_t}^{X_{t+1}} \frac{c}{h(X_t)} \, \mathrm{d}x \cdot \indic{X_{t+1}>X_t}\bigm| \filt\right)} \\ & \;\ge\; \mathord{E}\mathord{\left(\int_{X_t}^{X_{t+1}} \frac{1}{2ch(X_t)} \, \mathrm{d}x \cdot \indic{X_{t+1}<X_t}\bigm| \filt\right)} \; = \; \frac{E((X_t-X_{t+1}\bigm| \filt)\cdot \indic{X_{t+1}<X_t}) }{2c h(X_t)}\\ & \;\ge\; \frac{h(X_t)}{2c h(X_t)} \;=\; \frac{1}{2c}, \end{aligned}$$ where the first inquality used the Item  from the prerequisites and the last one Item . Plugging in $\alpha_{\mathrm{u}}:=1/(2c)$ in Theorem \[theo:main\] completes the proof. Finally, so far only a single variant dealing with upper bounds on variable drift and thus lower bounds on the hitting time seems to have been published. It was derived by @DFWVariable. Again, we present a variant without unnecessary assumptions, more precisely we allow continuous state spaces and use less restricted $c(x)$ and $h(x)$. \[theo:variable-dfw\] Let $(X_t)_{t\ge 0}$, be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin>0$. Suppose there exists two functions $c(x)$ and $h(x)$ on $[\xmin,\xmax]$ such that $h(x)$ is monotone increasing and integrable and for all $t\ge 0$, 1. $X_{t+1} \le X_t$, 2. $X_{t+1} \ge c(X_t)$ for $X_t\ge \xmin$, 3. $E(X_t-X_{t+1} \mid \filt) \le h(c(X_t))$ for $X_t\ge \xmin$. 
Then it holds for the first hitting time $T:=\min\{t\mid X_t=0\}$ that $$E(T\mid X_0) \ge \frac{\xmin}{h(\xmin)} + \int_{\xmin}^{X_0} \frac{1}{h(x)} \,\mathrm{d}x.$$ Using the definition of $g$ according to Theorem \[theo:main\], we compute the drift $$\begin{aligned} & E(g(X_t)-g(X_{t+1}) \mid \filt) \;=\; \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{h(x)} \,\mathrm{d}x \mid \filt \right)} \\ & \;\le\; \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{h(c(X_t))} \,\mathrm{d}x \mid \filt\right)},\end{aligned}$$ where we have used that $X_t\ge X_{t+1}\ge c(X_t)$ and that $h(x)$ is monotone increasing. The last expectation equals $$\frac{X_t-E(X_{t+1}\mid \filt)}{h(c(X_t))} \le \frac{h(c(X_t))}{h(c(X_t))} \;=\;1.$$ Plugging in $\alpha_{\mathrm{\ell}}:=1$ in Theorem \[theo:main\] completes the proof. Multiplicative Drift as Special Case {#sec:specialcasesother} ==================================== We continue by showing that Theorem \[theo:main\] can be specialized in order to re-obtain other classical and recent variants of drift theorems. Of course, Theorem \[theo:main\] is a generalization of additive drift (Theorem \[theo:additive\]), which interestingly was used to prove the general theorem itself. The remaining important strand of drift theorems is thus represented by so-called multiplicative drift, which we focus on in this section. The following theorem is the strongest variant of the multiplicative drift theorem (originally introduced by [@DJWMultiplicativeAlgorithmica]), which can be found in [@DoerrGoldbergAdaptive]. In this section, we use a tail bound from our main theorem for the first time (more precisely, the third item in Theorem \[theo:main\]). Note that the multiplicative drift theorem requires $\xmin$ to be positive, i.e., a gap in the state space. Without the gap, no finite first hitting time can be proved from the prerequisites of multiplicative drift. \[theo:multiplicative-drift\] Let $(X_t)_{t\ge 0}$ be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin>0$. Suppose that there exists some $\delta$ with $0<\delta<1$ such that $E(X_t-X_{t+1} \mid \filt) \ge \delta X_t$. Then the following statements hold for the first hitting time $T:=\min\{t\mid X_t=0\}$. 1. $E(T\mid X_0) \le \frac{\ln(X_0/\xmin)+1}{\delta}$. 2. $\Prob(T \ge \frac{\ln(X_0/\xmin) + r}{\delta} \mid X_0)\le e^{-r}$ for all $r> 0$. In fact, our formulation is minimally stronger than the one by Doerr and Goldberg, who prove $\Prob(T > \frac{\ln(X_0/\xmin) + r}{\delta} \mid X_0)\le e^{-r}$. Using the notation from Theorem \[theo:main\], we choose $h(x) = \delta x$ and obtain $E(X_t-X_{t+1} \mid \filt) \ge h(X_t)$ by the prerequisite on multiplicative drift. Moreover, $g(x) = \xmin/(\delta \xmin) + \int_{\xmin}^x 1/(\delta y) \,\mathrm{d}y = 1/\delta + \ln(x/\xmin) / \delta$ for $x\ge \xmin$. Now we proceed as in the proof of Theorem \[theo:variable-rowe-sudholt\]. Since $\ln(x)$ is concave, Jensen’s inequality yields $E(g(X_t)-g(X_{t+1})\mid X_t) \ge g(X_t)-g(E(X_{t+1}\mid X_t)) \ge \ln(X_t/\xmin)/\delta - \ln((1-\delta) X_t/\xmin) / \delta = -\ln(1-\delta)/\delta \ge 1$, where the last inequality used $\ln(1-\delta)\le -\delta$. Hence, using $\alpha_\mathrm{u}=1$ and $g(X_0)\le 1/\delta + \ln(X_0/\xmin)/\delta$ in the first item of Theorem \[theo:main\], we obtain the first claim of this theorem.
To prove the second claim, let $a:=0$ and consider $$\begin{aligned} E(e^{-\delta(g(X_t)-g(X_{t+1}))}\mid \filt; X_t\ge \xmin) & = E(e^{ \ln(X_{t+1}/\xmin) - \ln(X_t/\xmin)})\mid \filt; X_t\ge \xmin) \\ & = E((X_{t+1}/X_t) \mid \filt; X_t\ge \xmin) \le 1-\delta,\end{aligned}$$ Hence, we can choose $\beta_{\mathrm{u}}(X_t)=1-\delta$ for all $X_t\ge \xmin$ and $\lambda=\delta$ in the third item of Theorem \[theo:main\] to obtain $$\Prob(T \ge t^* \mid X_0) < (1-\delta)^{t^*} \cdot e^{\delta (g(X_0)-g(\xmin))} \le e^{-\delta t^*+\ln(X_0/\xmin)}.$$ Now the claim follows by choosing $t^*:=(\ln(X_0/\xmin) + r)/\delta$. Compared to the upper bound, the following lower-bound version includes a condition on the maximum step-wise progress and requires non-increasing sequences. It generalizes the version in [@WittCPC13] and its predecessor in [@LehreWittAlgorithmica12] in that it does not assume $\xmin\ge 1$. \[theo:multdrift-lower\] Let $(X_t)_{t\ge 0}$, be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin>0$. Suppose that there exist $\beta,\delta$, where $0< \beta,\delta\le 1$ such that for all $t\ge 0$ 1. $X_{t+1} \le X_t$, 2. $\Prob(X_t-X_{t+1}\ge \beta X_t) \;\le\; \frac{\beta\delta}{1+\ln (X_t/\xmin)}$. 3. $E(X_t-X_{t+1} \mid \filt) \le \delta X_t$ Then it holds for the first hitting time $T:=\min\{t\mid X_t=0\}$ that $$\begin{aligned} \E{T\mid X_0}\;\ge\; \frac{1+\ln(X_0/\xmin)}{\delta}\cdot \frac{1-\beta}{1+\beta}.\end{aligned}$$ Using the definition of $g$ according to Theorem \[theo:main\], we compute the drift $$\begin{aligned} & E(g(X_t)-g(X_{t+1}) \mid \filt) \;=\; \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{h(x)} \,\mathrm{d}x \mid \filt \right)} \\ & \;\le\; \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{h(x)} \,\mathrm{d}x \mid \filt; X_{t+1}\ge (1-\beta)X_t \right)} \cdot \Prob(X_{t+1}\ge (1-\beta) X_t) \\ & \qquad\qquad + g(X_t) \cdot (1- \Prob(X_{t+1}\ge (1-\beta)X_t))\end{aligned}$$ where we used the law of total probability and $g(X_{t+1}) \ge 0$. As in the proof of Theorem \[theo:multiplicative-drift\], we have $g(x)=(1 + \ln(x/\xmin))/\delta$. Plugging in $h(x)=\delta x$, using the bound on $\Prob(X_{t+1}\ge (1-\beta)X_t)$ and $X_{t+1}\le X_t$, the drift is further bounded by $$\begin{aligned} & \mathord{E}\mathord{\left(\int_{X_{t+1}}^{X_{t}} \frac{1}{\delta (1-\beta)X_t} \,\mathrm{d}x \mid \filt\right)} + \frac{\beta\delta}{1+\ln (X_t/\xmin)} \cdot \frac{1+\ln (X_t/\xmin)}{\delta} \\ & \;=\; \frac{E(X_t-X_{t+1}\mid \filt)}{\delta (1-\beta)X_t} + \beta \;\le\; \frac{\delta X_t}{\delta (1-\beta)X_t} + \beta \;\le\; \frac{1+\beta}{1-\beta},\end{aligned}$$ Using $\alpha_\ell=(1+\beta)/(1-\beta)$ and expanding $g(X_0)$, the proof is complete. Fitness Levels Lower and Upper Bounds as Special Case {#sec:fitness} ===================================================== We pick up the consideration of fitness levels again and prove the following lower-bound theorem due to [@SudholtTEC13] by drift analysis. See Sudholt’s paper for possibly undefined or unknown terms. Consider an algorithm $\mathcal{A}$ and a partition of the search space into non-empty sets $A_1,\dots,A_m$. For a mutation-based EA $\mathcal{A}$ we again say that $\mathcal{A}$ is in $A_i$ or on level $i$ if the best individual created so far is in $A_i$. Let the probability of $\mathcal{A}$ traversing from level $i$ to level $j$ in one step be at most $u_i\cdot \gamma_{i,j}$ and $\sum_{j=i+1}^m \gamma_{i,j}=1$. 
Assume that for all $j>i$ and some $0\le \chi\le 1$ it holds that $$\label{eq:cond-chi-fitness} \gamma_{i,j} \ge \chi \sum_{k=j}^m \gamma_{i,k}.$$ Then the expected hitting time of $A_m$ is at least $$\begin{aligned} & \sum_{i=1}^{m-1} \Prob(\text{$\mathcal{A}$ starts in $A_i$}) \cdot \left(\frac{1}{u_i}+\chi \sum_{j=i+1}^{m-1} \frac{1}{u_j}\right) \\ & \ge \sum_{i=1}^{m-1} \Prob(\text{$\mathcal{A}$ starts in $A_i$}) \cdot \chi \sum_{j=i}^{m-1} \frac{1}{u_j}.\end{aligned}$$ Since $\chi\le 1$, the second lower bound follows immediately from the first one, which we prove in the following. To adopt the perspective of minimization, we say that $\mathcal{A}$ is on distance level $m-i$ if the best individual created so far is in $A_i$. Let $X_t$ be the algorithm’s distance level at time $t$. We define the potential function $g$ mapping distance levels to non-negative numbers (which then form a new stochastic process) by $$g(m-i) = \frac{1}{u_i} + \chi\sum_{j=i+1}^{m-1} \frac{1}{u_j}$$ for $1\le i\le m-1$. Defining $u_m:=\infty$, we extend the function to $g(0)=0$. Our aim is to prove that the drift $$\Delta_{t}(m-i) := E(g(m-i)-g(X_{t+1}) \mid X_t=m-i)$$ is at most $1$. Then the theorem follows immediately using additive drift (Theorem \[theo:additive\]) along with the law of total probability to condition on the starting level. To analyze the drift, consider the case that the distance level decreases from $m-i$ to $m-\ell$, where $\ell > i$. We obtain $$g(m-i) - g(m-\ell) = \frac{1}{u_i}-\frac{1}{u_{\ell}} + \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j},$$ which by the law of total probability (and as the distance level cannot increase) implies $$\begin{aligned} \Delta_{t}(m-i) & = \sum_{\ell=i+1}^{m} u_i\cdot \gamma_{i,\ell} \left(\frac{1}{u_i}-\frac{1}{u_{\ell}} + \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j}\right)\\ & = 1 + u_i \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \left(-\frac{1}{u_{\ell}} + \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j}\right),\end{aligned}$$ where the last equality used $\sum_{\ell=i+1}^m \gamma_{i,\ell}=1$. If we can prove that $$\label{eq:fitness-rearr} \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j} \le \sum_{\ell=i+1}^{m} \gamma_{i,\ell} \cdot \frac{1}{u_{\ell}}$$ then $\Delta_{t}(m-i) \le 1$ follows and the proof is complete. To show this, observe that $$\sum_{\ell=i+1}^{m} \gamma_{i,\ell} \chi \sum_{j=i+1}^{\ell} \frac{1}{u_j} = \sum_{j=i+1}^m \frac{1}{u_j}\cdot \chi\sum_{\ell=j}^{m}\gamma_{i,\ell}$$ since the term $\tfrac{1}{u_j}$ appears for all $\ell=j,\dots,m$ in the outer sum, each time weighted by $\gamma_{i,\ell}\chi$. By the assumption $\gamma_{i,j} \ge \chi \sum_{k=j}^m \gamma_{i,k}$, we have $\chi\sum_{\ell=j}^{m}\gamma_{i,\ell} \le \gamma_{i,j}$, and the claimed inequality follows. We remark here, without going into the details, that the refined upper bound by fitness levels [Theorem 4 in @SudholtTEC13] can also be proved using general drift. Applying the Tail Bounds {#sec:applyingtail} ======================== So far we have mostly derived bounds on the expected first hitting time using Statements $(i)$ and $(ii)$ of our general drift theorem. This section is devoted to applications of the tail bounds (Statements $(iii)$ and $(iv)$). OneMax and Linear Functions --------------------------- In this subsection, we study a classical benchmark problem, namely the (1+1) EA on OneMax. We start by deriving very precise bounds on the expected optimization time and then prove tail bounds. The lower bounds obtained will imply results for a much larger function class. Note that @DFWVariable already proved the following result.
The expected optimization time of the (1+1) EA on OneMax is at most $en\ln n - c_1 n + O(1)$ and at least $en \ln n - c_2n$ for certain constants $c_1,c_2>0$. The constant $c_2$ is not made explicit by @DFWVariable, whereas the constant $c_1$ is stated as $0.369$. However, this value is unfortunately due to a typo in the very last line of their proof – $c_1$ should have been 0.1369 instead. We correct this mistake in a self-contained proof. Furthermore, we improve the lower bound using variable drift. It is worth noting that @DFWVariable used variable drift as well, but overestimated the drift function. Here we use a more precise (upper) bound on the drift. \[lem:bound-drift-onemax\] Let $X_t$ denote the number of zeros of the current search point of the (1+1) EA on OneMax. Then $$\left(1-\frac{1}{n}\right)^{n-x}\frac{x}{n} \le E(X_t-X_{t+1}\mid X_t=x) \le \left(\left(1-\frac{1}{n}\right)\left(1+\frac{x}{(n-1)^2}\right)\right)^{n-x} \frac{x}{n}.$$ The lower bound considers the expected number of flipping zero-bits, assuming that no one-bit flips. The upper bound is obtained in the proof of Lemma 6 in @DFWVariable and denoted by $S_1\cdot S_2$ there, but is not made explicit in the lemma. \[theo:expected-onemax-upper-lower\] The expected optimization time of the (1+1) EA on OneMax is at most $en\ln n - 0.1369n + O(1)$ and at least $en \ln n - 5.9338 n - O(\log n)$. Note that with probability $1-2^{-\Omega(n)}$ we have $\tfrac{(1-\epsilon)n}{2} \le X_0 \le \tfrac{(1+\epsilon)n}{2}$ for an arbitrary constant $\epsilon>0$. Hereinafter, we assume this event to happen, which only adds an error term of absolute value $2^{-\Omega(n)}\cdot n\log n=2^{-\Omega(n)}$ to the expected optimization time. In order to apply the variable drift theorem (more precisely, Theorem \[theo:variable-rowe-sudholt\] for the upper and Theorem \[theo:variable-dfw\] for the lower bound), we manipulate and estimate the expressions from Lemma \[lem:bound-drift-onemax\] to make them easy to integrate. To prove the upper bound on the optimization time, we observe $$\begin{aligned} E(X_t-X_{t+1} \mid X_t=x) & \;\ge\; \left(1-\frac{1}{n}\right)^{n-x}\frac{x}{n} \\ & \;=\; \left(1-\frac{1}{n}\right)^{n-1} \cdot \left(1-\frac{1}{n}\right)^{-x} \cdot \frac{x}{n} \cdot \left(1-\frac{1}{n}\right) \\ & \;\ge\; e^{-1+\tfrac{x}{n}} \cdot \frac{x}{n} \cdot \left(1-\frac{1}{n}\right) =:h_\ell(x).\end{aligned}$$ Now, by the variable drift theorem, the optimization time $T$ satisfies $$\begin{aligned} \E{T\mid X_0} & \le \frac{1}{h_\ell(1)} + \int_{1}^{(1+\epsilon)n/2} \frac{1}{h_\ell(x)} \,\mathrm{d}x \le \left(en + \int_{1}^{(1+\epsilon)n/2} e^{1-\frac{x}{n}} \cdot \frac{n}{x} \,\mathrm{d}x \right)\left(1-\frac{1}{n}\right)^{-1}\\ & \le \left(en - en \left[E_1(x/n)\right]_{1}^{(1+\epsilon)n/2}\right)\left(1+\bigO{\frac{1}{n}}\right),\end{aligned}$$ where $E_1(x):=\int_{x}^\infty \frac{e^{-t}}{t}\,\mathrm{d}t$ denotes the exponential integral (for $x>0$). The latter is estimated using the series representation $E_1(x)=-\ln x - \gamma - \sum_{k=1}^{\infty} \tfrac{(-x)^{k} }{k\cdot k!}$, with $\gamma = 0.577\dots$ being the Euler-Mascheroni constant [see @AbramotitzStegun Equation 5.1.11]. We get for sufficiently small $\epsilon$ that $$-\left[E_1(x/n)\right]_{1}^{(1+\epsilon)n/2} = E_1(1/n) -E_1((1+\epsilon)/2) \le -\ln (1/n) - \gamma + O(1/n) - 0.559774.$$ Altogether, $$\E{T\mid X_0} \le en\ln n + en(1-0.559774 -\gamma) + O(\log n) \le en\ln n - 0.1369n + O(\log n),$$ which proves the upper bound.
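The numerical constants appearing in this bound are easy to reproduce independently; the following sketch evaluates the exponential integral with SciPy and recovers $E_1(1/2)\approx 0.559774$ and $1-E_1(1/2)-\gamma\approx -0.1369$.

```python
# Reproducing the constants used above: E_1(1/2) and 1 - E_1(1/2) - gamma.
# scipy.special.exp1 is the exponential integral E_1; numpy provides the
# Euler-Mascheroni constant as np.euler_gamma.
import numpy as np
from scipy.special import exp1

e1_half = exp1(0.5)
print(e1_half)                          # ~0.559774
print(1.0 - e1_half - np.euler_gamma)   # ~-0.13699, matching the -0.1369n term
```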
For the lower bound on the optimization time, we need according to Theorem \[theo:variable-dfw\] a monotone process (which is satisfied) and a function $c$ bounding the progress towards the optimum. We use $c(x)=x-\log x-1$. Since each bit flips with probability $1/n$, we get $$\begin{aligned} \Prob(X_{t+1} \le X_t - \log(X_t)-1) & \;\le\; \binom{X_t}{\log(X_t)+1} \left(\frac{1}{n}\right)^{\log(X_t)+1} \\ & \;\le\; \left(\frac{eX_t}{n\log(X_t)+n}\right)^{\log(X_t)+1}.\end{aligned}$$ The last bound takes its maximum at $X_t=2$ within the interval $[2,\dots,n]$ and is $O(n^2)$ then. For $X_t=1$, we trivially have $X_{t+1} \ge c(X_t)=0$. Hence, by assuming $X_{t+1}\ge c(X_t)$ for all $t=O(n\log n)$, we only introduce an additive error of value $O(\log n)$. Next the upper bound on the drift from Lemma \[lem:bound-drift-onemax\] is manipulated. We get for some sufficiently large constant $c^*>0$ that $$\begin{aligned} E(X_t-X_{t+1} \mid X_t=x) & \le \left(\left(1-\frac{1}{n}\right)\left(1+\frac{x}{(n-1)^2}\right)\right)^{n-x} \cdot \frac{x}{n} \\ & \le e^{-1+\frac{x}{n}+\frac{x(n-x)}{n^2}} \cdot \frac{x}{n} \cdot \left(\frac{1+x/(n-1)^2}{1+x/(n^2)}\right)^{n-x} \\ & \le e^{-1+\frac{2x}{n}} \cdot \frac{x}{n} \cdot \left(1+{\frac{c^*}{n}}\right) =: h^*(x),\end{aligned}$$ where we used $1+x\le e^x$ twice. The drift theorem requires a function $h_{\mathrm{u}}(x)$ such that $h^*(x)\le h_\mathrm{u}(c(x)) = h_{\mathrm{u}}(x-\log x-1)$. Introducing the substitution $y:=y(x):=x-\log x-1$ and its inverse function $x(y)$, we choose $h_{\mathrm{u}}(y):=h^*(x(y))$. We obtain $$\begin{aligned} & \E{T\mid X_0} \ge \left(\frac{1}{h^*(x(1))} + \int_{1}^{(1-\epsilon)n/2} \frac{1}{h^*(x(y))} \,\mathrm{d}y \right) \left(1-\bigO{\frac{1}{n}}\right) \\ & \ge \left(\frac{1}{h^*(2)} + \int_{x(1)}^{x((1-\epsilon)n/2)} \frac{1}{h^*(x)} \left(1-\frac{1}{x}\right) \,\mathrm{d}x \right) \left(1-\bigO{\frac{1}{n}}\right)\\ & \ge \left(\frac{en}{2} + \int_{2}^{(1-\epsilon)n/2} e^{1-\frac{2x}{n}} \cdot \frac{n}{x} \left(1-\frac{1}{x}\right) \,\mathrm{d}x \right) \left(1-\bigO{\frac{1}{n}}\right)\\ & = \left(\frac{en}{2} + \int_{2}^{(1-\epsilon)n/2} e^{1-\frac{2x}{n}} \cdot \frac{n}{x} \,\mathrm{d}x - \int_{2}^{(1-\epsilon)n/2} e^{1-\frac{2x}{n}} \cdot \frac{n}{x^2} \,\mathrm{d}x \right) \left(1-\bigO{\frac{1}{n}}\right)\end{aligned}$$ where the second inequality uses integration by substitution and $x(1)=2$, the third one $x(y)\le y$, and the last one partial integration. With respect to the first integral in the last bound, the only difference compared to the upper bound is the $2$ in the exponent of $e^{-1+\frac{2x}{n}}$, such that we can proceed analogously to the above and obtain $-en E_1(2x/n)+C$ as anti-derivative. The anti-derivative of the second integral is $2e E_1(2x/n) - e^{1-2x/n} \tfrac{n}{x} + C$. We obtain $$\begin{aligned} & \E{T\mid X_0} \ge \left(\frac{en}{2} + \left[-(2e+en)E_1(2x/n)+e^{1-2x/n} \frac{n}{x}\right]_{2}^{(1-\epsilon)n/2}\right)\left(1-\bigO{\frac{1}{n}}\right) \end{aligned}$$ Now, for sufficiently small $\epsilon$, $$-\left[E_1(2x/n)\right]_{2}^{(1-\epsilon)n/2} \ge -\ln (4/n) - \gamma - O(1/n) - 0.21939 \ge \ln n -2.18291 - O(1/n)$$ and $$\left[e^{1-2x/n} \frac{n}{x}\right]_{2}^{(1-\epsilon)n/2} \ge 1.9999 - \frac{en}{2} - O(1/n).$$ Altogether, $$\E{T\mid X_0} \ge en\ln n - 5.9338n - O(\log n)$$ as suggested. Knowing the expected optimization time very precisely, we now can derive sharp bounds. 
Note that the following upper concentration inequality in Theorem \[theo:tails-onemax\] is not new but is already implicit in the work on multiplicative drift analysis by [@DJWMultiplicativeAlgorithmica]. In fact, a very similar upper bound is even available for all linear functions [@WittCPC13]. By constrast, the lower concentration inequality is a novel non-trivial result. \[theo:tails-onemax\] The optimization time of the on is at least $en\ln n - c n - ren$, where $c$ is a constant, with probability at least $1-e^{-r/2}$ for any $r\ge 0$. It is at most $en\ln n + r e n$ with probability at least $1-e^{-r}$. [Theorem \[theo:tails-onemax\], upper tail]{} The upper tail can be easily derived from the multiplicative drift theorem with tail bounds (Theorem \[theo:multiplicative-drift\]). Let $X_t$ denote the number of zeros at time $t$. By Lemma \[lem:bound-drift-onemax\], we can choose $\delta:=1/(en)$. Then the upper bound follows since $X_0\le n$ and $\xmin=1$. We are left with the lower tail. The aim is to prove it using Theorem \[theo:main\].$(iv)$, which includes a bound on the moment-generating function of the drift of $g$. We first set up the $h$ (and thereby the $g$) used for our purposes. Obviously, $\xmin:=1$. \[lem:lowertailonemax-one\] Consider the on and let the random variable $X_t$ denote the current number of zeros at time $t\ge 0$. Then $$h(x) := e^{-1+\frac{2\lceil x\rceil }{n}} \cdot \frac{\lceil x\rceil}{n} \cdot \left(1+{\frac{c^*}{n}}\right),$$ where $c^*>0$ is a sufficiently large constant, satisfies the condition $E(X_t-X_{t+1} \mid X_t=i) \le h(i)$ for $i\in\{1,\dots,n\}$. Moreover, define $g(i):=\xmin/h(\xmin) + \int_{\xmin}^i 1/h(y)\,\mathrm{d}{y}$ and $\Delta_t:=g(X_t)-g(X_{t+1})$. Then $g(i) = \sum_{j=1}^i 1/h(j)$ and $$\Delta_t \le \sum_{j=X_{t+1}+1}^{X_t} \frac{e^{1-2X_{t+1}/n} \cdot n}{j}.$$ According to Lemma \[lem:bound-drift-onemax\], $h^*(x):=((1-\tfrac{1}{n})(1+\tfrac{x}{(n-1)^2})^{n-x} \frac{x}{n}$ is an upper bound on the drift. We obtain $h(x)\ge h^*(x)$ using the simple estimations exposed in the proof of Theorem \[theo:expected-onemax-upper-lower\], lower bound part. The representation of $g(i)$ as a sum follows immediately from $h$ due to the ceilings. The bound on $\Delta_t$ follows from $h$ by estimating $e^{-1+\frac{2\lceil x\rceil }{n}} \cdot \left(1+{\frac{c^*}{n}}\right)\ge e^{-1+2x/n}$. The next lemma provides a bound on the moment-generating function (mgf.) of the drift of $g$, depending on the current state. Note that we do not need the whole filtration based on $X_0,\dots,X_t$ but only $X_t$ since we are dealing with a Markov chain. \[lem:mgf-onemax\] Let $\lambda:=1/(en)$ and $i\in\{1,\dots,n\}$. Then $$E(e^{\lambda \Delta_t}\mid X_t=i) \;\le\; 1+ \lambda + \frac{2\lambda}{ i} + \littleo{\frac{\lambda}{\log n}}.$$ We distinguish between three major cases. **Case 1:** $i=1$. Then $X_{t+1}=0$, implying $\Delta_t\le en$, with probability at most $(1/n)(1-1/n)^{n-1} = (1/(en))(1+1/(n-1))$ and $X_{t+1}=i$ otherwise. We get $$\begin{aligned} E(e^{\lambda \Delta_t}\mid X_t=i) & \;\le\; \frac{1}{en} \cdot e^1 + \left(1-\frac{1}{en}\right) + \bigO{\frac{1}{n^2}} \\ & \;\le\; 1 + \frac{e-1}{en} + \bigO{\frac{1}{n^2}} \;\le\; 1 + \lambda + \frac{(e-2)\lambda}{i} + \littleo{\frac{\lambda}{\ln n}}.\end{aligned}$$ **Case 2:** $2\le i\le \ln^3 n$. Let $Y:=i-X_{t+1}$ and note that $\Prob(Y \ge 2) \le (\ln^6 n)/n^2$ since the probability of flipping a zero-bit is at most $(\ln^3 n)/n$. We further subdivide the case according to whether $Y\ge 2$ or not. 
**Case 2a:** $2\le i\le \ln^3 n$ and $Y\ge 2$. The largest value of $\Delta_t$ is taken when $Y=i$. Using Lemma \[lem:lowertailonemax-one\] and estimating the $i$-th Harmonic number, we have $\lambda\Delta_t \le (\ln i) + 1 \le 3(\ln\ln n) +1$. The contribution to the mgf. is bounded by $$E(e^{\lambda \Delta_t}\cdot\indic{X_{t+1}\le i-2}\mid X_t=i) \;\le\; e^{3\ln\ln n + 1} \cdot \left(\frac{\ln^6 n}{n^2} \right) = \littleo{\frac{\lambda}{\ln n}}.$$ **Case 2b:** $2\le i\le \ln^3 n$ and $Y<2$. Then $X_{t+1}\ge X_t-1$, which implies $\Delta_t \le en(\ln(X_t)-\ln(X_{t+1}))$. We obtain $$\begin{aligned} & E(e^{\lambda \Delta_t}\cdot\indic{X_{t+1}\ge i-1}\mid X_t=i) \;\le\; E(e^{\ln (i/X_{t+1})}) \;\le\; E(e^{\ln(1+\frac{i-X_{t+1}}{i-1})}) \\ & \;=\; \expec{1+\frac{Y}{i-1}},\end{aligned}$$ where the first inequality estimated $\sum_{i=j+1}^k \tfrac{1}{i}\le \ln(k/j)$ and the second one used $X_{t+1}\ge i-1$. From Lemma \[lem:bound-drift-onemax\], we get $E(Y) \le \frac{i}{en}(1+O((\ln^3 n)/n))$ for $i\le \ln^3 n$. This implies $$\begin{aligned} & \expec{1+\frac{i-X_{t+1}}{i-1}} \le 1 + \frac{i}{en(i-1)} \left(1+\bigO{\frac{\ln^3 n}{n}}\right) \\ & = 1 + \frac{1}{en}\cdot \left(1+\frac{1}{i-1}\right) \left(1+\bigO{\frac{\ln^3 n}{n}}\right) = 1 + \lambda + \frac{2\lambda}{i} + \littleo{\frac{\lambda}{\ln n}},\end{aligned}$$ using $i/(i-1)\le 2$ in the last step. Adding the bounds from the two subcases proves the lemma in Case 2. **Case 3:** $i>\ln^3 n$. Note that $\Prob(Y\ge \ln n)\le \binom{n}{\ln n}\left(\tfrac{1}{n}\right)^{\ln n} \le 1/(\ln n)!$. We further subdivide the case according to whether $Y\ge \ln n$ or not. **Case 3a:** $i>\ln^3 n$ and $Y\ge \ln n$. Since $\Delta_t\le en (\ln n+1)$, we get $$E(e^{\lambda \Delta_t}\cdot \indic{X_{t+1}\le i-\ln^3 n}\mid X_t = i) \le \frac{1}{(\ln n)!} \cdot e^{\ln n+1} = \littleo{\frac{\lambda}{\ln n}}$$ **Case 3b:** $i>\ln^3 n$ and $Y< \ln n$. Then, using Lemma \[lem:lowertailonemax-one\] and proceeding similarly as in Case 2b, $$\begin{aligned} & E(e^{\lambda \Delta_t}\cdot \indic{X_{t+1}> i-\ln n}\mid X_t = i) \\ & \le E(e^{\lambda \exp(1-2(i-\ln n)/n) \cdot n \ln(i/X_{t+1}) } \mid X_t = i) = \expec{\left(1+\frac{i-X_{t+1}}{X_{t+1}}\right)^{\exp((-2i+\ln n)/n)}}.\end{aligned}$$ Using $i>\ln^3 n$ and Jensen’s inequality, the last expectation is at most $$\begin{aligned} & \left(1+\expec{\frac{i-X_{t+1}}{X_{t+1}}}\right)^{\exp((-2i+\ln n)/n)} \le \left(1 + \E{\frac{Y}{i-\ln n}}\right)^{\exp((-2i+\ln n)/n)} \\ & \le \left(1 + \E{\frac{Y}{i(1-1/\!\ln^2 n)}}\right)^{\exp((-2i+\ln n)/n)},\end{aligned}$$ where the last inequality used again $i>\ln^3 n$. Since $E(Y)\le e^{-1+2i/n} \frac{i}{n} (1+c^*/n)$, we conclude $$\begin{aligned} & E(e^{\lambda \Delta_t}\cdot \indic{X_{t+1}> i-\ln n}\mid X_t = i) \le \left(1 + \frac{e^{2i/n}}{en(1-1/\!\ln^2 n)}\right)^{\exp((-2i+\ln n)/n)}\\ & \le \left(1 + \frac{1}{en(1-1/\!\ln^2 n)}\right) \left(1+\bigO{\frac{\ln n}{n^2}}\right) \le 1 + \lambda + \littleo{\frac{\lambda}{\ln n}},\end{aligned}$$ where we used $(1+ax)^{1/a} \le 1+x$ for $x\ge 0$ and $a\ge 1$. Adding up the bounds from the two subcases, we have proved the lemma in Case 3. Altogether, $$E(e^{\lambda \Delta_t}\mid X_t=i) \le 1 + \lambda + \frac{2\lambda}{i} + \littleo{\frac{\lambda}{\ln n}}.$$ for all $i\in\{1,\dots,n\}$. The bound on the mgf. of $\Delta_t$ derived in the previous Lemma \[lem:mgf-onemax\] is particularly large for $i=O(1)$, , if the current state number $X_t$ is small. 
If $X_t=O(1)$ held during the whole optimization process, we could not prove the lower tail in Theorem \[theo:tails-onemax\] from the lemma. However, it is easy to see that $X_t=i$ only holds for an expected number of at most $en/i$ steps. Hence, most of the time during the optimization the term $2\lambda/i$ from the lemma is negligible, and the position-dependent $\beta_{\mathrm{\ell}}(X_t)$-term from Theorem \[theo:main\].$(iv)$ comes into play. We make this precise in the following final proof, where we iteratively bound the probability of the process being at “small” states. [Theorem \[theo:tails-onemax\], lower tail]{} With overwhelming probability $1-2^{-\Omega(n)}$, $X_0\ge (1-\epsilon)n/2$ for an arbitrarily small constant $\epsilon>0$, which we assume to happen. We consider phases in the optimization process. Phase $1$ starts with initialization and ends before the first step where $X_t< e^{\frac{\ln n - 1}{2}}=\sqrt{n}\cdot e^{-1/2}$. Phase $i$, where $i>1$, follows Phase $i-1$ and ends before the first step where $X_t< \sqrt{n}\cdot e^{-i/2}$. Obviously, the optimum is not found before the end of Phase $\ln(n)$; however, this does not tell us anything about the optimization time yet. We say that Phase $i$ is *typical* if it does not end before time $eni-1$. We will prove inductively that the probability of one of the first $i$ phases not being typical is at most $c'e^{\frac{i}{2}}/\!\sqrt{n} = c'e^{\frac{i-\ln n}{2}}$ for some constant $c'>0$. This implies the theorem since an optimization time of at least $en\ln n - cn - ren$ is implied by the event that Phase $\ln n -\lceil r-c/e\rceil$ is typical, which has probability at least $1-c'e^{\frac{-r+c/e+1}{2}} = 1-e^{\frac{-r}{2}}$ for $c=e(2\ln c'+1)$. Fix some $k>1$ and assume for the moment that all phases up to and including Phase $k-1$ are typical. Then for $1\le i\le k-1$, we have $X_t\ge \sqrt{n}e^{-i/2}$ in Phase $i$, , when $en(i-1)\le t\le eni-1$. We analyze the event that additionally Phase $k$ is typical, which subsumes the event $X_t\ge \sqrt{n}e^{-k/2}$ throughout Phase $k$. According to Lemma \[lem:mgf-onemax\], we get $$\expec{e^{\lambda \Delta_t} \mid X_t} \le 1+\lambda + \frac{2\lambda e^{i/2}}{\sqrt{n}} + \littleo{\frac{\lambda}{\ln n}} = e^{\lambda + \frac{2\lambda e^{i/2}}{\sqrt{n}} + \littleo{\frac{\lambda}{\ln n}}}$$ in Phase $i$, where $1\le i\le k$, and therefore for $\lambda:=\frac{1}{en}$ $$\prod_{t=0}^{enk-1} \expec{e^{\lambda \Delta_t} \mid X_t} \le e^{\lambda enk + \frac{2\lambda en}{\sqrt{n}}\sum_{i=1}^k e^{i/2} + enk \cdot \littleo{\frac{\lambda}{\ln n}}} \le e^{k + \frac{6e^{k/2}}{n\sqrt{n}} + \littleo{1}} \le e^{k+o(1)},$$ where we used that $k\le \ln n$. From Theorem \[theo:main\].$(iv)$ for $a=\sqrt{n}e^{-k/2}$ and $t^*=enk-1$ we obtain $$\Prob(T_a < t^*) \le e^{k+o(1) - \lambda (g(X_0) - g(e^{-k/2}/\sqrt{n}))}.$$ From the proof of of Theorem \[theo:expected-onemax-upper-lower\], lower bound part, we already know that $g(X_0)\ge en\ln n -c''n$ for some constant $c''>0$ (which is assumed large enough to subsume the $-O(\log n)$ term). Moreover, $g(x)\le en (\ln x+1)$ according to Lemma \[lem:lowertailonemax-one\]. We get $$\Prob(T_a < t^*) \le e^{k+o(1) - \ln n + O(1) - k /2 + (\ln n)/2} = e^{\frac{k-\ln n +O(1)}{2}} = c'''\frac{e^{k/2}}{\sqrt{n}},$$ for some sufficiently large constant $c'''>0$, which proves the desired bound on the probability of Phase $k$ not being typical (without making statements about the earlier phases). 
The probability that all phases up to and including Phase $k$ are typical is then at least $1-(\sum_{i=1}^k c''' e^{i/2})/\!\sqrt{n} \ge 1-c' e^{k/2}/\!\sqrt{n}$ for an appropriate constant $c'>0$. This completes the proof. Finally, we deduce a concentration inequality for linear functions, i.e., functions of the kind $f(x_1,\dots,x_n)=w_1 x_1 + \dots + w_n x_n$, where $w_i\neq 0$. This class of functions contains OneMax and has been the subject of intense research for the last 15 years. The optimization time of the (1+1) EA on an arbitrary linear function with non-zero weights is at least $en\ln n - c n - ren$, where $c$ is a constant, with probability at least $1-e^{-r/2}$ for any $r\ge 0$. It is at most $en\ln n + (1+r) e n + O(1)$ with probability at least $1-e^{-r}$. The upper tail is proved in Theorem 5.1 in [@WittCPC13]. The lower bound follows from the lower tail in Theorem \[theo:tails-onemax\] in conjunction with the fact that the optimization time within the class of linear functions is stochastically smallest for OneMax [Theorem 6.2 in @WittCPC13]. Simplifications of the Tail Bounds ---------------------------------- The third and fourth conditions of Theorem \[theo:main\] involve a moment-generating function, which may be tedious to compute. Greatly inspired by [@HajekDrift] and [@Lehre12DriftTutorial], we show that bounds on the moment-generating function follow from more user-friendly conditions. They are based on stochastic dominance of random variables, which is represented by the symbol $\prec$ in the following theorem. \[theo:main-simplifiedexponential\] Let $(X_t)_{t\ge 0}$ be a stochastic process over some state space $S\subseteq \{0\}\cup [\xmin,\xmax]$, where $\xmin\ge 0$. Let $h\colon [\xmin,\xmax]\to\R^+$ be an integrable function. Suppose there exist a random variable $Z$ and some $\lambda>0$ such that $\lvert \int_{X_{t+1}}^{X_t} 1/h(x)\,\mathrm{d}x\rvert \prec Z$ for $X_{t}\ge \xmin$ and $E(e^{\lambda Z}) = D$. Then the following two statements hold for the first hitting time $T:=\min\{t\mid X_t=0\}$. 1. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \ge h(X_t)$ then for any $\delta>0$, $\eta:=\min\{\lambda, \delta\lambda^2/(D-1-\lambda)\}$ and $t^*>0$ it holds $$\Prob(T\ge t^* \mid X_0) \le e^{\eta (\int_{\xmin}^{X_0} 1/h(x) \,\mathrm{d}x-(1-\delta)t^*)}.$$ 2. If $E(X_t-X_{t+1} \mid \filt; X_t\ge \xmin) \le h(X_t)$ then for any $\delta>0$, $\eta:=\min\{\lambda, \delta\lambda^2/(D-1-\lambda)\}$ and $t^*>0$ it holds $$\Prob(T < t^* \mid X_0) \le \frac{e^{\eta ((1+\delta)t^* - \int_{\xmin}^{X_0} 1/h(x) \,\mathrm{d}x)}}{\eta(1+\delta)}.$$ Furthermore, if state $0$ is absorbing then $$\Prob(T < t^* \mid X_0) \le e^{\eta ((1+\delta)t^* - \int_{\xmin}^{X_0} 1/h(x) \,\mathrm{d}x)}$$ #### Stochastic dominance. Theorem \[theo:main-simplifiedexponential\] assumes a stochastic dominance of the kind $\lvert \int_{X_{t+1}}^{X_t} 1/h(x)\,\mathrm{d}x\rvert \prec Z$. This is implied by $\lvert X_{t+1}-{X_t}\rvert (1/\!\inf_{x\ge \xmin} h(x)) \prec Z$. As in Theorem \[theo:main\], define $g\colon \{0\}\cup [\xmin,\xmax]\to \R^{\ge 0}$ by $g(x) := \frac{\xmin}{h(\xmin)} + \int_{\xmin}^x \frac{1}{h(y)} \,\mathrm{d}y$ for $x\ge \xmin$ and $g(0):=0$. Let $\Delta_t:=g(X_t)-g(X_{t+1})$ and note that $\Delta_t = \int_{X_{t+1}}^{X_t} \frac{1}{h(x)}\,\mathrm{d}x$.
To satisfy the third condition of Theorem \[theo:main\], we note $$\begin{aligned} E(e^{-\eta \Delta_t}) & \le 1-\eta E(\Delta_t) + \sum_{k=2}^\infty \frac{\eta^k E(\lvert\Delta_t\rvert^k)}{k!} = 1-\eta E(\Delta_t) + \eta^2 \sum_{k=2}^\infty \frac{\eta^{k-2} E(\lvert \Delta_t\rvert ^k)}{k!}\\ & \le 1-\eta E(\Delta_t) + \eta^2 \sum_{k=2}^\infty \frac{\lambda^{k-2} E(\lvert \Delta_t\rvert ^k)}{k!} \le 1-\eta + \frac{\eta^2}{\lambda^2} (E(e^{\lambda Z})-\lambda E(Z)-1),\end{aligned}$$ where we have used $E(\Delta_t)\ge 1$ (proved in Theorem \[theo:main\]), $\lambda\ge \eta$ and the dominance $\lvert\Delta_t\rvert\prec Z$. Since $\lvert\Delta_t\rvert \prec Z$, also $E(Z)\ge 1$. Using $E(e^{\lambda Z}) = D$ and $\eta\le \delta\lambda^2/(D-1-\lambda)$, we obtain $$E(e^{-\eta \Delta_t}) \le 1- \eta + \delta \eta = 1- (1-\delta) \eta \le e^{-\eta (1-\delta)}.$$ Setting $\beta_\mathrm{u}:=e^{-\eta(1-\delta)}$ and using $\eta$ as the $\lambda$ of Theorem \[theo:main\] proves the first statement. For the second statement, analogous calculations prove $$E(e^{\eta \Delta_t}) \le 1 + (1+\delta)\eta \le e^{\eta (1+\delta)}.$$ We set $\beta_\mathrm{\ell}:=e^{\eta(1+\delta)}$, use $\eta$ as the $\lambda$ of Theorem \[theo:main\].$(iv)$ and note that $$\frac{e^{\lambda(1+\delta) t}-e^{\lambda(1+\delta)}}{ e^{\lambda(1+\delta)} -1 } \le \frac{e^{\lambda(1+\delta) t}}{\lambda(1+\delta)},$$ which was to be proven. If additionally an absorbing state $0$ is assumed, the stronger upper bound follows from the corresponding statement in Theorem \[theo:main\].$(iv)$. LeadingOnes ----------- [@DJWZGECCO13] have proved tail bounds on the optimization time of the (1+1) EA on LeadingOnes. Their result represents a fundamentally new contribution, but suffers from the fact that it depends on a very specific structure and a closed formula for the optimization time. Using Theorem \[theo:main-simplifiedexponential\], we will prove similarly strong tail bounds without needing this exact formula. As in [@DJWZGECCO13], we are actually interested in a more general statement. Let $T(a)$ denote the number of steps until an $\LO$-value of at least $a$ is reached, where $0\le a\le n$. A key observation is that the drift of the $\LO$-value can be determined exactly. Here and hereinafter, let $X_t:=\max\{0,a-\LO(x_t)\}$ denote the distance in $\LO$-value of the search point at time $t$ from the target $a$. \[lem:drift-lo\] For all $i>0$, $\E{X_{t}-X_{t+1}\mid X_t=i} = (2-2^{-n+a-i+1}) \cdot (1-1/n)^{a-i} \cdot (1/n)$. (This is taken from [@DJWZGECCO13]). The leftmost zero-bit is at position $a-i+1$. To increase the $\LO$-value (it cannot decrease), it is necessary to flip this bit and not to flip the first $a-i$ bits, which is reflected by the last two terms in the lemma. The first term is due to the expected number of free-rider bits. Note that there can be between $0$ and $n-a+i-1$ such bits. By the usual argumentation using a geometric distribution, the expected number of free riders in an improving step equals $$\sum_{k=0}^{n-a+i-1} k\cdot \left(\frac{1}{2}\right)^{\min\{n-a+i-1,k+1\}} = 1- 2^{-n+a-i+1},$$ hence the expected progress in an improving step is $2-2^{-n+a-i+1}$. We can now supply the tail bounds. Let $T(a)$ be the time for the (1+1) EA to reach an $\LO$-value of at least $a$. Moreover, let $r\ge 0$. Then 1. $E(T(a)) = \frac{n^2-n}{2}\left(\left(1+\frac{1}{n-1}\right)^a-1\right)$. 2. For $0<a\le n-\log n$, we have $$T(a) \le \frac{n^2}{2}\left(\left(1+\frac{1}{n-1}\right)^a-1\right) + r$$ with probability at least $1-e^{-\Omega(rn^{-3/2})}$. 3.
For $\log^2 n-1\le a\le n$, we have $$T(a) \ge \frac{n^2-n}{2}\left(\left(1+\frac{1}{n-1}\right)^{a}-1-\frac{2\log^2 n}{n}\right) - r$$ with probability at least $1-e^{-\Omega(rn^{-3/2})}-e^{-\Omega(\log^2 n)}$. The first statement is already contained in [@DJWZGECCO13] and proved without drift analysis. We now turn to the second statement. From Lemma \[lem:drift-lo\], $h(x) = (2-2/n)(1-1/n)^{a-x}/n$ is a lower bound on the drift $E(X_t-X_{t+1}\mid X_t=x)$ if $x\ge \log n$. To bound the change of the $g$-function, we observe that $h(x)\ge 1/(en)$ for all $x\ge 1$. This means that $X_{t}-X_{t+1} = k$ implies $g(X_t)-g(X_{t+1}) \le enk$. Moreover, to change the $\LO$-value by $k$, it is necessary that - the first zero-bit flips (which has probability $1/n$) - $k-1$ free-riders occur. The change only gets stochastically larger if we assume an infinite supply of free-riders. Hence, $g(X_t)-g(X_{t+1})$ is stochastically dominated by a random variable $Z=en Y$, where $Y$ - is $0$ with probability $1-1/n$ and - follows the geometric distribution with parameter $1/2$ otherwise (where the support is $1,2,\dots$). The mgf. of $Y$ therefore equals $$E(e^{\lambda Y}) = \left(1-\frac{1}{n}\right)e^0 + \frac{1}{n}\frac{1/2}{e^{-\lambda}-(1-1/2)} \le 1+\frac{1}{n(1-2\lambda)},$$ where we have used $e^{-\lambda}\ge 1-\lambda$. For the mgf. of $Z$ it follows $$E(e^{\lambda Z}) = E(e^{\lambda en Y}) \le 1 + \frac{1}{n(1-2en\lambda)},$$ hence for $\lambda:=1/(4en)$ we get $D:=E(e^{\lambda Z})=1+2/n = 1+ 8e\lambda$, which means $D-1-\lambda = (8e-1)\lambda$. We get $$\eta:= \frac{\delta\lambda^2}{D-1-\lambda} = \frac{\delta\lambda}{8e-1} = \frac{\delta}{4en(8e-1)}$$ (which is less than $\lambda$ if $\delta\le 8e-1$). Choosing $\delta:=n^{-1/2}$, we obtain $\eta=Cn^{-3/2}$ for $C:=1/(4e(8e-1))$. We set $t^*:=(\int_{\xmin}^{X_0} 1/h(x)\,\mathrm{d}x +r)/(1-\delta)$ in the first statement of Theorem \[theo:main-simplifiedexponential\]. The integral within $t^*$ can be bounded according to $$\begin{aligned} U & :=\int_{\xmin}^{X_0} \frac{1}{h(x)}\,\mathrm{d}x \le \sum_{i=1}^a \frac{1}{(2-2/n)(1-1/n)^{a-i}/n} \\ & = \left(\frac{1}{2}+\frac{1}{2n-2}\right)\cdot n \cdot \frac{(1+1/(n-1))^a-1}{1/(n-1)} = \frac{n^2}{2} \left(\left(1+\frac{1}{n-1}\right)^a - 1\right)\end{aligned}$$ Hence, using the theorem we get $$\Prob(T\ge t^*) = \Prob(T \ge (U + r)/(1-\delta)) \le e^{-\eta r} \le e^{- Cr n^{-3/2}}.$$ Since $U \le en^2$ and $1/(1-\delta)\le 1+2\delta = 1 + 2n^{-1/2}$, we get $$\Prob(T \ge U + 2en^{3/2} + 2r) \le e^{-Cr n^{-3/2}}.$$ Using the upper bound on $U$ derived above, we obtain $$\mathord{\Prob}\mathord{\left(T \ge \frac{n^2}{2} \left(\left(1+\frac{1}{n-1}\right)^a - 1\right) +r \right)} \le e^{-\Omega(r n^{-3/2})}$$ as suggested. Finally, we prove the third statement of this theorem in a quite symmetrical way to the second one. We can choose $h(x):=2(1-1/n)^{a-x}/n$ as an upper bound on the drift $E(X_t-X_{t+1}\mid X_t=x)$. The estimation of $E(e^{\lambda Z})$ still applies. We set $t^*:= (\int_{\xmin}^{X_0} 1/h(x)\,\mathrm{d}x - r)/(1+\delta)$. Moreover, we assume $X_0\ge a-\log^2 n$, which happens with probability at least $1-e^{-\Omega(\log^2 n)}$.
Note that $$\begin{aligned} L & := \int_{\xmin}^{X_0} \frac{1}{h(x)}\,\mathrm{d}x \ge \sum_{i=1}^{a-\log^2 n} \frac{1}{2(1-1/n)^{a-i}/n} \\ & = \frac{n^2-n}{2} \left(\left(1+\frac{1}{n-1}\right)^{a} - \left(1+\frac{1}{n-1}\right)^{\log^2 n}\right)\\ & \ge \frac{n^2-n}{2} \left(\left(1+\frac{1}{n-1}\right)^{a} - 1 -\frac{2\log^2 n}{n-1}\right),\end{aligned}$$ where the last inequality used $e^{x}\le 1+2x$ for $x\le 1$. The second statement of Theorem \[theo:main-simplifiedexponential\] yields (since state $0$ is absorbing) $$\Prob(T< t^*) = \Prob(T < (L - r)/(1+\delta)) \le e^{-\eta r} \le e^{- Cr n^{-3/2}}.$$ Now, since $$\frac{L-r}{1+\delta} \ge (L-r) - \delta(L-r) \ge L-r-en^{3/2},$$ (using $L\le en^2$), we get the third statement by calculations analogous to those above. Conclusions =========== We have presented a general and versatile drift theorem with tail bounds. The new theorem can be understood as a general variable drift theorem and can be specialized into all existing variants of variable, additive and multiplicative drift theorems we found in the literature as well as the fitness-level technique. Moreover, it provides lower and upper tail bounds, which were not available before in the context of variable drift. We used the tail bounds to prove sharp concentration inequalities on the optimization time of the (1+1) EA on OneMax, linear functions and LeadingOnes. The proofs also give general advice on how to use the tail bounds and we provide simplified (specialized) versions of the corresponding statements. We believe that the research presented here helps consolidate the area of drift analysis. The general formulation of drift analysis increases our understanding of the power of the technique and also its limitations. The tail bounds have been eagerly awaited in order to prove more practically relevant statements about the optimization time beyond the plain expected time. We expect to see further applications of our theorem in the future.
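As a quick empirical illustration of Theorem \[theo:tails-onemax\], the following Python sketch runs the (1+1) EA on OneMax repeatedly and reports how the observed optimization times cluster around $en\ln n$. The script, its parameter choices and all identifiers are ours and are not part of the formal development; it is only meant as a sanity check.

```python
import math
import random

def one_max_runtime(n, rng):
    """One run of the (1+1) EA with mutation rate 1/n on OneMax; returns the iteration count."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    t = 0
    while fitness < n:
        t += 1
        flips = [i for i in range(n) if rng.random() < 1.0 / n]
        gain = sum(1 - 2 * x[i] for i in flips)  # change in the number of ones
        if gain >= 0:                            # accept if the offspring is not worse
            for i in flips:
                x[i] = 1 - x[i]
            fitness += gain
    return t

rng = random.Random(2014)
n, runs = 100, 100
times = sorted(one_max_runtime(n, rng) for _ in range(runs))
center = math.e * n * math.log(n)
print(f"e*n*ln(n)             = {center:.0f}")
print(f"empirical mean        = {sum(times) / runs:.0f}")
print(f"10%- and 90%-quantile = {times[runs // 10]}, {times[(9 * runs) // 10]}")
# Theorem [theo:tails-onemax]: deviations of order r*e*n below/above e*n*ln(n)
# occur with probability at most e^{-r/2} resp. e^{-r}, i.e. the fluctuations
# are of order n, a ln(n)-factor smaller than the expectation itself.
```

With these (arbitrary) parameters the observed times typically lie within a few hundred steps of $en\ln n\approx 1250$, in agreement with tails of size $O(n)$ rather than $O(n\ln n)$.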
--- abstract: 'An $L$ operator is presented related to an infinite dimensional limit of the fusion $R$ matrices for $U_q(A^{(1)}_{n-1})$ and $U_q(D^{(1)}_n)$. It is factorized into the local propagation operators which quantize the deterministic dynamics of particles and antiparticles in the soliton cellular automata known as the box-ball systems and their generalizations. Some properties of the dynamical amplitudes are also investigated.' address: - 'Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, Japan' - 'Institute of Physics, University of Tokyo, Tokyo 153-8902, Japan' - 'Division of Mathematical Science, Graduate School of Engineering Science, Osaka University, Osaka 560-8531, Japan' author: - 'R. Inoue' - 'A. Kuniba' - 'M. Okado' title: 'A quantization of box-ball systems' --- Introduction {#sec:intro} ============ The discovery of the box-ball systems [@TS; @T; @TTMS] and their connection to the crystal basis theory [@HKT1; @HHIKTT; @FOY] has led to a new parallelism across the integrable systems of three origins, quantum, ultradiscrete and classical [@KOTY2]. They are a class of two dimensional vertex models in statistical mechanics, one dimensional soliton cellular automata and discrete soliton equations. The fundamental objects that govern the local dynamics in these systems are the triad of quantum $R$, combinatorial $R$ and tropical $R$, all satisfying the Yang-Baxter equation. They are a finite dimensional matrix, a bijection among finite sets and a birational map, which are characterized as the intertwiners of $U_q$ modules, crystals and geometric crystals, respectively. The box-ball systems $({\ensuremath{\mathfrak{g}_n}}=A^{(1)}_{n-1})$ and their generalizations to the ${\ensuremath{\mathfrak{g}_n}}$ automata [@HKT1; @HKOTY] are associated with the combinatorial $R$, which arises both as the $q \rightarrow 0$ limit of the quantum $R$ and as the ultradiscretization of the tropical $R$ [@KOTY1]. An interesting feature in these automata is the factorization of time evolution into a product of propagation operators of particles and antiparticles with fixed color [@HKT3; @KTT]. This is a consequence of the factorization of the combinatorial $R$ shown in [@HKT2]. Our aim in this paper is to elucidate a similar factorization for the relevant quantum $R$, and thereby to launch an integrable quantization of the deterministic dynamics of particles and antiparticles in the generalized box-ball systems. To illustrate the idea, consider for example the quantum affine algebra $U_q(A^{(1)}_{n-1})$ and its irreducible finite dimensional representation $V_m$ of $m$ fold symmetric tensors. The quantum $R$ matrix for $V_m \ot V_1$ (\[eqa:Rm1w\]) gives rise to the commuting transfer matrix $T_m(z)$ acting on $\cdots \ot V_1 \ot V_1 \ot \cdots$, which reduces, at $q=0$, to the time evolution of the box-ball system with capacity $m$ carrier [@TM]. One can naturally extract an $L$ operator, a Weyl algebra valued matrix, from the $m \rightarrow \infty$ limit of the $R$ matrix in the vicinity of the lowest weight vector. See (\[eqa:ex23\]) and (\[eqa:ex4\]) for example. More general $L$ operators can be constructed similarly corresponding to the $m$ generic situation. The limit considered here is motivated by the box-ball systems and has a special feature in that the resulting $L$ admits the factorization as in Proposition \[pra:factor\]. Each operator $K_i$ appearing there encodes the amplitudes for a local propagation of color $i$ particles as depicted in Fig. 
\[fig:Ki\]. At $q=0$, it reduces to the deterministic dynamics in the box-ball system [@T]. Sections \[subsec:R\]–\[subsec:qbbs3\] are devoted to an exposition of these observations. Sections \[subsec:norm\] and \[subsec:ba\] are concerned with some properties of the dynamical amplitudes and the implication of the Bethe ansatz, respectively. In section \[sec:D\] we establish parallel results on $D^{(1)}_n$ case. The calculation of the fusion $R \in \text{End}(V_m \ot V_1)$ is more involved than $A^{(1)}_{n-1}$. It is done in the limit $m \rightarrow \infty$ in appendix \[appD:W\]. The $L$ operator is given in section \[subsec:Ld\] and factorized in section \[subsec:facLd\]. The propagation operators describe the amplitudes of pair creation and annihilation of particles and antiparticles as depicted in Fig. \[fig:Kmu\]. A quantized $D^{(1)}_n$ automaton is presented in section \[subsec:dbbs\] with a few basic properties. The fusion construction of the $R$ matrices and their matrix elements for $A^{(1)}_{n-1}$ given in section \[sec:A\] are not new. They have been included for the sake of self-containment. The content of this paper may be regarded as a generalization of the one in [@HKT2] for $q=0$. It will be interesting to investigate the present results in the light of the works [@KT; @KR; @S]. $A^{(1)}_{n-1}$ case {#sec:A} ==================== $R$ matrix $R(z)$ and its fusion $R^{(m,1)}(z)$ {#subsec:R} ------------------------------------------------ We recall the standard fusion construction [@KRS]. Let $V=\C v_1 \oplus \cdots \oplus \C v_n$ be the vector representation of the quantum affine algebra $U_q=U_q(A^{(1)}_{n-1})$ without the derivation operator. Here $v_1$ is the highest weight vector and our convention of the coproduct is $\Delta(e_i) = e_i\ot 1 + t_i\ot e_i, \Delta(f_i) = f_i\ot t^{-1}_i + 1 \ot f_i$ for the Chevalley generators. The $R$ matrix $R(z) \in {\rm End}(V\ot V)$ reads $$\label{eqa:r} \begin{split} &R(z) = a(z)\sum_iE_{ii}\ot E_{ii} + b(z)\sum_{i\neq j}E_{ii}\ot E_{jj} + c(z)\left(z\sum_{i<j}+\sum_{i>j}\right) E_{ji}\ot E_{ij},\\ &a(z) = 1-q^2z,\quad b(z) = q(1-z),\quad c(z) = 1-q^2, \end{split}$$ where $E_{ij}$ is the matrix unit acting as $E_{ij}v_k = \delta_{jk}v_i$. It satisfies the Yang-Baxter equation $R_{23}(z'/z)R_{13}(z')R_{12}(z) = R_{12}(z)R_{13}(z')R_{23}(z'/z)$. The matrix ${\check R}(z)=PR(z)$ commutes with $\Delta(U_q)$, where $P$ denotes the transposition of the components. Let $V_m$ be the irreducible $U_q$ module spanned by the $m$ fold $q-$symmetric tensors. We take $V_1 = V$ and realize the space $V_m$ as the quotient $V^{\otimes m}/A$, where $A= \sum_j V^{\otimes j} \ot {\rm Im}PR(q^{-2}) \ot V^{\ot m-2-j}$. It is easy to see ${\rm Im}PR(q^{-2}) = {\rm Ker}PR(q^{2}) = \bigoplus_{i<j}\C(v_i\ot v_j - q v_j \ot v_i)$. For $n \ge i_1 \ge \cdots \ge i_m \ge 1$, we write the vector $(v_{i_1}\ot \cdots \ot v_{i_m} \mod A) \in V_m$ as $x=[x_1,\ldots, x_n]$, where $x_i$ is the number of the letter $i$ in the sequence $i_1, \ldots, i_m$. Thus, $x_i \in \Z_{\ge 0}$ and $x_1 + \cdots + x_n = m$ holds. Due to the Yang-Baxter equation, the operator $$\label{eqd:Rcomp} \frac{R_{1,m+1}(zq^{m-1})R_{2,m+1}(zq^{m-3})\cdots R_{m,m+1}(zq^{-m+1})} {a(zq^{m-3})a(zq^{m-5}) \cdots a(zq^{-m+1})}$$ can be restricted to ${\rm End}(V_m \ot V)$. 
As a result we get an $m$ by 1 fusion $R$ matrix $R^{(m,1)}(z) \in {\rm End}(V_m \ot V)$, which reads explicitly as $$\begin{aligned} &R^{(m,1)}(z)(x\ot v_j) = \sum_k w_{j k}[x \vert y] (y \ot v_k), \label{eqa:Rm1w}\\ &w_{j k}[x \vert y] = \begin{cases} q^{m-x_k}-q^{x_k+1}z & j=k\\ (1-q^{2x_k})q^{x_{k+1}+x_{k+2}+\cdots+ x_{j-1}}z & j>k\\ (1-q^{2x_k})q^{m-(x_j+x_{j+1}+\cdots+x_{k})} & j<k. \end{cases} \label{eqa:element}\end{aligned}$$ It is customary to attach the matrix element $w_{j k}[x \vert y]$ with a diagram like Fig. \[fig:wjk\]. (50,17)(-5,18) (10,24.8)[(1,0)[10]{}]{} (10,25.2)[(1,0)[10]{}]{} (15,30)[(0,-1)[10]{}]{} (7.8,24.3)[$x$]{}(21,24.3)[$y$]{} (14.2,31.6)[$j$]{}(14.2,16.7)[$k$]{} (-10,24)[$w_{j k}[x \vert y] = $]{} Here $y=[y_i]$ is specified by the weight conservation as $$\label{eqa:y} y_i = x_i + \delta_{i j}- \delta_{i k}$$ in terms of $x, j$ and $k$. At $q=0$, the matrix element $w_{j k}[x \vert y]$ is nonzero if and only if $x \ot v_j \simeq v_k \ot y$ in the combinatorial $R$: $B_m \ot B_1 \simeq B_1 \ot B_m$, where it takes the value $z^H$, with $1-H=$ winding number [@NY]. The fusion $R$ matrix $R^{(m,1)}(z)$ reduces to $R(z)$ in (\[eqa:r\]) for $m=1$, and it satisfies the Yang-Baxter equation in ${\rm End}(V_m \ot V \ot V)$: $$\label{eqa:ybe2} R_{23}(z'/z)R^{(m,1)}_{13}(z')R^{(m,1)}_{12}(z) = R^{(m,1)}_{12}(z)R^{(m,1)}_{13}(z') R_{23}(z'/z).$$ The $R$ matrix $R^{(1,m)}(z) \in {\rm End }(V \ot V_m)$ is similarly obtained as $R^{(1,m)}(z)(v_j \ot x) = \sum_k \bar{w}_{j k}[x \vert y] (v_k \ot y)$, where $$\bar{w}_{j k}[x \vert y] = \begin{cases} q^{m-x_k}-q^{x_k+1}z & j=k\\ (1-q^{2x_k})q^{m-(x_k+x_{k+1}+\cdots+x_{j})} & j>k\\ (1-q^{2x_k})q^{x_{j+1}+x_{j+2}+\cdots+ x_{k-1}}z & j<k. \end{cases}$$ The inversion relation $$\label{eqa:inv} PR^{(1,m)}(z^{-1})PR^{(m,1)}(z) = (1-q^{m+1}z)(1-q^{m+1}z^{-1})\text{Id}$$ is valid. $L$ operator $L(z)$ {#subsec:L} -------------------- Now we extract an $L$ operator $L(z)$ from a certain limit of $R^{(m,1)}(z)$. We illustrate the idea along the $n=3$ case. The 3 by 3 matrix $(w_{ji}[x \vert y])_{1 \le i,j \le 3}$ with $y$ chosen as (\[eqa:y\]) looks as $$\begin{pmatrix} q^{x_2+x_3}-q^{x_1+1}z & (1-q^{2x_1})z & (1-q^{2x_1})q^{x_2}z \\ (1-q^{2x_2})q^{x_3} & q^{x_1+x_3}-q^{x_2+1}z & (1-q^{2x_2})z \\ 1-q^{2x_3} & (1-q^{2x_3})q^{x_1} & q^{x_1+x_2}-q^{x_3+1}z \end{pmatrix}.$$ Throughout the paper we assume that $\vert q \vert < 1$. Consider the limit $m \rightarrow \infty$ with $x_1$ and $x_2$ kept fixed. Namely we take $x_3 \rightarrow \infty$ and stay in the vicinity of the lowest weight vector of $V_m$ as $m$ goes to infinity. The above matrix simplifies to $$\label{eqa:mat3} \begin{pmatrix} -q^{x_1+1}z & (1-q^{2x_1})z & (1-q^{2x_1})q^{x_2}z \\ 0 & -q^{x_2+1}z & (1-q^{2x_2})z \\ 1 & q^{x_1} & q^{x_1+x_2} \end{pmatrix}.$$ In the limit, the constraint $x_1+x_2 \le m$ becomes void and the vector $x =[x_1,x_2,x_3] \in V_m$ gets effectively labeled as $[x_1,x_2]$ with arbitrary $x_1, x_2 \in \Z_{\ge 0}$. For generic (nonzero) $x_1$ and $x_2$, the $(1,2)$ element $(1-q^{2x_1})z$ in (\[eqa:mat3\]), for example, is the matrix element of the transition $[x_1,x_2] \rightarrow [x_1-1,x_2+1]$ in view of (\[eqa:y\]). Similarly the $(2,3)$ element $(1-q^{2x_2})z$ is the one for $[x_1,x_2] \rightarrow [x_1,x_2-1]$. Introducing the operator $P_2$ and $Q_2$ that act on $[x_1, x_2]$ as $P_2[x_1,x_2]= q^{x_2}[x_1,x_2]$ and $Q_2[x_1,x_2]= [x_1,x_2+1]$, the $(2,3)$ element of (\[eqa:mat3\]) is represented as $zQ_2^{-1}(1-P^2_2)$. 
With the similar operators $P_1$ and $Q_1$ concerning the coordinate $x_1$, the matrix (\[eqa:mat3\]) is presented as $$\label{eqa:mat3pq} \begin{pmatrix} -zqP_1 & zQ^{-1}_1(1-P_1^2)Q_2 & zQ^{-1}_1(1-P_1^2)P_2 \\ 0 & -zqP_2 & zQ^{-1}_2(1-P^2_2) \\ Q_1 & P_1Q_2 & P_1P_2 \end{pmatrix}.$$ where operators are all commutative except $P_iQ_i = qQ_iP_i$. Motivated by these observations, we prepare for general $n$ the Weyl algebra generated by the pairs $P_i^{\pm 1}, Q_i^{\pm 1}\; (1 \le i \le n-1)$ under the relations $$\label{eqa:pqcom} \begin{split} &Q_iQ_j=Q_jQ_i, \quad P_iP_j=P_jP_i,\quad P_iQ_j = q^{\delta_{ij}}Q_jP_i,\\ &Q_iQ^{-1}_i = Q^{-1}_iQ_i=1, \quad P_iP^{-1}_i =P^{-1}_iP_i=1. \end{split}$$ We actually consider a slight generalization of (\[eqa:mat3pq\]) containing parameters $a_1, \ldots, a_{n-1}$. Let ${\mathcal A}$ be the subalgebra of the Weyl algebra generated by $$\label{eqa:pqr} P_i, \;\; Q_i, \;\; R_i = Q^{-1}_i(1-a_iP^2_i)\quad 1 \le i \le n-1.$$ We also use the subsidiary symbol $P'_i = -a_iqP_i$. The previous discussion corresponds to $\forall a_i=1$ case. The combination $R_i \in {\mathcal A}$ introduced here should not be confused with the $R$ matrix. Then we define the operator $L(z) \in {\mathcal A}\ot {\rm End}(V)$ by $$L(z) = \begin{pmatrix} L_{11}(z) & \cdots & L_{1n}(z)\\ \vdots & \ddots & \vdots\\ L_{n1}(z) & \cdots & L_{nn}(z) \end{pmatrix},$$ where $L_{ij}(z)\in {\mathcal A}$ is given by ($P_{i,j} = P_iP_{i+1}\cdots P_j$ for $i\le j$) $$\label{eqa:L-elements} L(z)_{i i} = \begin{cases} zP'_i & i < n,\\ P_{1, n-1} & i = n, \end{cases} \qquad L(z)_{i j} = \begin{cases} zR_iP_{i+1,j-1}Q_j & i < j < n,\\ zR_iP_{i+1,n-1} & i < j=n,\\ P_{1,j-1}Q_j & j<i=n,\\ 0 & j<i<n. \end{cases}$$ This is an operator interpretation of $w_{ji}[x \vert y]$ (\[eqa:element\]) in the limit $x_n \rightarrow \infty$ deformed with $a_1, \ldots, a_{n-1}$. See (\[eqa:LW\]). For example for $A^{(1)}_{1}$ and $A^{(1)}_{2}$, they read $$\label{eqa:ex23} L(z) = \begin{pmatrix} zP'_1 & zR_1\\ Q_1 & P_1 \end{pmatrix},\qquad L(z) = \begin{pmatrix} zP'_1 & zR_1Q_2 & zR_1P_2\\ 0 & zP'_2 & zR_2\\ Q_1 & P_1Q_2& P_{1,2} \end{pmatrix}.$$ The latter agrees with (\[eqa:mat3pq\]) when $\forall a_i = 1$. For $A^{(1)}_{3}$ one has $$\label{eqa:ex4} L(z) = \begin{pmatrix} zP'_1 & zR_1Q_2 & zR_1P_2Q_3 & zR_1P_{2,3} \\ 0 & zP'_2 & zR_2Q_3 & zR_2P_3 \\ 0 & 0 & zP'_3 & zR_3 \\ Q_1 & P_1Q_2 & P_{1,2}Q_3 & P_{1,3} \end{pmatrix}.$$ Our convention is $L(z)(\alpha \ot v_j) = \sum_i(L_{ij}(z)\alpha)\ot v_i$ for $\alpha \in {\mathcal A}$. Similarly we let $\overset{1}{L}(z) , \overset{2}{L}(z) \in {\mathcal A} \ot {\rm End}(V \ot V)$ denote the operators acting as $\overset{1}{L}(z)(\alpha \ot v_i \ot v_j) = \sum_k (L_{ki}(z)\alpha)\ot v_k \ot v_j$ and $\overset{2}{L}(z)(\alpha \ot v_i \ot v_j) = \sum_k (L_{kj}(z)\alpha)\ot v_i \ot v_k$. As an analogue of the Yang-Baxter equation (\[eqa:ybe2\]), we have \[pra:RLL\] $$R(z_2/z_1)\overset{2}{L}(z_2)\overset{1}{L}(z_1) = \overset{1}{L}(z_1)\overset{2}{L}(z_2)R(z_2/z_1) \in {\mathcal A}\ot{\rm End}(V \ot V).$$ In section \[subsec:facL\], this will be proved based on the factorization of $L(z)$. Factorization of $L(z)$ {#subsec:facL} ------------------------ Let us introduce the operators $K_i \in {\mathcal A} \ot {\rm End }(V)$ for $1 \le i \le n-1$ by $$\label{eqa:kdef} \begin{split} &K_i = ((K_{i})_{j,k})_{1 \le j,k \le n},\\ &(K_{i})_{i,i} = P'_i,\; (K_{i})_{i,n} = R_i,\; (K_{i})_{n,i} = Q_i,\; (K_{i})_{n,n} = P_i,\\ &(K_{i})_{j,j} = 1\,(j \neq i, n). 
\end{split}$$ The other elements are zero. The $K_i$ with $\forall a_i=1$ will be interpreted as the local propagation operator in quantized box-ball system in section \[subsec:qbbs3\]. We also introduce an $n$ by $n$ matrix $$D(z) = z \hbox{ diag}(1,\ldots, 1,z^{-1}),$$ which acts on $V$ only. \[pra:factor\] $$L(z) = D(z)K_1K_2 \cdots K_{n-1}$$ For example the latter in (\[eqa:ex23\]) is expressed as $$\begin{pmatrix} zP'_1 & zR_1Q_2 & zR_1P_2\\ 0 & zP'_2 & zR_2\\ Q_1 & P_1Q_2& P_1P_2 \end{pmatrix} = \text{diag}(z,z,1) \begin{pmatrix} P'_1 & 0 & R_1\\ 0 & 1 & 0\\ Q_1 & 0 & P_1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & P'_2 & R_2\\ 0 & Q_2& P_2 \end{pmatrix}.$$ Denote the $n$ by $n$ matrix $L(z\!=\!1)$ defined by (\[eqa:L-elements\]) by $L_n$. We are to show $K_1K_2\ldots K_{n-1} = L_n$ for $A^{(1)}_{n-1}$. This is done by induction on $n$. The case $n=3$ is checked in the above. Suppose the equality is valid for $n$. Then from the structure of the matrices $K_i$, one can evaluate $K_1K_2 \cdots K_{n}$ for $A^{(1)}_n$ as the product of $K_1$ and the rest as $$\label{eqa:kpro} \begin{small} \begin{pmatrix} P'_1 & & R_1 \\ & & \\ & \openone_{n-1} & \\ & & \\ Q_1 & & P_1 \end{pmatrix} \end{small} \begin{pmatrix} 1 & 0 & \cdots & 0\\ 0 & & & \\ \vdots & & L_n^+ \\ 0 & & & \end{pmatrix} = L_{n+1}.$$ Here $L_n^+$ is $L_n$ with all the constituent operators $X_i (X = P, P', Q, R)$ replaced by $X_{i+1}$, and $\openone_{n-1}$ is the identity matrix of size $n-1$. It is straightforward to verify this identity. \[rema:com\] Elements of ${\mathcal A}$ contained in any single $L_{ij}(z)$ (\[eqa:L-elements\]) are all commutative. As a result, the identity (\[eqa:kpro\]) holds under any interchange of $P_i, P'_i, Q_i$ and $R_i$ on the both sides. Let us make use of the factorization to prove Proposition \[pra:RLL\]. We first define $\sigma_1, \ldots, \sigma_{n-1}, \sigma \in {\rm End }(V)$ by $$\begin{aligned} &\sigma_iv_j = \begin{cases} v_{i+1} & \hbox{ if } j=i\\ v_i & \hbox{ if } j=i+1\\ v_j & \hbox{ otherwise,} \end{cases}\\ &\sigma = \sigma_{n-1}\sigma_{n-2} \cdots \sigma_1.\end{aligned}$$ Thus $\sigma v_j = v_{j-1}$ is valid for indices in $\Z/n\Z$. Consider the following gauge transformation of $K_i$: $$\label{eqa:sdef1} S_i = \sigma_i \sigma_{i+1}\cdots\sigma_{n-1}K_i \sigma_{n-1}\sigma_{n-2}\cdots \sigma_{i+1} \quad 1 \le i \le n-1.$$ The components of $S_i \in {\mathcal A} \ot {\rm End }(V)$ are given by $$\label{eqa:sdef2} \begin{split} &S_i = ((S_{i})_{j,k})_{1 \le j,k \le n},\\ &(S_{i})_{i,i+1} = P_i,\; (S_{i})_{i+1,i+1} = R_i,\; (S_{i})_{i,i} = Q_i,\; (S_{i})_{i+1,i} = P'_i,\\ &(S_{i})_{j,j} = 1\,(j \neq i, i+1). \end{split}$$ The other components are zero. Note that Proposition \[pra:factor\] is rewritten as $$\label{eqa:ls} L(z) = D(z)\sigma S_1 S_2 \cdots S_{n-1}.$$ Now Proposition \[pra:RLL\] is a corollary of the formula (\[eqa:ls\]) and \[lema:rss\] $$\begin{aligned} &R(z_2/z_1)(D(z_1)\sigma \ot D(z_2)\sigma) = (D(z_1)\sigma \ot D(z_2)\sigma)R(z_2/z_1),\\ &R(z)\overset{2}{S}_i\overset{1}{S}_i = \overset{1}{S}_i\overset{2}{S}_iR(z)\quad 1 \le i \le n-1.\end{aligned}$$ The first relation is directly confirmed. It is enough to check the latter at two distinct values of $z$. It is trivially valid at $z=1$ and easily checked at $z=q^{-2}$. \[rema:K2\] If $a_i = 1$, the property $$K^2_i = \openone_n$$ is valid for $1 \le i \le n-1$. This is a remnant of the inversion relation (\[eqa:inv\]). It implies $L(z)^{-1} = K_{n-1}\cdots K_1D(z^{-1})$. 
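Proposition \[pra:factor\] and Remark \[rema:K2\] lend themselves to a quick numerical check. The following Python sketch (ours, not part of the paper; all identifiers and parameter values are our choices) realizes $P_i, Q_i, R_i$ with $a_i=1$ on monomials $[m_1,m_2]$, letting $P_i$ act by $q^{m_i}$, $Q_i$ raise $m_i$ by one and $R_i$ lower $m_i$ with the factor $1-q^{2m_i}$; this is the realization that reappears below as the quantum carrier (\[eqa:M\]). It then verifies $L(z)=D(z)K_1K_2$ and $K_1^2=1$ for $n=3$ on randomly sampled states.

```python
import random

q, z = 0.37, 1.6            # any |q| < 1 and a generic spectral parameter

def P(i):                   # P_i [m_1, m_2] = q^{m_i} [m_1, m_2]
    return lambda s: {m: c * q ** m[i] for m, c in s.items()}

def Pp(i):                  # P'_i = -q P_i   (a_i = 1)
    return lambda s: {m: -q * c * q ** m[i] for m, c in s.items()}

def Q(i):                   # Q_i raises m_i by one
    return lambda s: {tuple(v + (k == i) for k, v in enumerate(m)): c
                      for m, c in s.items()}

def R(i):                   # R_i = Q_i^{-1}(1 - P_i^2)
    def act(s):
        out = {}
        for m, c in s.items():
            if m[i] > 0:
                mm = tuple(v - (k == i) for k, v in enumerate(m))
                out[mm] = out.get(mm, 0.0) + c * (1 - q ** (2 * m[i]))
        return out
    return act

def scal(a):
    return lambda s: {m: a * c for m, c in s.items()}

ZERO, ONE = (lambda s: {}), (lambda s: dict(s))

def comp(*fs):              # operator product, the leftmost factor acts last
    def act(s):
        for f in reversed(fs):
            s = f(s)
        return s
    return act

def madd(s1, s2):
    out = dict(s1)
    for m, c in s2.items():
        out[m] = out.get(m, 0.0) + c
    return out

def matmul(A, B):           # product of operator-valued matrices
    N = len(A)
    def entry(i, j):
        def act(s):
            out = {}
            for k in range(N):
                out = madd(out, comp(A[i][k], B[k][j])(s))
            return out
        return act
    return [[entry(i, j) for j in range(N)] for i in range(N)]

# K_1, K_2 as in (eqa:kdef) and D(z), L(z) as in (eqa:ex23); rows/columns 1, 2, n
K1 = [[Pp(0), ZERO, R(0)], [ZERO, ONE, ZERO], [Q(0), ZERO, P(0)]]
K2 = [[ONE, ZERO, ZERO], [ZERO, Pp(1), R(1)], [ZERO, Q(1), P(1)]]
D  = [[scal(z), ZERO, ZERO], [ZERO, scal(z), ZERO], [ZERO, ZERO, ONE]]
L  = [[comp(scal(z), Pp(0)), comp(scal(z), R(0), Q(1)), comp(scal(z), R(0), P(1))],
      [ZERO,                 comp(scal(z), Pp(1)),      comp(scal(z), R(1))],
      [Q(0),                 comp(P(0), Q(1)),          comp(P(0), P(1))]]

DK1K2, K1sq = matmul(D, matmul(K1, K2)), matmul(K1, K1)

def close(s1, s2, eps=1e-12):
    return all(abs(s1.get(m, 0.0) - s2.get(m, 0.0)) < eps for m in set(s1) | set(s2))

rng = random.Random(0)
for _ in range(200):
    s = {(rng.randint(0, 5), rng.randint(0, 5)): rng.uniform(-1.0, 1.0)}
    for i in range(3):
        for j in range(3):
            assert close(L[i][j](s), DK1K2[i][j](s))          # Proposition [pra:factor]
            assert close(K1sq[i][j](s), s if i == j else {})  # Remark [rema:K2]
print("L(z) = D(z) K_1 K_2 and K_1^2 = 1 hold on all sampled states.")
```

States are stored as dictionaries rather than truncated matrices so that the raising operator $Q_i$ never leaves the sampled space; up to floating-point round-off the check is exact.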
The formula (\[eqa:ls\]) was known at $q=0$ as a factorization of combinatorial $R$ [@HKT2], where $S_i$ appeared as the Weyl group operator on crystal basis. For $A^{(1)}_1$, the $L$ operator here can also be obtained by specializing the $q$ generic case of the one in [@BS]. The case $\forall a_i = 0$ has appeared in the quantized Volterra model for $A^{(1)}_{n-1}$ [@HIK]. Quantized box-ball system: Space of states {#subsec:qbbs1} ------------------------------------------ Consider the formal infinite tensor product of $V = \C v_1 \oplus \cdots \oplus \C v_n$: $$\label{eqa:VV} \cdots \ot V \ot V \ot V \ot \cdots = \oplus\; \C \, (\cdots \ot v_{j_{-1}}\ot v_{j_0} \ot v_{j_1} \ot \cdots).$$ An element of the form $c(\cdots \ot v_{j_{-1}}\ot v_{j_0} \ot v_{j_1} \ot \cdots)$ will be called a monomial (a monic monomial if with $c=1$). The space of states of our quantized box-ball system is the subspace of (\[eqa:VV\]) given by $$\label{eqa:pdef} {\mathcal P} = \{ \sum_{p: \text{monic monomial}} c_p p \mid \hbox{conditions (i) and (ii)} \},$$ where (i) $\sum_{k \in \Z} \vert j_k -n \vert < \infty$ for any $p = \cdots \ot v_{j_{-1}}\ot v_{j_0} \ot v_{j_1} \ot \cdots$ appearing in the sum, (ii) there exists $N \in \Z$ such that $\lim_{q \rightarrow 0} q^N\sum_p c_p p = 0$. Monomials can be classified according to the numbers $w_1,\ldots, w_{n-1}$ of occurrence of the letters $1, \ldots, n-1$ in the set $\{j_k\}$. Consequently one has the direct sum decomposition: $$\label{eqa:dsd} {\mathcal P} = \oplus {\mathcal P}_{w_1,w_2,\ldots, w_{n-1}},$$ where the sum runs over $(w_1,\ldots, w_{n-1}) \in \Z^{n-1}_{\ge 0}$. We have ${\mathcal P}_{0,\ldots,0}= \C p_{\rm vac}$, where $p_{\rm vac} = \cdots \ot v_n \ot v_n \ot \cdots$. The local states $v_{j_k} \in V$ is regarded as the $k$th box containing a ball with color $j_k$ if $j_k \neq n$, and the empty box if $j_k=n$. The space of states of the box-ball system is the totality of the monomials in the above sense. The space of states ${\mathcal P}$ of our quantized box-ball system consists of linear superpositions thereof. Time evolution {#subsec:qbbs2} -------------- We set $\forall a_i = 1$ in the remainder of section \[sec:A\]. Then the following provides an ${\mathcal A}$ module ${\mathcal M}$: $$\label{eqa:M} \begin{split} &{\mathcal M} = \oplus_{m_1, \ldots, m_{n-1} \in \Z_{\ge 0}} \C [m_1,\ldots, m_{n-1}],\\ &P_i[\ldots, m_i,\ldots] = q^{m_i}[\ldots, m_i,\ldots],\\ &Q_i[\ldots, m_i,\ldots] = [\ldots,m_i\!+\!1,\ldots],\\ &R_i[\ldots, m_i,\ldots] = (1-q^{2m_i})[\ldots, m_i\!-\!1,\ldots], \end{split}$$ where the right hand side of the last formula is to be understood as 0 at $m_i=0$. The space ${\mathcal M}$ will be regarded as the space of the quantum carrier. By construction, for $x = [x_1,\ldots, x_{n-1}] \in {\mathcal M}$ one has $$\label{eqa:LW} \begin{split} &L(z)(x \ot v_j) = \sum_kW_{jk}[x \vert y](y \ot v_k),\\ &W_{jk}[x \vert y]= \lim_{x_n \rightarrow \infty} w_{jk}[x_1,\ldots,x_{n-1},x_n \vert y_1,\ldots,y_{n-1},y_n], \end{split}$$ where $y$ is determined from (\[eqa:y\]) in terms of $j,k$ and $x$. 
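The matrix elements entering (\[eqa:LW\]) are easy to tabulate directly. The short sketch below (ours; the function names are not from the paper) implements the $x_n\rightarrow\infty$ limit of (\[eqa:element\]) together with the update (\[eqa:y\]), keeping only the first $n-1$ coordinates of the carrier state; for $n=3$ it reproduces the matrix (\[eqa:mat3\]), whose row index is the outgoing letter $k$ and whose column index is the incoming letter $j$.

```python
def W(j, k, x, z, q, n):
    """Limit of w_{jk}[x|y] as x_n -> infinity; x = (x_1, ..., x_{n-1})."""
    if k == n:                                   # outgoing letter n
        return q ** sum(x[:j - 1])
    if j == k:                                   # j = k < n
        return -q ** (x[k - 1] + 1) * z
    if j > k:                                    # k < j <= n
        return (1 - q ** (2 * x[k - 1])) * q ** sum(x[k:j - 1]) * z
    return 0.0                                   # j < k < n vanishes in the limit

def carrier_out(j, k, x, n):
    """y of (eqa:y); letters equal to n are not stored in the limit."""
    y = list(x)
    if j < n:
        y[j - 1] += 1
    if k < n:
        y[k - 1] -= 1
    return tuple(y)

# Example: for n = 3 and x = (x_1, x_2) = (2, 1), the rows k = 1, 2, 3 below
# coincide with the rows of the matrix (eqa:mat3).
q, z, x = 0.3, 1.7, (2, 1)
for k in (1, 2, 3):
    print([round(W(j, k, x, z, q, 3), 6) for j in (1, 2, 3)])
```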
According to the standard construction of transfer matrices in two dimensional solvable vertex models [@Bax], the time evolution $T(z): {\mathcal P} \rightarrow {\mathcal P}$ is constructed as a composition of local $L$ operators as $$\label{eqa:defT} T(z) = \bigl(\cdots \overset{1}{L}(z) \overset{0}{L}(z) \overset{-1}{L}(z) \cdots \bigr)_{0,0}.$$ Here $\overset{k}{L}(z) \in {\rm End }({\mathcal M} \ot {\mathcal P})$ signifies the representation of the $L$ operator: $$\begin{split} &\overset{k}{L}(z)\bigl(m \ot (\cdots \ot v_{j_{k-1}} \ot v_{j_k} \ot v_{j_{k+1}} \ot \cdots) \bigr)\\ &=\sum_i \bigl(L_{ij_k}(z)m\bigr) \ot (\cdots \ot v_{j_{k-1}} \ot v_{i} \ot v_{j_{k+1}} \ot \cdots), \end{split}$$ where $L_{ij_k}(z)m$ for $m \in {\mathcal M}$ is specified by (\[eqa:M\]). The symbol $(\cdots)_{0,0}$ in (\[eqa:defT\]) stands for the element in ${\rm End }({\mathcal P})$ that is attached to the transition $[0,\ldots,0] \mapsto [0,\ldots,0]$ in the ${\mathcal M}$ part. By the definition $T(z)$ preserves the weight subspace ${\mathcal P}_{w_1,w_2,\ldots, w_{n-1}}$ and acts homogeneously on it as $$\label{eqa:homo} T(z)p = z^{w_1+\cdots+w_{n-1}}T(1)p\quad \hbox{for } p \in {\mathcal P}_{w_1,w_2,\ldots, w_{n-1}}.$$ Therefore the commutativity $T(z)T(z') = T(z')T(z)$ is trivially valid. Henceforth we concentrate on $T=T(z\!=\!1)$, and $T(p)$ for $p \in {\mathcal P}$ is to be understood as $T(1)p$. Factorized dynamics {#subsec:qbbs3} ------------------- The time evolution $T$ admits a simple description as the product of propagation operators. Set $$\label{eqa:pKdef1} {\mathcal K}_i = \bigl(\cdots \overset{1}{K}_{i} \overset{0}{K}_{i} \overset{-1}{K}_{i} \cdots \bigr)_{0,0} \in {\rm End }({\mathcal P}) \quad 1 \le i \le n-1,$$ where the representation $\overset{k}{K}_{i} \in {\rm End }({\mathcal M}\ot{\mathcal P})$ is specified from $K_i$ (\[eqa:kdef\]) in the same way as $\overset{k}{L}(z)$ was done via $L(z)$. To interpret ${\mathcal K}_i$ pictorially, we attach the following diagrams to the local operator $K_i$. (125,23)(0,-0.7) (0,0)(20,0)[5]{}[ (5,8)[(1,0)[6]{}]{}(8,11)[(0,-1)[6]{}]{} (1.6,7.3)[$m_i$]{}]{} (12,7.3)[$m_i$]{} (32,7.3)[$m_i\!-\!1$]{} (52,7.3)[$m_i\!+\!1$]{} (72,7.3)[$m_i$]{} (92,7.3)[$m_i$]{} (7.5,12.1)[$n$]{}(7.5,2.1)[$n$]{} (27.5,12.1)[$n$]{}(27.5,2.1)[$i$]{} (47.5,12.1)[$i$]{}(47.5,2.1)[$n$]{} (67.5,12.1)[$i$]{}(67.5,2.1)[$i$]{} (87.5,12.1)[$j$]{}(87.5,2.1)[$j$]{} (7.1,-2)[$q^{m_i}$]{}(7.1,16.5)[$P_i$]{} (24.8,-2)[$1-q^{2m_i}$]{}(27.1,16.5)[$R_i$]{} (47.3,-2)[$1$]{}(47.1,16.5)[$Q_i$]{} (65,-2)[$-q^{m_i+1}$]{}(65.1,16.5)[$-qP_i$]{} (87.3,-2)[$1$]{}(87.3,16.5)[$1$]{} Here $m_i \in \Z_{\ge 0}$ is a coordinate in $[m_1,\ldots, m_{n-1}] \in {\mathcal M}$. The horizontal and vertical arrows correspond to ${\mathcal M}$ and $V$, respectively. The diagrams depict the interaction between the local box and the quantum carrier containing $m_i$ balls of color $i$. The carrier coming from the left encounters the local box whose state are specified on the top. It picks up/down a color $i$ ball or does nothing and proceeds to the right leaving the box in the state given in the bottom with the listed amplitudes. The first line in the figure gives the operators acting on ${\mathcal M}$ that yield the amplitudes on the last line. 
For example one has $$\begin{aligned} K_i([\ldots,m_i,\ldots]\ot v_n) &= (P_i[\ldots,m_i,\ldots]) \ot v_n + (R_i[\ldots,m_i,\ldots]) \ot v_i\\ &= q^{m_i}[\ldots,m_i,\ldots] \ot v_n + (1-q^{2m_i})[\ldots, m_i-1,\ldots] \ot v_i.\end{aligned}$$ The second term describes unloading whereas the first term is just a passage. It is easy to see that at $q=0$, $K_i$ reduces to the deterministic operator which coincides with the local interaction between a carrier and a box [@TM] in the conventional box-ball system [@T; @TS]. Now the composition (\[eqa:pKdef1\]) is expressed as Fig. \[fig:calKi\]. (90,16)(-5,3) (0,0)(25,0)[3]{} [ (5,8)[(1,0)[6]{}]{}(8,11)[(0,-1)[6]{}]{} (16,8)[(1,0)[6]{}]{}(19,11)[(0,-1)[6]{}]{} ]{} (-2.3,8)[(1,0)[4]{}]{}(76.2,8)[(1,0)[4]{}]{} (2.5,7.3)[$0$]{}(13,7.3)[$0$]{} (27.2,7.3)[$s_0$]{}(38,7.3)[$s_1$]{}(48.4,7.3)[$s_2$]{} (63.1,7.3)[$0$]{}(73.7,7.3)[$0$]{} (-6,7.3)[$\cdots$]{}(23,7.3)[$\cdots$]{} (51.1,7.3)[$\cdots$]{}(81,7.3)[$\cdots$]{} (7.4,12.2)[$n$]{}(18.4,12.2)[$n$]{} (32.4,12.2)[$j_0$]{}(43.4,12.2)[$j_1$]{} (57.4,12.2)[$n$]{}(68.4,12.2)[$n$]{} (7.4,2.2)[$n$]{}(18.4,2.2)[$n$]{} (32.4,2.2)[$i_0$]{}(43.4,2.2)[$i_1$]{} (57.4,2.2)[$n$]{}(68.4,2.2)[$n$]{} The amplitude of ${\mathcal K}_i$ assigned with the transition from $(\cdots \ot v_{j_0}\ot v_{j_1} \ot \cdots)$ to $(\cdots \ot v_{i_0}\ot v_{i_1} \ot \cdots)$ is obtained as the product of all the amplitudes attached to the local vertices in Fig. \[fig:calKi\] according to the rule specified in Fig. \[fig:Ki\]. The calculation involves an infinite product, which is well defined for elements in ${\mathcal P}$. See section \[subsec:norm\] for examples of computations of the amplitudes. \[th:facT\] The time evolution of the quantized box-ball system admits a factorization into propagation operators as $$T = {\mathcal K}_1 \cdots {\mathcal K}_{n-1}.$$ This is a consequence of the definitions (\[eqa:defT\]), (\[eqa:pKdef1\]) and the factorization of the $L$ operator established in Proposition \[pra:factor\]. At $q=0$, Theorem \[th:facT\] reduces to the original description of the time evolution in the box-ball system [@T] as the composition of finer process to move balls with a fixed color. Some properties of amplitudes {#subsec:norm} ----------------------------- For simplicity we concentrate on $A^{(1)}_1$ case in the remainder of section \[sec:A\], where one only has one kind of ball and $T= {\mathcal K}_1$. However, by virtue of Theorem \[th:facT\], all the essential statements are equally valid for general $A^{(1)}_{n-1}$ under an appropriate resetting. In particular, Proposition \[pra:TT\] and Proposition \[pra:norm1\] remain valid not only for $T$ but also ${\mathcal K}_i$ for any $1 \le i \le n-1$. Let us write the action of the time evolution of a monic monomial $p \in {\mathcal P}$ as $T(p) = \sum_{p'}A_{p',p}p'$, where the sum is taken over monic monomials $p' \in {\mathcal P}$. We then define the transposition ${}^tT$ of $T$ by ${}^tT(p) = \sum_{p'}A_{p,p'}p'$. \[pra:TT\] $${}^tT = T^{-1}$$ In view of Remark \[rema:K2\], the inverse $T^{-1}= {\mathcal K}^{-1}_1$ is obtained by reversing the horizontal arrows in Fig. \[fig:Ki\] and sending the carrier from the right to the left correspondingly in Fig. \[fig:calKi\]. By using this fact, one can verify the claim. See also Remark \[rema:inv\]. Let $(\;,\;)$ be the inner product such that $(p,p') = \delta_{p,p'}$ for all the monic monomials $p$ and $p'$. It is well defined on a subset of ${\mathcal P} \times {\mathcal P}$. 
Then Proposition \[pra:TT\] tells that $(T(r),T(s)) = (r,s)$ for $(r,s)$ belonging to the subset. This property leads to a family of $q$-series identities. In fact one has $\sum_p A_{p,r}A_{p,s} = \delta_{r s}$ for any monic monomials $r$ and $s$. Pick the monomial $p = \cdots \ot v_2 \ot v_1 \ot v_2 \ot \cdots$ for instance. Then the left hand side of $(T(p),T(p))=1$, the sum of squared amplitudes, is calculated as $$(-q)^2 + \sum_{k \ge 0}\bigl(q^k(1-q^2)\bigr)^2 = 1.$$ Similarly for the monomial $p = \cdots \ot v_2 \ot v_1 \ot v_1 \ot v_2 \ot \cdots$, the contributions to $(T(p),T(p))=1$ are grouped into the four cases as in Fig. \[fig:2sol\], which add up to 1. (50,42)(25,0) (0,0)(0,10)[4]{} [(5,8)(8,8)]{} (5,35)(8,35) (30,36)[$(-q)^4$]{} (5,25)(8,25) (9,25)[$\underbrace{\cdots}_{k}$]{} (15,25) (30,26)[$(-q)^2\sum_{k \ge 0} (q^k)^2(1-q^2)^2 = q^2(1-q^2)$]{} (5,15)(8,15) (9,15)[$\underbrace{\cdots}_{k}$]{} (15,15) (30,16)[$(-q^2)^2\sum_{k \ge 0} (q^k)^2(1-q^2)^2 = q^4(1-q^2)$]{} (5,5)(8,5) (9,5)[$\underbrace{\cdots}_{k_1}$]{} (15,5) (16.5,5)[$\underbrace{\cdots}_{k_2}$]{} (22.5,5) (30,6)[$\sum_{k_1,k_2 \ge 0} (q^{2k_1+k_2})^2(1-q^2)^2(1-q^4)^2 = (1-q^2)(1-q^4)$]{} Here the symbols $\bullet$ and $\circ$ stand for a ball $v_1$ and an empty box $v_2$, respectively. The symbol $\cdots$ represents an array of empty boxes of the specified number. In each group, the upper configuration is $p$ and the lower one is a monomial occurring in $T(p)$. So far we have considered the quadratic form $(\;,\;)$. Now we turn to a linear one. We use the standard notation $$\begin{aligned} &(z)_m = (z;q)_m = (1-z)(1-zq)\cdots (1-zq^{m-1}),\\ &\left[ \begin{array}{c} m \\ k \end{array} \right] = \frac{(q)_m}{(q)_{k}(q)_{m-k}}.\end{aligned}$$ For $t \le \min(l,m)$, let $\beta_{m,t,l}$ be the sum of all the amplitudes for $l$ successive vacant boxes to acquire $t$ balls during the passage of a carrier containing $m$ balls. Namely, it is the sum of the amplitudes for Fig. \[fig:beta\] over $1 \le i_1 < i_2 < \cdots < i_t \le l$. (100,16)(-20,2) (14.4,12.8)[$1$]{}(19.4,12.8)[$2$]{}(44.4,12.8)[$l$]{} (10,8)[(1,0)[17]{}]{}(28.5,7.3)[$\cdots$]{}(33,8)[(1,0)[17]{}]{} (15,5)(5,0)[3]{}[(0,0)[(0,1)[6]{}]{}]{} (35,5)(5,0)[3]{}[(0,0)[(0,1)[6]{}]{}]{} (6,7)[$m$]{}(52,7)[$m-t$]{} (15,11.5)(5,0)[3]{}[(0,0)]{} (35,11.5)(5,0)[3]{}[(0,0)]{} (25,4.5)(40,4.5) (15,4.5)(20,4.5) (35,4.5)(45,4.5) (24.7,1.5)[$i_1$]{}(39.7,1.5)[$i_t$]{} \[lema:beta\] $$\beta_{m,t,l} = q^{(m-t)(l-t)} (1-q^{2m})(1-q^{2m-2})\cdots (1-q^{2(m-t+1)}) \left[ \begin{array}{c} l \\ t \end{array} \right].$$ The contribution from Fig. \[fig:beta\] is $$\begin{split} &(1-q^{2m})(1-q^{2m-2})\cdots (1-q^{2(m-t+1)})\\ &\times q^{m(i_1-1)+(m-1)(i_2-i_1-1)+\cdots + (m-t+1)(i_t-i_{t-1}-1) +(m-t)(l-i_t)}. \end{split}$$ The claim follows by summing this over $1 \le i_1 < i_2 < \cdots < i_t \le l$. Let ${\mathcal P}_{\rm fin}$ be the subspace of ${\mathcal P}$ spanned by the superpositions of monomials $\sum_p c_p p$ in which $\sum_p c_p$ exists. For instance, monomials are elements of ${\mathcal P}_{\rm fin}$. Consider the linear function ${\mathcal N}: {\mathcal P}_{\rm fin} \rightarrow \C$ that takes value $1$ on all the monic monomials. \[pra:norm1\] $T$ preserves ${\mathcal N}$, i.e., ${\mathcal N}(T(p)) = {\mathcal N}(p)$ for any $p \in {\mathcal P}_{\rm fin}$. For example for $p = \cdots v_2 \ot v_1 \ot v_2 \ot \cdots$, one has $${\mathcal N}(p) = -q + \sum_{k \ge 0}q^k(1-q^2) = 1.$$ For $p = \cdots v_2 \ot v_1 \ot v_1 \ot v_2 \ot \cdots$ considered in Fig. 
\[fig:2sol\], one has $${\mathcal N}(p) = (-q)^2 - q\sum_{k \ge 0}q^k(1-q^2) - q^2\sum_{k \ge 0}q^k(1-q^2) + \sum_{k_1,k_2 \ge 0}q^{2k_1+k_2}(1-q^2)(1-q^4) = 1.$$ The remainder of this section \[subsec:norm\] is devoted to a proof of Proposition \[pra:norm1\]. We begin by introducing a map $\Phi_m$ for $m \in \Z_{\ge 0}$, which is a slight generalization of $T$. We set $\Phi_0 = T$. For $m \ge 1$, $\Phi_m$ acts on ${\mathcal P}_{\rm fin}\setminus\{p_{\rm vac}\}$ as follows. Pick any monic monomial $p \in {\mathcal P}_{\rm fin}\setminus\{p_{\rm vac}\}$ and decompose it uniquely as $p = p_{{\rm {\small left}}} \ot p_{{\rm {\small right}}}$, so that $p_{{\rm {\small left}}}$ is free of balls and the leftmost component of $p_{{\rm {\small right}}}$ is a ball. Let $p'_{{\rm {\small right}}}$ be the linear combination of the monic monomials generated by the penetration of the carrier initially containing $m$ balls through $p_{{\rm {\small right}}}$ to the right. See Fig. \[fig:p’\]. (60,20)(-7,-2) (14.4,12.8)[$\overbrace{\qquad\qquad\qquad}^{p_{{\rm {\small right}}}}$]{} (10,8)[(1,0)[17]{}]{} (15,5)(5,0)[2]{}[(0,0)[(0,1)[6]{}]{}]{} (6,7)[$m$]{}(29,7)[$\cdots$]{} (15,11.5) (14.4,4)[$\underbrace{\qquad\qquad\qquad}_{p'_{{\rm {\small right}}}}$]{} We set $\Phi_m(p) = p_{{\rm {\small left}}} \ot p'_{{\rm {\small right}}}$ and extend it linearly to the map $\Phi_m: {\mathcal P}_{\rm fin} \setminus\{p_{\rm vac}\} \rightarrow {\mathcal P}_{\rm fin}\setminus\{p_{\rm vac}\}$. It is a direct sum of the action ${\mathcal P}_{{\rm fin},N} \rightarrow {\mathcal P}_{{\rm fin},N+m}$ over $N \in \Z_{\ge 1}$, where the notation ${\mathcal P}_{{\rm fin},N}$ is the $n=2$ case of (\[eqa:dsd\]) restricted to ${\mathcal P}_{\rm fin}$. Proposition \[pra:norm1\] is obvious for $p = p_{\rm vac}$. Since ${\mathcal N}$ is linear, the other case follows from the $m=0$ case of \[pra:alpha\] For any monic monomial $p \in {\mathcal P}_N$, $$\label{eqa:alpha} {\mathcal N}(\Phi_m(p)) = (1+q)(1+q^2)\cdots (1+q^m)$$ is valid for any $m \ge 0$ and $N \ge 1$. The right hand side depends on $m$ but not on $N$, hence it will be denoted by $\alpha_m$. Note that $\alpha_m = \beta_{m,m,\infty}$. We show (\[eqa:alpha\]) by induction on $N$. For $N=1$, the relevant configurations either accommodate a ball or not just below the initial one. The former contributes $-q^{m+1}\beta_{m,m,\infty}$ to ${\mathcal N}(\Phi_m(p))$ and the latter does $\beta_{m+1,m+1,\infty}$. The two contributions indeed sum up to $\alpha_m$. Assume the claim for $N$. In the monic monomial $p \in {\mathcal P}_{N+1}$, suppose there are $l$ empty boxes between the leftmost ball and its nearest neighbor. The configurations that accommodate $t$ balls in the $l$ boxes are classified into the two cases in Fig. \[fig:two\]. (100,32)(-20,-17) (0,0)(0,-17)[2]{} (10,8)[(1,0)[13]{}]{}(51,8)[(1,0)[8]{}]{} (25,7.3)[$\cdots$]{}(61.5,7.3)[$\cdots$]{} (30,8)[(1,0)[10]{}]{} (15,5)[(0,1)[6]{}]{} (20,5)[(0,1)[6]{}]{} (35,5)[(0,1)[6]{}]{} (55,5)[(0,1)[6]{}]{} (6,7)[$m$]{} (15,11.5) (20,11.5) (35,11.5) (55,11.5) (20,4)[$\underbrace{\qquad\qquad\qquad}_{t \;{\rm balls}}$]{} (43,7)[$m\!-\!t$]{} (41,-10)[$m\!+\!1\!-\!t$]{} (15,4.5) (15,-12.5) Accordingly we have the recursion relation $${\mathcal N}(\Phi_m(p)) = -q^{m+1} \sum_{t=0}^{\min(l,m)}\beta_{m,t,l}{\mathcal N}(\Phi_{m-t}({\tilde p}))+ \sum_{t=0}^{\min(l,m+1)}\beta_{m+1,t,l} {\mathcal N}(\Phi_{m+1-t}({\tilde p})),$$ where ${\tilde p} \in {\mathcal P}_N$ is the monic monomial obtained by removing the leftmost ball from $p$. 
Thus we are done if $$\alpha_m = -q^{m+1} \sum_{t=0}^{\min(l,m)}\beta_{m,t,l}\alpha_{m-t}+ \sum_{t=0}^{\min(l,m+1)}\beta_{m+1,t,l}\alpha_{m+1-t}$$ is shown. This is a corollary of Lemma \[lema:qpol\]. \[lema:qpol\] Let $l,m \in \Z_{\ge 0}$. Then $$\alpha_m = \sum_{t=0}^{\min(l,m)}\beta_{m,t,l}\alpha_{m-t}.$$ We are to show $$1 = \sum_{t=0}^{\min(l,m)}q^{(l-t)(m-t)} \frac{(q)_l(q)_m}{(q)_t(q)_{l-t}(q)_{m-t}}.$$ Since the both sides are symmetric with respect to $l$ and $m$, we assume with no loss of generality that $l\le m$. Applying the $q-$binomial identity $(z;q)_t = \sum_{s=0}^t \left[ \begin{array}{c} t \\ s \end{array} \right] (-z)^sq^{s(s-1)/2}$, we expand the factor $(q)_m/(q)_{m-t} = (q^{m-t+1};q)_t$. Then the right hand side becomes $$\sum_{t=0}^l\sum_{s=0}^t(-1)^sq^{(m-t)(l-t+s)+s(s+1)/2} \frac{(q)_l}{(q)_{l-t}(q)_s(q)_{t-s}}.$$ By eliminating $t$ by setting $t=s+i$, this is written as $$\sum_{i=0}^l \left[ \begin{array}{c} l \\ i \end{array} \right]q^{(m-i)(l-i)} \sum_{s=0}^{l-i} \left[ \begin{array}{c} l-i \\ s \end{array} \right] (-q^{i-l+1})^sq^{s(s-1)/2}.$$ The $q-$binomial identity tells that the sum over $s$ is equal to $(q^{i-l+1};q)_{l-i}=\delta_{il}$. \[rema:eigen\] Set $u = \sum_p p$, where the sum extends over all the monic monomials in ${\mathcal P}_N$ for any $N \ge 0$. Then Proposition \[pra:TT\] and Proposition \[pra:norm1\] tell that $T(u) = u$. Conversely, this property and Proposition \[pra:TT\] imply Proposition \[pra:norm1\] since ${\mathcal N}(p) = (p,u) = (T(p),T(u)) = (T(p),u) = {\mathcal N}(T(p))$. Bethe ansatz {#subsec:ba} ------------ Consider the commuting family of transfer matrices $T_m(z)\; (m \in \Z_{\ge 1})$ constructed from the fusion $R$ matrix $R^{(m,1)}(z)$ (\[eqa:Rm1w\]). Normalize them so that $T_m(z)p_{\rm vac} = p_{\rm vac}$. Then the time evolution $T$ of our quantized box-ball system belongs to the family as $T=T_\infty(1)$. It therefore shares the eigenvectors with the simplest one $T_1(z)$, which corresponds to the well known six vertex model [@Bax]. A slight peculiarity here is that we work on ${\mathcal P}$, which implies an infinite system from the onset under a fixed boundary condition. The Bethe ansatz result is adapted to such a circumstance as follows: $$\begin{aligned} &T_m(z) \vert \xi_1, \ldots, \xi_N \rangle_B = \lambda_m(z,\xi_1) \cdots \lambda_m(z,\xi_N) \vert \xi_1, \ldots, \xi_N \rangle_B,\\ & \vert \xi_1, \ldots, \xi_N \rangle_B = \sum_{i_1 < \cdots < i_N}C_{i_1,\ldots,i_N}(\xi_1,\ldots, \xi_N) \vert i_1, \ldots, i_N \rangle,\\ &C_{i_1,\ldots,i_N}(\xi_1,\ldots, \xi_N) = \sum_{P \in \mathfrak{S}_N} \text{sign}(P) \bigl(\prod_{j < k}A_{P_j,P_k}\bigr) \xi^{i_1}_{P_1}\cdots \xi^{i_N}_{P_N},\\ &A_{j,k} = q \eta_j - q^{-1}\eta_k,\quad \eta_i = \frac{1-q\xi_i}{\xi_i - q},\quad \lambda_m(z,\xi_i) = \frac{q^m + \eta_i z}{1+q^m\eta_i z},\end{aligned}$$ where $N$ is an arbitrary nonnegative integer, $\vert \cdots \rangle_B \in {\mathcal P}_N$ is the joint eigenvector of Bethe, and $\vert i_1, \ldots, i_N \rangle$ is the monic monomial describing the ball configuration at positions $i_1, \ldots, i_N$. The sum over $P$ runs over the symmetric group $\mathfrak{S}_N$, and $\text{sign}(P) = \pm 1$ denotes the signature of $P$. The above result holds for $q \in {\mathbb R}$ such that $-1 < q < 1$ and $z \in \C$ such that $\vert z q \vert < 1$. The parameters $\xi_1, \ldots, \xi_N$ should be all distinct for the Bethe vector not to vanish. 
They are to be taken from $\exp(\sqrt{-1}{\mathbb R})$ to match the condition (ii) in (\[eqa:pdef\]), but otherwise arbitrary free from the Bethe equation. One sees that $\lambda_m(z,\xi_i)$ tends to $\eta_i z$ in the limit $q^m \rightarrow 0$ in agreement with (\[eqa:homo\]) with $n=2$. The one particle eigenvalue $\lambda_m(z) = \lambda_m(z,\xi_i)$ satisfies the degenerate $T$ system $\lambda_m(zq)\lambda_m(zq^{-1}) = \lambda_{m+1}(z)\lambda_{m-1}(z)$. Except the obvious $N=1$ case, it is not known to us whether the property $T(u) = u$ in Remark \[rema:eigen\] can be deduced from the Bethe ansatz result quoted here. \[rema:inv\] In terms of $T_m(z)$ considered here and its transposition defined similarly to section \[subsec:norm\], Proposition \[pra:TT\] is the $m \rightarrow \infty$ case of ${}^tT_m(z^{-1}) = T_m(z)^{-1}$ derivable from the inversion relation (\[eqa:inv\]). $D^{(1)}_n$ case {#sec:D} ================= $R$ matrix $R(z)$ {#subsec:R11d} ------------------ Let $J = \{1,2,\cdots, n, -n, -n+1, \cdots -1 \}$ be the set equipped with an order $1 \prec 2 \prec \cdots \prec n \prec -n \prec \cdots \prec -2 \prec -1$. In the following, elements of $2n \times 2n$ matrices with indices from $J$ are arranged in the increasing order with respect to $\prec$ from the top left. We use the notation $$\xi = q^{2n-2},\qquad \bar i = \begin{cases} i & i>0, \\ i+2n & i<0. \end{cases}$$ Let $V = \oplus_{\mu \in J} \C v_{\mu}$ be the vector representation of $U_q(D_n^{(1)})$. The $R$ matrix $R(z) \in {\rm End }(V \ot V)$ was obtained in [@B; @J]. Here we start with the following convention: $$\begin{aligned} \label{Dn-R} \begin{split} R(z) &= a(z) \sum_{k}E_{k k} \otimes E_{k k} + b(z) \sum_{j \neq k} E_{j j} \otimes E_{k k} + c(z)\left(z\sum_{j \prec k }+\sum_{j \succ k}\right) E_{kj} \otimes E_{jk} \\ & ~~~~~~~~ + (z-1)(1-q) \sum_{j,k} f_{jk}(z) E_{j k} \otimes E_{-j\; -k}, \end{split}\end{aligned}$$ where the sums extend over $J$ and $E_{ij}v_k = \delta_{jk}v_i$. $$\begin{aligned} \label{Dn-BW} \begin{split} &a(z) = (1-q^2z)(1-\xi z), ~ b(z) = q(1-z)(1-\xi z), ~ c(z) = (1-q^2)(1-\xi z), \\ &f_{jk}(z) = \begin{cases} q + \xi z & j=k, \\ (1+q) (-1)^{j+k}q^{\bar{k}-\bar{j}} & j \prec k, \\ (1+q) (-1)^{j+k}q^{\bar{k}-\bar{j}}\xi z & j \succ k. \end{cases} \end{split}\end{aligned}$$ The $R$ matrix satisfies the Yang-Baxter equation. We denote by $\sigma$ the automorphism of $V$ acting as $\sigma v_{\pm 1} = v_{\mp 1},\; \sigma v_{\pm n} = v_{\mp n}$, and $\sigma v_\mu = v_\mu$ for $\mu \neq \pm 1, \pm n$. Fusion $R$ matrix and its limit {#subsec:Rmd} -------------------------------- As the $A_{n-1}^{(1)}$ case, we set $V_1 = V$ and realize the space $V_m$ of the $m$ fold $q-$symmetric tensors as the quotient $V^{\otimes m}/A$, where $A = \sum_j V^{\otimes j} \ot {\rm Im}PR(q^{-2}) \ot V^{\ot m-2-j}$. The basis of ${\rm Im}PR(q^{-2})$ can be taken as $$\begin{aligned} &v_i\ot v_j - q v_j \ot v_i, \text{ for } i \prec j, \; i \neq \pm j, \\ &v_1 \ot v_{-1} - q^2 v_{-1}\ot v_1,\quad v_n \ot v_{-n} - v_{-n}\ot v_n, \\ &v_j \ot v_{-j} - v_{-j} \ot v_j - qv_{-j-1}\ot v_{j+1} + q^{-1}v_{j+1}\ot v_{-j-1}, \text{ for } 1 \le j \le n-1.\end{aligned}$$ A vector of the form $v_{i_1} \ot v_{i_2} \ot \cdots \ot v_{i_m}$ is called normal ordered if $-1 \succeq i_1 \succeq \cdots \succeq i_m \succeq 1$ and the sequence $i_1, \ldots, i_m$ does not contain the letters $n$ and $-n$ simultaneously. The set of normal ordered vectors $v_{i_1} \ot v_{i_2} \ot \cdots \ot v_{i_m} \mod A$ form the basis of $V_m$. 
We label them as $x=[x_1,\ldots,x_n, x_{-n},\ldots,x_{-1}]$, where $x_i \in \Z_{\ge 0}$ is the number of the letter $i$ in the sequence $i_1, \ldots, i_m$. Thus $x_1 + \cdots + x_{-1} = m$ and $x_n x_{-n} = 0$ hold in accordance with the label in [@KKM]. In $V^{\ot m}$ normal ordering is done according to the local rule $\mod \hbox{Im} PR(q^{-2})$: $$\label{eqd:no} \begin{split} &v_1 \ot v_{-1} = q^2 v_{-1}\ot v_1,\quad v_i\ot v_j = q v_j \ot v_i \quad i \prec j, \; i \neq \pm j, \\ &v_j \ot v_{-j} = q^2 v_{-j}\ot v_j - (1-q^2)\sum_{i=1}^{j-1} (-q)^{j-i}v_{-i}\ot v_i\quad 2 \le j \le n-1, \\ &v_n \ot v_{-n} = v_{-n}\ot v_n = -\sum_{i=1}^{n-1} (-q)^{n-i}v_{-i}\ot v_i. \end{split}$$ Then the fusion $R$ matrix $R^{(m,1)}(z)$ is the restriction of the operator (\[eqd:Rcomp\]) to ${\rm End}(V_m \ot V)$. For $x \in V_m $ and $\mu \in J$ we set $$R^{(m,1)}(z)(x \ot v_\mu) = \sum_{\nu \in J, y \in V_m} w_{\mu \nu}[x \vert y](y \ot v_\nu).$$ Due to the weight conservation the matrix element $w_{\mu \nu}[x \vert y]$ is zero unless $$\label{eqd:wt} {\rm wt }(x) + {\rm wt }(v_\mu) = {\rm wt }(y) + {\rm wt }(v_\nu),$$ where the weights may be regarded as elements in $\Z^n$ by $$\label{eqd:wtdef} \begin{split} &{\rm wt }([x_1,\ldots,x_n, x_{-n},\ldots,x_{-1}]) = (x_1-x_{-1},\ldots, x_n - x_{-n}),\\ &{\rm wt }(v_\mu) = (0\ldots,0, \overset{\vert \mu \vert{\rm th}}{\pm 1},0,\ldots,0)\;\; \text{ for } \pm \mu >0. \end{split}$$ Leaving the calculation of $w_{\mu \nu}[x \vert y]$ in general case aside, we present the result for the limit $$\label{eqd:Wlim} W_{\mu \nu}[x \vert y] := \lim_{x_{-n} \rightarrow \infty} w_{\mu \nu}[x \vert y].$$ Note that one necessarily has $x_n = y_n = 0$ by the weight reason. Therefore $x$ appearing in $W_{\mu \nu}[x \vert y]$ is to be understood as the array $(x_1,\ldots, x_{n-1}, x_{-n+1}, \ldots, x_{-1})$ that does not contain the $\pm n$ components, and the same applies to $y$ as well. For positive integers $j$ and $k$ such that $j \le k$ we use the symbols $$x_{j,k} = x_j + x_{j+1} + \cdots + x_k, \quad x_{-j, -k} = x_{-j} + x_{-j-1} + \cdots + x_{-k}.$$ They are to be understood as zero for $j>k$. Derivation of $W_{\mu \nu}[x \vert y]$ is outlined in Appendix \[appD:W\]. We summarize the result in \[prd:W\] Suppose $j,k,l \in \{1, 2, \ldots, n-1\}$. 
The nonzero matrix elements $W_{\mu \nu}[x \vert y]$ are exhausted by the following list: $$\begin{aligned} \begin{split} &W_{\pm j,\pm j}[x \vert x] = -zq^{x_j + x_{-j}+1}, \\ &W_{j<k}[x \vert x-(-j)+(-k)] = (-1)^{j+k} z(1-q^{2x_{-j}})q^{k-j+x_j + x_{-j-1,-k+1}}, \\ &W_{j>k}[x \vert x+(j)-(k)] = z(1-q^{2x_k})q^{x_{k+1,j-1} + x_{-k}}, \\ &W_{j,k}[x\vert x-(l)-(-l)+(j)+(-k)]_{l < \min(j,k)} \\ & ~~ = (-1)^{k+l+1} z(1-q^{2x_l})(1-q^{2x_{-l}}) q^{k-l-1+x_{l+1,j-1}+x_{-l-1,-k+1}}, \end{split}\end{aligned}$$ $$\begin{aligned} &W_{-j>-k}[x\vert x+(-j)-(-k)] = z(1-q^{2x_{-k}})q^{x_j + x_{-j-1,-k+1}}, \\ &W_{-j<-k}[x\vert x+(k)-(j)] = (-1)^{j+k} z(1-q^{2x_{j}})q^{j-k+x_{k+1,j-1} + x_{-k}}, \\ &W_{-j,-k}[x\vert x+(l)+(-l)-(j)-(-k)]_{l < \min(j,k)} \\ & ~~ = (-1)^{j+l+1} z(1-q^{2x_j})(1-q^{2x_{-k}}) q^{j-l-1+x_{l+1,j-1} + x_{-l-1,-k+1}},\end{aligned}$$ $$\begin{aligned} \begin{split} &W_{-j,k}[x\vert x-(j)+(-k)] = (-1)^{j+k} z^2(1-q^{2x_j})q^{j+k-2+x_{1,j-1}+x_{-1,-k+1}}, \\ &W_{j,-k}[x \vert x+(j)-(-k)] = (1-q^{2x_{-k}})q^{x_{1,j-1}+x_{-1,-k+1}}, \end{split}\end{aligned}$$ $$\begin{aligned} &W_{n,k}[x\vert x-(-n)+(-k)] = (-1)^{n+k} z^2 q^{n+k-2+x_{1,n-1}+x_{-1,-k+1}}, \\ &W_{n,-k}[x \vert x-(-n)+(k)] = (-1)^{n+k} zq^{n-k+x_{k+1,n-1}+x_{-k}}, \\ &W_{n,-k}[x \vert x+(l)+(-l)-(-n)-(-k)]_{l < k} \\ & ~~= (-1)^{n+l+1} z(1-q^{2x_{-k}})q^{n-l-1+x_{l+1,n-1}+x_{-l-1,-k+1}},\end{aligned}$$ $$\begin{aligned} &W_{-n,k}[x \vert x+(-n)-(k)] = z(1-q^{2x_k})q^{x_{k+1,n-1}+x_{-k}}, \\ &W_{-n,k}[x \vert x-(l)-(-l)+(-n)+(-k)]_{l<k}\\ & ~~= (-1)^{k+l+1} z(1-q^{2x_l})(1-q^{2x_{-l}}) q^{k-l-1+x_{l+1,n-1}+x_{-l-1,-k+1}}, \\ &W_{-n,-k}[x \vert x+(-n)-(-k)] = (1-q^{2x_{-k}})q^{x_{1,n-1}+x_{-1,-k+1}},\end{aligned}$$ $$\begin{aligned} &W_{j,n}[x \vert x-(-j)+(-n)] = (-1)^{j+n}z(1-q^{2x_{-j}})q^{n-j+x_j+ x_{-j-1,-n+1}}, \\ &W_{j,n}[x \vert x-(l)-(-l)+(j)+(-n)]_{l<j}\\ &~~= (-1)^{l+n+1} z(1-q^{2x_l})(1-q^{2x_{-l}}) q^{n-l-1+x_{l+1,j-1}+x_{-l-1,-n+1}}, \\ &W_{-j,n}[x \vert x-(j)+(-n)] = (-1)^{j+n} z^2(1-q^{2x_j})q^{n+j-2+x_{1,j-1}+x_{-1,-n+1}},\end{aligned}$$ $$\begin{aligned} &W_{j,-n}[x \vert x+(j)-(-n)] = q^{x_{1,j-1}+x_{-1,-n+1}}, \\ &W_{-j,-n}[x \vert x-(-n)+(-j)] = zq^{x_j + x_{-j-1,-n+1}}, \\ &W_{-j,-n}[x \vert x-(j)-(-n)+(l)+(-l)]_{l<j}\\ & ~~= (-1)^{j+l+1} z(1-q^{2x_j})q^{j-l-1+x_{l+1,j-1}+x_{-l-1,-n+1}},\end{aligned}$$ $$\begin{aligned} &W_{n,n}[x \vert x] = z^2q^{2n-2+x_{1,n-1}+x_{-1,-n+1}}, \\ &W_{-n,-n}[x \vert x] = q^{x_{1,n-1}+x_{-1,-n+1}}, \\ &W_{n,-n}[x \vert x-2(-n)+(l)+(-l)] = (-1)^{n+l+1} zq^{n-l-1+x_{l+1,n-1}+x_{-l-1,-n+1}}, \\ &W_{-n,n}[x \vert x+2(-n)-(l)-(-l)] \\ &~~= (-1)^{n+l+1} z(1-q^{2x_l})(1-q^{2x_{-l}}) q^{n-l-1+x_{l+1,n-1}+x_{-l-1,-n+1}}.\end{aligned}$$ Here the notation $y = x + (l)+(-l)-(j)-(-k)$ for example means that $y$ is obtained from $x$ by setting $x_l \rightarrow x_l + 1, x_{-l} \rightarrow x_{-l} + 1, x_j \rightarrow x_j - 1, x_{-k} \rightarrow x_{-k} - 1$. Since $x_{-n}$ becomes irrelevant in the limit (\[eqd:Wlim\]), $(-n)$ in the argument of $W_{\mu \nu}$ may just be dropped. It has been included in the above formulas as a reminder of the conservation of the number of components. The matrix elements of the form $W_{\mu \nu}[x \vert x - (\lambda) \pm \cdots]$ with any $\lambda \in \{\pm 1, \ldots, \pm(n-1) \}$ contain the factor $1-q^{2x_{\lambda}}$ as they should. $L$ operator $L(z)$ {#subsec:Ld} -------------------- We consider the Weyl algebra generated by $P^{\pm 1}_\mu, Q^{\pm 1}_\mu$ with $\mu \in J \setminus \{\pm n\}$ under the same relation as (\[eqa:pqcom\]). 
The subalgebra of the Weyl algebra generated by $P_\mu, Q_\mu$ and $R_\mu=Q^{-1}_{\mu}(1-a_\mu P^2_{\mu})$ with $\mu \in J \setminus \{\pm n\}$ will again be denoted by ${\mathcal A}$, where $a_\mu$ is a parameter. We define the $L$ operator $L(z)=(L_{\mu\nu}(z))_{\mu,\nu \in J} \in {\mathcal A} \ot {\rm End }(V)$ so that $L_{\mu \nu}(z) \in {\mathcal A}$ with $\forall a_\mu = 1$ becomes the operator version of $W_{\nu \mu}[x \vert y]$ in Proposition \[prd:W\]. See (\[eqd:LW\]). To present it explicitly, we assume $1 \le j,k,l \le n-1$ in this subsection. We set $P'_\mu = -qa_\mu P_\mu$ and use the symbols $$\begin{aligned} &P_{j,k} = P_j P_{j+1} \cdots P_{k},\quad P_{-j,-k} = P_{-j}P_{-j-1} \cdots P_{-k},\\ &P'_{j,k} = P'_j P'_{j+1} \cdots P'_{k},\quad P'_{-j,-k} = P'_{-j}P'_{-j-1} \cdots P'_{-k}\end{aligned}$$ for $j \le k$. For $j > k$ they should be understood as $1$. Then $L_{\mu\nu}(z) \in {\mathcal A}$ reads as follows: $$\begin{aligned} &L_{jj}(z) = zP'_jP_{-j} + z\sum_{l=1}^{j-1}R_{-l}P'_{-l-1,-j+1}Q_{-j}R_lP_{l+1,j-1}Q_j,\\ &L_{-j,-j}(z) = zP_jP'_{-j} + z\sum_{l=1}^{j-1}Q_{-l}P_{-l-1,-j+1}R_{-j}Q_lP'_{l+1,j-1}R_j,\\ &L_{k>j}(z) = zR_{-j}P'_{-j-1,-k+1}Q_{-k}P'_j + z\sum_{l=1}^{j-1}R_{-l}P'_{-l-1,-k+1}Q_{-k}R_lP_{l+1,j-1}Q_j,\\ &L_{k<j}(z) = zP_{-k}R_kP_{k+1,j-1}Q_j + z\sum_{l=1}^{k-1}R_{-l}P'_{-l-1,-k+1}Q_{-k}R_lP_{l+1,j-1}Q_j,\end{aligned}$$ $$\begin{aligned} &L_{-k < -j}(z) = zQ_{-j}P_{-j-1,-k+1}R_{-k}P_j + z\sum_{l=1}^{j-1}Q_{-l}P_{-l-1,-k+1}R_{-k}Q_lP'_{l+1,j-1}R_j,\\ &L_{-k > -j}(z) = zP'_{-k}Q_kP'_{k+1,j-1}R_j + z\sum_{l=1}^{k-1}Q_{-l}P_{-l-1,-k+1}R_{-k}Q_lP'_{l+1,j-1}R_j,\\ &L_{k,-j}(z) = z^2P'_{-1,-k+1}Q_{-k}P'_{1,j-1}R_j,\\ &L_{-k,j}(z) = P_{-1,-k+1}R_{-k}P_{1,j-1}Q_j,\end{aligned}$$ $$\begin{aligned} &L_{k,n}(z) = z^2P'_{-1,-k+1}Q_{-k}P'_{1,n-1},\\ &L_{-k,n}(z) = zP'_{-k}Q_kP'_{k+1,n-1} + z\sum_{l=1}^{k-1}Q_{-l}P_{-l-1,-k+1}R_{-k}Q_lP'_{l+1,n-1},\\ &L_{k,-n}(z) = zP_{-k}R_kP_{k+1,n-1} + z\sum_{l=1}^{k-1}R_{-l}P'_{-l-1,-k+1}Q_{-k}R_lP_{l+1,n-1},\\ &L_{-k,-n}(z) = P_{-1,-k+1}R_{-k}P_{1,n-1},\end{aligned}$$ $$\begin{aligned} &L_{n,j}(z) = zR_{-j}P'_{-j-1,-n+1}P'_j + z\sum_{l=1}^{j-1}R_{-l}P'_{-l-1,-n+1}R_lP_{l+1,j-1}Q_j,\\ &L_{n,-j}(z) = z^2P'_{-1,-n+1}P'_{1,j-1}R_j,\\ &L_{-n,j}(z) = P_{-1,-n+1}P_{1,j-1}Q_j,\\ &L_{-n,-j}(z) = zP_jQ_{-j}P_{-j-1,-n+1} + z\sum_{l=1}^{j-1}Q_{-l}P_{-l-1,-n+1}Q_lP'_{l+1,j-1}R_j,\end{aligned}$$ $$\begin{aligned} &L_{n,n}(z) = z^2P'_{1,n-1}P'_{-1,-n+1},\\ &L_{-n,n}(z) = z\sum_{l=1}^{n-1}Q_lP_{-l-1,-n+1}Q_{-l}P'_{l+1,n-1},\\ &L_{n,-n}(z) = z\sum_{l=1}^{n-1}R_{-l}P'_{-l-1,-n+1}R_lP_{l+1,n-1},\\ &L_{-n,-n}(z) = P_{1,n-1}P_{-1,-n+1}.\end{aligned}$$ In these formulas, the operators $P_\mu, Q_\mu, R_\mu$ and $P'_\mu$ appearing in a single summand always have distinct indices hence their ordering does not matter. Factorization of $L(z)$ {#subsec:facLd} ------------------------ For $\mu \in J \setminus \{\pm n\}$, let $K_\mu = ((K_\mu)_{\lambda,\nu})_{\lambda,\nu \in J} \in {\mathcal A}\ot{\rm End }(V)$ be the operator having the elements $$\label{eqd:kdef} \begin{split} &(K_\mu)_{-n,\mu} = (K_\mu)_{-\mu, n} = Q_\mu, \\ &(K_\mu)_{\mu,-n} = (K_\mu)_{n,-\mu} = R_\mu, \\ &(K_\mu)_{-n,-n} = (K_\mu)_{-\mu,-\mu} = P_\mu, \\ &(K_\mu)_{\mu, \mu} = (K_\mu)_{n, n} = P'_\mu, \\ &(K_\mu)_{\nu,\nu} = 1 \quad \nu \neq \pm\mu, \pm n. \end{split}$$ All the other elements are zero. Here $R_\mu = Q^{-1}_{\mu}(1-a_\mu P^2_{\mu})$ and $P'_\mu = -qa_\mu P_\mu$ as in section \[subsec:Ld\]. 
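To make the index pattern in (\[eqd:kdef\]) concrete, here is a minimal sketch (ours, not from the original paper) that tabulates which symbol ($P_\mu$, $P'_\mu$, $Q_\mu$, $R_\mu$ or $1$) occupies each nonzero entry of $K_\mu$, with $J$ ordered as $1 \prec 2 \prec \cdots \prec n \prec -n \prec \cdots \prec -1$; the symbols are kept as labels rather than operators, and the helper names are our own.

```python
# Sketch: nonzero entries of K_mu (as symbol labels), following the definition in the text.
# All entries not listed are zero.

def ordered_index_set(n):
    """J = {1,...,n,-n,...,-1} in the order 1 < 2 < ... < n < -n < ... < -1."""
    return list(range(1, n + 1)) + list(range(-n, 0))

def K_entries(mu, n):
    """Return {(row, col): symbol} for the nonzero entries of K_mu, mu in J without ±n."""
    assert 1 <= abs(mu) <= n - 1
    entries = {
        (-n, mu): "Q_mu", (-mu, n): "Q_mu",     # (K_mu)_{-n,mu} = (K_mu)_{-mu,n} = Q_mu
        (mu, -n): "R_mu", (n, -mu): "R_mu",     # (K_mu)_{mu,-n} = (K_mu)_{n,-mu} = R_mu
        (-n, -n): "P_mu", (-mu, -mu): "P_mu",   # diagonal P_mu entries
        (mu, mu): "P'_mu", (n, n): "P'_mu",     # diagonal P'_mu entries
    }
    for nu in ordered_index_set(n):
        if nu not in (mu, -mu, n, -n):
            entries[(nu, nu)] = "1"             # identity elsewhere on the diagonal
    return entries

if __name__ == "__main__":
    n = 3
    order = ordered_index_set(n)
    for (row, col), sym in sorted(K_entries(1, n).items(),
                                  key=lambda kv: (order.index(kv[0][0]), order.index(kv[0][1]))):
        print(f"(K_1)[{row:>2},{col:>2}] = {sym}")
```

For $n=3$ and $\mu=1$ this lists the eight operator-valued entries together with the identity entries at $(\pm 2,\pm 2)$, matching (\[eqd:kdef\]).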
We also introduce $S_\mu, \bar{S}_\mu \in {\mathcal A}\ot {\rm End }(V)$ for $\mu = 0,\ldots, n$ as follows. First we specify $S_1, \ldots, S_{n-1}$ by $$\begin{aligned} \begin{split} &(S_\mu)_{\mu,\mu} = (S_\mu)_{-\mu-1,-\mu-1} = Q_\mu, \\ &(S_\mu)_{\mu+1,\mu+1} = (S_\mu)_{-\mu,-\mu} = R_\mu, \\ &(S_\mu)_{\mu,\mu+1} = (S_\mu)_{-\mu-1,-\mu} = P_\mu, \\ &(S_\mu)_{\mu+1,\mu} = (S_\mu)_{-\mu,-\mu-1} = P'_\mu, \\ &(S_\mu)_{\nu,\nu} = 1 \quad \nu \neq \pm \mu, \pm (\mu+1), \end{split}\end{aligned}$$ where the other elements are zero. Then $\bar{S}_{\mu} \in {\mathcal A}\ot {\rm End }(V)$ with $1 \le \mu \le n-1$ is obtained from $S_\mu$ by replacing $P_\mu, Q_\mu, R_\mu$ and $P'_\mu$ with $P_{-\mu}, Q_{-\mu}, R_{-\mu}=Q^{-1}_{-\mu}(1-a_{-\mu}P^2_{-\mu})$ and $P'_{-\mu} = -qa_{-\mu}P_{-\mu}$, respectively. Finally the remaining ones are determined by $$\label{eqd:s01} S_0 = \sigma S_1 \sigma, \quad S_n = \sigma S_{n-1} \sigma, \quad \bar{S}_0 = \sigma \bar{S}_1 \sigma, \quad \bar{S}_n = \sigma \bar{S}_{n-1} \sigma,$$ where $\sigma = \sigma^{-1}$ is defined in the end of section \[subsec:R11d\]. The operators $K_\mu$ and $S_\nu, \bar{S}_\nu$ are connected via a gauge transformation analogous to (\[eqa:sdef1\]). To explain it we prepare the Weyl group operators $\sigma_0, \ldots, \sigma_n \in {\rm End }(V)$ which act as identity except $$\begin{aligned} {2} &\sigma_0: v_1 \leftrightarrow v_{-2}, & &v_{-1} \leftrightarrow v_2,\\ &\sigma_i: v_i \leftrightarrow v_{i+1}, & &v_{-i} \leftrightarrow v_{-i-1} \quad 1 \le i \le n-1,\\ &\sigma_n: v_{n-1} \leftrightarrow v_{-n}, & &\quad v_{-n+1} \leftrightarrow v_{n}.\end{aligned}$$ In terms of the sequences $$\begin{aligned} &(i_{2n-2},\ldots, i_2, i_1 ) = (n, n-2, n-3, \ldots, 2, 0, 1, 2, \ldots, n-2, n),\\ &(\mu_{2n-2}, \ldots, \mu_2, \mu_1) = (-n+1, \ldots, -2, -1, 1, 2, \ldots, n-1),\end{aligned}$$ the gauge transformation is given by $$\label{eqd:ks} K_{\mu_k} = \begin{cases} \sigma_{i_1}\cdots \sigma_{i_k}S_{i_k}\sigma_{i_{k-1}}\cdots \sigma_{i_1} & 1 \le k \le n-1,\\ \sigma_{i_1}\cdots \sigma_{i_k} \bar{S}_{i_k}\sigma_{i_{k-1}}\cdots \sigma_{i_1} & n \le k \le 2n-2. \end{cases}$$ We note the relations $$\label{eqd:sigS} \begin{split} &\sigma = \sigma_{i_1} \cdots \sigma_{i_{2n-2}},\\ &\sigma S_i \sigma = S_i, \quad \sigma \bar{S}_i \sigma = \bar{S}_i \quad 1 \le i \le n-1. \end{split}$$ Define the diagonal matrices $$\begin{aligned} &d(z) = z\hbox{ diag}(z^{-1}\overbrace{1, \ldots, 1}^{2n-2},z), \nonumber\\ &D(z) = \sigma_{i_1}\cdots \sigma_{i_{n-1}}d(z) \sigma_{i_{n-1}}\cdots \sigma_{i_1} = z\hbox{ diag}(\overbrace{1, \ldots, 1}^{n-1},z,z^{-1}, \overbrace{1, \ldots, 1}^{n-1}). \label{eqd:dd}\end{aligned}$$ \[prd:facL\] The $L$ operator in section \[subsec:Ld\] is factorized as $$L(z) = K_{-n+1} \cdots K_{-1} D(z) K_{1} \cdots K_{n-1}.$$ Equivalently it is also expressed as $$\begin{aligned} L(z) &= \sigma \bar{S}_{i_{2n-2}} \cdots \bar{S}_{i_{n}} d(z) S_{i_{n-1}} \cdots S_{i_1}\\ &= \bar{S}_{n-1} \bar{S}_{n-2} \cdots \bar{S}_{2}\bar{S}_{1}\sigma d(z) S_1 S_2 \cdots S_{n-2} S_{n}.\end{aligned}$$ The equivalence of the first and the second expressions is due to (\[eqd:ks\]) and (\[eqd:dd\]). The second one and the third are connected by (\[eqd:s01\]) and (\[eqd:sigS\]). The first expression is proved in Appendix \[app:LK\]. \[prd:RLL\] The $L$ operator and the $R$ matrix (\[Dn-R\]) satisfy the same $RLL$ relation as in Proposition \[pra:RLL\]. 
Proposition \[prd:RLL\] is a corollary of Proposition \[prd:facL\] and \[lemd:rss\] $$\begin{aligned} &R(z_2/z_1)(\sigma d(z_1) \ot \sigma d(z_2)) = (\sigma d(z_1) \ot \sigma d(z_2)) R(z_2/z_1),\\ &R(z)\stackrel{2}{S_\mu} \,\stackrel{1}{S_\mu} = \stackrel{1}{S_\mu} \, \stackrel{2}{S_\mu}R(z), \quad 1 \le \mu \le n, \\ &R(z)\stackrel{2}{\bar{S}_\mu} \,\stackrel{1}{\bar{S}_\mu} = \stackrel{1}{\bar{S}_\mu} \, \stackrel{2}{\bar{S}_\mu}R(z), \quad 1 \le \mu \le n. \end{aligned}$$ The first relation is straightforward to check. Next consider the second relation with $1 \le \mu \le n-1$. Comparing the $R$ matrices (\[eqa:r\]) and (\[Dn-R\]), we find that the contributions proportional to $a(z), b(z)$ and $c(z)$ on the both sides are equal due to Lemma \[lema:rss\] for $A^{(1)}_{n-1}$ case. Thus we are to show the equality with $R(z)$ replaced with $\sum_{j,k} f_{jk}(z)E_{j k} \otimes E_{-j\; -k}$. It is easily checked at $z=0$ and $z=\xi^{-1}$ for example, which suffices since $f_{jk}(z)$ is linear in $z$. Then the second relation with $\mu = n$ follows from $\mu=n-1$ case by using $S_n = (\sigma d(z))^{-1}S_{n-1} \sigma d(z)$. The third relation can be shown similarly. As Remark \[rema:K2\], if $a_\mu = 1$, the property $K_\mu^2=\openone_{2n}$ holds for any $\mu \in J \setminus \{ \pm n \}$. Quantized $D^{(1)}_n$ automaton {#subsec:dbbs} -------------------------------- Here we set up the quantized $D^{(1)}_n$ automaton. It is a system of particles and antiparticles on one dimensional lattice whose dynamics is governed by the $L$ operator constructed in section \[subsec:Ld\]. In the limit $q \rightarrow 0$, the dynamics become deterministic and the system reduces to the $D^{(1)}_n$ automaton [@HKT3; @HKT1]. Since our results are parallel with those in subsections \[subsec:qbbs1\] – \[subsec:norm\], we shall only give a brief sketch and omit the details. The space of states ${\mathcal P}$ is given by (\[eqa:pdef\]), where $V$ is now understood as the $2n$ dimensional vector representation $V = \C v_1 \oplus \cdots \oplus \C v_{-1}$. The condition (ii) remains the same while the condition (i) is replaced by $\sum_{k \in \Z} \vert j_k + n \vert < \infty$. Monomials $\cdots \ot v_{j_{-1}}\ot v_{j_0} \ot v_{j_1} \ot \cdots$ can be classified according to the numbers $w_1,\ldots, w_{n}, w_{-n+1},\ldots, w_{-1}$ of occurrence of the letters $1, \ldots, n, -n+1, \ldots, -1$ in the set $\{j_k\}$. Consequently one has the direct sum decomposition ${\mathcal P} = \oplus {\mathcal P}_{w_1,\ldots, w_{-1}}$ analogous to (\[eqa:dsd\]), where ${\mathcal P}_{0,\ldots,0}= \C p_{\rm vac}$ with $p_{\rm vac} = \cdots \ot v_{-n} \ot v_{-n} \ot \cdots$. The local states $v_{j_k} \in V$ is regarded as the $k$th box containing a particle of color $j_k$ if $j_k \in \{\pm 1, \ldots, \pm (n-1)\}$. Particles having colors with opposite signs are regarded as antiparticles of the other. The case $j_k = -n$ is interpreted as an empty box, while $j_k = n$ represents a bound state of a particle and an antiparticle. To formulate the time evolution, we assume $\forall a_\mu = 1$ from now on, and consider the space of the quantum carrier, namely, the ${\mathcal A}$ module ${\mathcal M}$ defined similarly to (\[eqa:M\]). The difference now is that we need $2n-2$ coordinates and to set ${\mathcal M} = \oplus \C [m_1, \ldots, m_{n-1}, m_{-n+1}, \ldots, m_{-1}]$. Then the actions of $P_\mu, Q_\mu, R_\mu$ and $P'_\mu = -qP_\mu$ are again given by (\[eqa:M\]) by simply extending the index $i$ to $\mu = \pm 1, \ldots, \pm(n-1)$. 
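The explicit action (\[eqa:M\]) is not reproduced here; however, with $a_\mu = 1$ the relations $R_\mu = Q^{-1}_{\mu}(1-P^2_{\mu})$ and $P'_\mu = -qP_\mu$, together with the amplitudes collected in Fig. \[fig:Kmu\] below, are consistent with the standard $q$-oscillator action $P|m\rangle = q^m|m\rangle$, $Q|m\rangle = |m+1\rangle$, $R|m\rangle = (1-q^{2m})|m-1\rangle$ on a single coordinate $m \in \Z_{\ge 0}$. The following sketch (ours; the precise conventions are an assumption to be checked against (\[eqa:M\])) encodes this action on truncated matrices and verifies $QR = 1-P^2$ and $PQ = qQP$ for the assumed convention.

```python
import numpy as np

def q_oscillator(q, M=8):
    """Truncated matrices for the assumed action on the basis |0>,...,|M-1>:
    P|m> = q^m |m>,  Q|m> = |m+1>,  R|m> = (1 - q^(2m)) |m-1>,  P'|m> = -q^(m+1) |m>."""
    P = np.diag([q**m for m in range(M)])
    Q = np.zeros((M, M)); Q[np.arange(1, M), np.arange(M - 1)] = 1.0                   # raising operator
    R = np.zeros((M, M)); R[np.arange(M - 1), np.arange(1, M)] = [1 - q**(2 * m) for m in range(1, M)]
    Pp = -q * P                                                                        # P' = -qP at a_mu = 1
    return P, Q, R, Pp

if __name__ == "__main__":
    q = 0.3
    P, Q, R, Pp = q_oscillator(q)
    I = np.eye(P.shape[0])
    print(np.allclose(Q @ R, I - P @ P))   # Q R = 1 - P^2, i.e. R = Q^{-1}(1 - P^2)
    print(np.allclose(P @ Q, q * Q @ P))   # P Q = q Q P for this representation
```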
By construction we have $$\label{eqd:LW} L(z)(x \ot v_\mu) = \sum_{\nu \in J,\, y\in {\mathcal M}} W_{\mu \nu}[x \vert y](y \ot v_\nu)$$ for $x \in {\mathcal M}$. Here the sum over $y$ is taken under the constraint (\[eqd:wt\]), where the weight $\text{wt}$ should now be understood as (\[eqd:wtdef\]) without the $n$th component. The time evolution $T(z): {\mathcal P} \rightarrow {\mathcal P}$ is also given by the same formula (\[eqa:defT\]), where $(\cdots )_{0,0}$ now signifies the element in ${\rm End }({\mathcal P})$ corresponding to the transition from $[\overbrace{0,\ldots,0}^{2n-2}]$ to itself in the ${\mathcal M}$ part. From (\[eqd:dd\]) one has $T(z)p = z^{w_1+\cdots+w_{n-1}+2w_n+ w_{-n+1}+\cdots+w_{-1}}T(1)p$ for $p \in {\mathcal P}_{w_1,\ldots, w_{-1}}$. The power of $z$ is the total number of particles and antiparticles, for $v_n$ represents a bound state of a particle and an antiparticle. As it turns out, the total number is conserved, which implies the commutativity $T(z)T(z') = T(z')T(z)$. We concentrate on $T=T(1)$ henceforth. The propagation operators ${\mathcal K}_\mu$ for $\mu = \pm 1, \ldots, \pm(n-1)$ are defined in the same way as the product of $K_\mu$ acting locally. This time the local interactions and their amplitudes implied by (\[eqd:kdef\]) are depicted in Fig. \[fig:Kmu\]. [Fig. \[fig:Kmu\] (picture environment omitted): nine local vertices of $K_\mu$, arranged in columns labelled $P_\mu$, $R_\mu$, $Q_\mu$, $P'_\mu$ and $1$, with amplitudes $q^{m_\mu}$, $1-q^{2m_\mu}$, $1$, $-q^{m_\mu+1}$ and $1$, respectively; the carrier coordinate changes as $m_\mu \rightarrow m_\mu,\, m_\mu-1,\, m_\mu+1,\, m_\mu,\, m_\mu$, the box states in the top row involve $-n$ and $\mu$, those in the bottom row involve $-\mu$ and $n$, and the last column acts as the identity on any box state $\nu$.] Here ${m_\mu} \in \Z_{\ge 0}$ is a coordinate in $[m_1,\ldots, m_{n-1},m_{-n+1},\ldots, m_{-1}] \in {\mathcal M}$, meaning the number of color $\mu$ particles on the carrier. The top five diagrams are essentially the same as Fig. \[fig:Ki\] for the $A^{(1)}_{n-1}$ case, where color $\mu$ particles on the carrier (horizontal line) behave according to the presence or absence of another color $\mu$ particle in a local box. (The empty box $-n$ here corresponds to $n$ in the $A^{(1)}_{n-1}$ case.) The bottom four vertices are new. The second one there is the pair annihilation of a color $\mu$ particle on the carrier and the antiparticle $-\mu$ in the box to form the bound state $n$. The third one is the pair creation of $\mu$ and $-\mu$ from the bound state $n$. At $q=0$ the amplitudes for $P_\mu$ with $m > 0$, $R_\mu$ with $m=0$ and $P'_\mu$ vanish and the other ones become 1. As a result they reduce to the deterministic rule that agrees with the one in [@HKT3]. In parallel with Theorem \[th:facT\], the time evolution of the quantized $D^{(1)}_n$ automaton admits the factorization into the propagation operators. \[thd:facT\] $$T = {\mathcal K}_{-n+1}\cdots{\mathcal K}_{-1} {\mathcal K}_{1}\cdots{\mathcal K}_{n-1}.$$ This is a consequence of Proposition \[prd:facL\].
It extends a part of the earlier result at $q=0$ based on the crystal basis theory [@HKT2; @HKT3], where the time evolutions of a class of soliton cellular automata were factorized. Finally we state properties of the amplitude for $T$. Define the transposition ${}^tT$ of $T$, the subspace ${\mathcal P}_{\text{fin}}$ and the linear function ${\mathcal N}: {\mathcal P}_{\text{fin}} \rightarrow \C$ in the same manner as section \[subsec:norm\]. \[prod:matome\] Proposition \[pra:TT\] and Proposition \[pra:norm1\] are both valid also for the quantized $D^{(1)}_n$ automaton. In view of the factorization of $T$, it is enough to show the claim for any one of the propagation operators, say ${\mathcal K}_1$. Namely ${}^t{\mathcal K}_1 = {\mathcal K}^{-1}_1$ and ${\mathcal N}({\mathcal K}_1(p)) = {\mathcal N}(p)$. Then without a loss of generality one may restrict the space of states to ${\mathcal P}_{w_1,\ldots, w_{-1}}$ with all $w_\mu$ being zero except $w_{\pm 1}$ and $w_n$. Let $\pi$ be the map that embeds the local states into that for $A^{(1)}_1$ as $\pi(v_1) = \pi(v_n) = \bullet$ (a ball) and $\pi(v_{-1}) = \pi(v_{-n}) = \circ$ (an empty box), where we have used the notation in section \[subsec:norm\] for $A^{(1)}_1$. Let further $\phi$ be the map sending the pair of local states for $A^{(1)}_1$ and $D^{(1)}_n$ to that for the latter as $$\begin{aligned} {4} \phi(\circ,v_1) &= v_{-n}, & \;\;\phi(\circ,v_{-1}) &= v_{-1}, & \;\;\phi(\circ,v_{-n}) &= v_{-n}, & \;\;\phi(\circ,v_n) &= v_{-1}, \\ \phi(\bullet,v_{1}) &= v_{1}, & \phi(\bullet,v_{-1}) &= v_{n}, & \phi(\bullet,v_{-n}) &= v_{1}, & \phi(\bullet,v_{n}) &= v_{n}.\end{aligned}$$ The componentwise action of these maps will also be denoted by the same symbol. For example, if $p = \cdots \ot v_{-n} \ot v_{-1} \ot v_{-n} \ot \cdots$ and $p' = \cdots \ot \circ \ot \bullet \ot \circ \ot \cdots$ in the corresponding position, one has $\pi(p) = \cdots \ot \circ \ot \circ \ot \circ \ot \cdots$ and $\phi(p',p) = \cdots \phi(\circ, v_{-n}) \ot \phi(\bullet, v_{-1}) \ot \phi(\circ, v_{-n}) \ot \cdots = \cdots \ot v_{-n} \ot v_n \ot v_{-n} \ot \cdots$. Denoting the propagation operator for $A^{(1)}_1$ by ${\mathcal K}^A_1$, one has the embedding ${\mathcal K}_1(p) = \phi\bigl({\mathcal K}^A_1(\pi(p)),p\bigr)$. With the aid of this relation, the statements are reduced to the $A^{(1)}_1$ case established in section \[subsec:norm\]. Proof of Proposition \[prd:W\] {#appD:W} ============================== The simplifying feature of the limit $x_{-n} \rightarrow \infty$ (\[eqd:Wlim\]) is that one can decompose $w_{\mu\nu}[x \vert y]$ into three parts effectively. To see this suppose $x \in V_m$ is in normal order $$(v_{-1})^{\ot x_{-1}} \ot \cdots \ot (v_{-n+1})^{\ot x_{-n+1}} \ot (v_{-n})^{\ot x_{-n}} \ot (v_{n-1})^{\ot x_{n-1}} \ot \cdots \ot (v_{1})^{\ot x_{1}}.$$ Application of (\[eqd:Rcomp\]) for $D^{(1)}_n$ to this generates a variety of vectors $y = v_{j_1} \ot \cdots \ot v_{j_m}$. However in the limit $x_{-n} \rightarrow \infty$ under consideration, the vectors $v_1, \ldots, v_{n}$ are not allowed to appear in the left side of the segment $v_{-n} \ot \cdots \ot v_{-n}$ since they acquire the factor of order $q^{x_{-n}}$ in the course of normal ordering. See (\[eqd:no\]). Similarly, $v_{-1}, \ldots, v_{-n+1}$ are forbidden to show up in the right side of $v_{-n} \ot \cdots \ot v_{-n}$. 
In this way $W_{\mu\nu}[x \vert y]$ is effectively decomposed into the right, left and the infinitely large central parts, where the allowed indices are limited to $\{1,\ldots, n-1\}$, $\{-1,\ldots, -n+1\}$ and $-n$, respectively. Taking the situation into account, we derive $W_{\mu\nu}[x \vert y]$ (\[eqd:Wlim\]) in three steps. In [*Step 1*]{}, we compute all the matrix elements $w_{\mu\nu}[x \vert y]$ for $x$ of the form $x=i^m = \overbrace{v_i \ot \cdots \ot v_i}^m$, which serves as a building block for general $x$. In [*Step 2*]{}, we obtain the limits of $w_{\mu\nu}[x \vert y]$ that are relevant to the three parts separately. In [*Step 3*]{}, we glue the three parts together. [*Step 1*]{}. \[lemd:wm\] All the matrix elements of the form $w_{j,k}[i^m \vert y]$ are zero except the following: $$\begin{aligned} \label{w-1} &w_{i,i}[i^m \vert i^m] = (1-q^{m+1}z)(1-q^{m-1}\xi z) \quad \forall i, \\ \label{w-2} &w_{-i,-i}[i^m \vert i^m] = (q^{m-1}-z)(q^{m+1}-\xi z) \qquad \forall i, \\ \label{w-3} &w_{j,j}[i^m \vert i^m] = q(q^{m-1}-z)(1-q^{m-1}\xi z) \quad j \neq \pm i\, \; \forall i, \\ \label{w-4} &w_{j,i}[i^m \vert i^{m-1},j] = (1-q^{2m})(1-q^{m-1}\xi z) \times \begin{cases} 1 & i \succ j, \; j \neq \pm i\\ z & i \prec j, \; j \neq \pm i \end{cases} \quad \forall i, \\ \label{w-5} &w_{-i,j}[i^m \vert -j,i^{m-1}] = (-1)^{i+j+1}(1-q^{2m})(q^{m-1}-z)q^{\bar{j} + \bar{i}-2}\times \begin{cases} z & 1 \preceq j \prec -i \\ \xi^{-1} & -i \prec j \preceq -1 \end{cases}\quad \forall i, \\ \label{w-6} &w_{-i, i}[i^m \vert i^{m-2},j, -j] = (-1)^{i+j+1} q^{n-j-1}(1-q^{2m})(1-q^{2m-2}\xi)z \quad i = \pm n,\; 1 \le j \le n-1, \\ \label{w-7} \begin{split} &w_{-i,i}[i^m \vert -i,i^{m-1}] \\ &= \begin{cases} q^{m-1}(1-q^{2m})(1-q^{m-1}\xi z+q^{2i-1-m}(z-q^{m-1}))z & 1 \leq i \leq n-1\\ (1-q^{2m})(1-q^{m-1}\xi z+q^{2i+1+m}\xi(z-q^{m-1})) & -n+1 \preceq i \preceq -1 \end{cases}\quad i \neq \pm n, \end{split} \\ \label{w-8} \begin{split} &w_{-i,i}[i^m \vert i^{m-2},j,-j] \\ &= \begin{cases} (-1)^{i+j+1}q^{i-j-1}(1-q^{2m})(1-q^{2m-2})(1-q^{m-1}\xi z)z & 1 \leq i \leq n-1,\; 1 \le j < i\\ (-1)^{i+j}q^{i-j+1} \xi (1-q^{2m})(1-q^{2m-2})(q^{m-1}-z) & -n+1 \preceq i \preceq -1, \; 1 \le j < \vert i \vert. \end{cases} \end{split}\end{aligned}$$ In these formulas for $w_{j,k}[i^m \vert y]$, $y$ should be understood as a normal ordered vector in $V_m$ having the specified contents of the letters. [*Sketch of the proof*]{}. The first four, -, are straightforward to check. The other formulas – are shown in this order by induction on $m$. Here we illustrate it for . Let us write the $R$-matrix as $ R(z) = \sum_{i,j,k,l} r[i,k;j,l](z) E_{ji} \ot E_{lk}. $ For simplicity $w_{j,k}[i^m \vert y](z)$ will be denoted by $w_{j,k}[y](z)$. We treat the case $1 \leq j < i \leq n-1$. The result for $m=2$ can be checked directly. Assume – up to $m$. 
The fusion construction leads to the following recursion relation for $m \geq 3$: $$\begin{aligned} w_{-i,i}&[i^{m-1},j,-j](z) a(zq^{m-2}) \\ = & ~ \underline{q} ~r[i,i;i,i](z q^m) ~w_{-i,i}[i^{m-2},j,-j](z q^{-1}) \\ &+ \sum_{\alpha \neq i, \alpha=j+1}^{n-1} \underline{(-1)^{1+\alpha+j}(1-q^2) q^{\alpha-j+m-1}} ~ r[i,\alpha;\alpha,i](z q^m) ~ w_{-i,\alpha}[i^{m-1},-\alpha](z q^{-1}) \\ &+ \underline{(-1)^{i+j+1}(1-q^2) q^{i-j+m-1}}r[i,i;i,i](z q^m) ~ w_{-i,i}[i^{m-1},-i](z q^{-1}) \\ & + \underline{q^{m+1}}~ r[i,j;j,i](z q^m) ~ w_{-i,j}[i^{m-1},-j](z q^{-1}) \\ & + r[i,-j;-j,i](z q^m) ~ w_{-i,-j}[i^{m-1},j](z q^{-1}) \\ & + \underline{(-1)^{n+j+1}q^{n-j+m-1}} \bigl( ~r[i,-n;-n,i](z q^m) ~ w_{-i,-n}[i^{m-1},n](z q^{-1}) \\ & ~~~~~~~~~~~~~~~~~~~~~~~ + r[i,n;n,i](z q^m) ~ w_{-i,n}[i^{m-1},-n](z q^{-1}) ~ \bigr), \end{aligned}$$ where the underlined factors come from the normal ordering. To check that – satisfy this is easy. $\square$ [*Step 2*]{}. As explained in the beginning of the appendix, we investigate the three parts that constitute the limit $W_{\mu\nu}[x \vert y]$ separately. First we consider the right part. \[lemd:ue\] Set $w^\prime_{\mu\nu}[x \vert y] = w^\prime_{\mu\nu}[x \vert y](z) = w_{\mu\nu}[x \vert y]/a(z)$ and $m_1 = x_{1,n-1}$. Suppose $x$ and $y$ have the form $x=[x_1,\ldots,x_{n-1},0,\ldots,0]$ and $y=[y_1,\ldots,y_{n-1},y_{-n},0,\ldots,0]$, respectively. Then the nonzero case of the limit $\lim_{z \rightarrow \infty}w^\prime_{\mu\nu}[x \vert y]$ is given by $$\begin{split} &w^\prime_{\pm j,\pm j}[x \vert x] \to q^{-m_1 \pm x_j} ~(1 \leq j \leq n), \\ &w^\prime_{j,k}[x \vert x+(j)-(k)] \to -(1-q^{2 x_k})q^{-m_1-1+x_{k+1,j-1}} ~(1 \leq k < j \leq n-1), \\ &w^\prime_{-j,-k}[x \vert x-(j)+(k)] \to (-1)^{j+k}(1-q^{2 x_j})q^{-m_1+j-k-x_{j,k}} ~(1 \leq j < k \leq n-1), \\ &w^\prime_{-n,k}[x \vert x+(-n)-(k)] \to -(1-q^{2 x_k})q^{-1-x_{1,k}} ~(1 \leq k \leq n-1), \\ &w^\prime_{-j,n}[x \vert x-(j)+(-n)] \to (-1)^{j+n}(1-q^{2 x_j})q^{-m_1+j-n-x_{j,n-1}} ~(1 \leq j \leq n-1). \end{split}$$ [*Sketch of the proof*]{}. We illustrate the derivation of the second case. From the fusion construction one gets $$\begin{aligned} \begin{split} &w^\prime_{j,k}[x \vert x+(j)-(k)](z q^{m_1-1}) = q^{x_{k+1,j-1}} \frac{w_{j,k}[k^{x_k} \vert k^{x_k-1},j](zq^{x_k-1})} {a(z q^{2(x_k-1)})} \\ & ~~~~~ \times \Bigl( \prod_{i=1}^{k-1} \frac{w_{k,k}[i^{x_i}\vert i^{x_i}](z q^{2 x_k+2x_{1,i-1}+x_i-1})} {a(z q^{2x_k+2(x_{1,i}-1)})} \prod_{i=k+1}^{n-1} \frac{w_{k,k}[i^{x_i}\vert i^{x_i}](z q^{2x_{1,i-1}+x_i-1})} {a(z q^{2(x_{1,i}-1)})} \Bigr), \end{split} \end{aligned}$$ where the factor $q^{x_{k+1,j-1}}$ is due to normal ordering. Substituting and , one finds that this tends to the desired form in the limit $z \rightarrow \infty$. $\square$ Next we deal with the central part. \[lemd:naka\] Nonzero limit $q^{x_{-n}} \rightarrow 0$ of $w_{\mu \nu}[(-n)^{x_{-n}} \vert y]$ is given by $$\begin{split} &w_{n,n}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}}] \to \xi z^2, \\ &w_{-n,-n}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}}] \to 1, \\ &w_{n,-n}[(-n)^{x_{-n}} \vert -j,(-n)^{x_{-n}-2},j] \to (-1)^{j+n+1} q^{n-j-1}z, \\ &w_{n,j}[(-n)^{x_{-n}} \vert -j,(-n)^{x_{-n}-1}] \to (-1)^{j+n} q^{n+j-2}z^2, \\ &w_{n,-j}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}-1},j] \to (-1)^{j+n} q^{n-j}z, \\ &w_{\pm j,\pm j}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}}] \to -q z, \\ &w_{j,-n}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}-1},j] \to 1, \\ &w_{-j,-n}[(-n)^{x_{-n}} \vert -j,(-n)^{x_{-n}-1}] \to z, \end{split}$$ where $1 \leq j \leq n-1$. 
Straightforward calculation based on Lemma \[lemd:wm\]. Finally for the left part, the following is verified similarly to Lemma \[lemd:ue\]. \[lemd:shita\] Suppose $x$ and $y$ have the form $x=[0,\ldots,0,x_{-n+1},\ldots,x_{-1}]$ and $y=[0,\ldots,0,y_{-n},y_{-n+1},\ldots,y_{-1}]$. Then the nonzero case of the limit $\lim_{z \rightarrow 0}w_{\mu\nu}[x \vert y]$ is given by $$\begin{aligned} \begin{split} &w_{\pm j,\pm j}[x \vert x] \to q^{m_2 \pm x_{-j}} ~(1 \leq j \leq n), \\ &w_{j,k}[x \vert x-(-j)+(-k)] \to (-1)^{j+k+1}(1-q^{2 x_{-j}})q^{m_2 +k-j-1+x_{-j-1,-k+1}} ~(1 \leq j < k \leq n), \\ &w_{-j,-k}[x \vert x+(-j)-(-k)] \to (1-q^{2 x_{-k}})q^{m_2 -x_{-k,-j}} ~(1 \leq k < j \leq n), \end{split}\end{aligned}$$ where $m_2 = x_{-1,-n+1}$. [*Step 3*]{}. We demonstrate the gluing procedure with two examples. First we derive the 4th case in Proposition \[prd:W\], $W_{i,l}[x \vert x+(i)-(j)-(-j)+(-l)]$. This is calculated as the simple product of the three parts: $$\begin{aligned} \begin{split} &w^\prime_{i,j}[x \vert x+(i)-(j)](z q^{-m+m_1}) w_{j,j}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}}](z q^{m_1-m_2}) \\ & ~~~~~ \times w_{j,l}[x^\prime \vert x^\prime-(-j)+(-l)](z q^{m-m_2}), \end{split}\end{aligned}$$ which is nonzero for $1 \leq j \leq \min(i,l)$. For $j<i<l$, it is calculated by multiplying the second one in Lemma \[lemd:ue\], the 6th of Lemma \[lemd:naka\] and the second of Lemma \[lemd:shita\], leading to $$\begin{aligned} &-(1-q^{2 x_j})q^{-m_1-1+x_{j+1,i-1}}\times (-z q^{1+m_1-m_2}) \times (-1)^{j+l+1}(1-q^{2 x_{-j}})q^{m_2+l-j-1+x_{-j-1,-l+1}} \\ &~~ = (-1)^{j+l+1} z (1-q^{2 x_j}) (1-q^{2 x_{-j}}) q^{l-j-1+x_{j+1,i-1}+x_{-j-1,-l+1}}. \end{aligned}$$ This agrees with the sought result. Second we consider the 9th case in Proposition \[prd:W\], $W_{i,-k}[x \vert x+(i)-(-k)]$. This matrix element is obtained by collecting several contributions as $$\begin{aligned} \begin{split} &\Bigl( \underline{q^{x_{i+1,n-1}}} w^\prime_{i,i}[x \vert x](z q^{-m+m_1}) \, w_{i,-n}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}-1},i](z q^{m_1-m_2}) \\ & ~~~~~ + \sum_{j=1}^{i-1} \underline{q^{x_{j+1,n-1}+1}} w^\prime_{i,j}[x \vert x+(i)-(j)](z q^{-m+m_1}) \, w_{j,-n}[(-n)^{x_{-n}} \vert (-n)^{x_{-n}-1},j](z q^{m_1-m_2}) \Bigr) \\ & ~~~~~ \times w_{-n,-k}[x^\prime \vert x^\prime-(-k)+(-n)](z q^{m-m_2}), \end{split}\end{aligned}$$ where we have set $x=[x_1,\ldots, x_{n-1},0,\ldots,0]$ and $x'=[0,\ldots,0,x_{-n+1},\ldots,x_{-1}]$. The underlined factors come from normal ordering. In the limit $x_{-n} \rightarrow \infty$, this is evaluated by using the first two of Lemma \[lemd:ue\], the 7th of Lemma \[lemd:naka\] and the last of Lemma \[lemd:shita\] as $$\begin{aligned} &\Bigl( q^{x_{i,n-1}} - \sum_{j=1}^{i-1}(1-q^{2x_j})q^{x_{j+1,i-1}+x_{j+1,n-1}} \Bigr) (1-q^{2x_{-k}}) q^{m_2-x_{-k,-n+1} -m_1}.\end{aligned}$$ The sum leads to the result $(1-q^{2x_{-k}}) q^{x_{1,i-1}+x_{-1,-k+1}}$. $\square$ Proof of Proposition \[prd:facL\] {#app:LK} ================================= Let $L_n\Bigl[\begin{matrix} P^\prime & R \\ Q & P \end{matrix}\Bigr]$ be the $L$ operator $L(z)$ for $A_{n-1}^{(1)}$ with $z=1$ defined in (\[eqa:L-elements\]). The $L$ operator with $P_i$ and $P'_{i}$ interchanged for all $i \in \{1,\ldots, n-1\}$ will be denoted by $L_n\Bigl[\begin{matrix} P & R \\ Q & P^\prime \end{matrix}\Bigr]$. A similar convention is applied also for the other interchanges like $R_i \leftrightarrow Q_i$, etc. 
A matrix $\bar{L}_n[\cdots]$ is the one obtained from $L_n[\cdots]$ by changing $X_i \,(X=P, P', Q, R)$ into $X_{-i}$ for all $i \in \{1,\ldots, n-1\}$. Matrices $L_n^+[\cdots]$ and $\bar{L}^+_n[\cdots]$ are the ones obtained from $L_n[\cdots]$ and $\bar{L}_n[\cdots]$ respectively by the replacement $X_{\pm i} \rightarrow X_{\pm(i+1)}$ for all $i \in \{1,\ldots, n-1\}$. For any square matrix $M$ we let $\Tilde{M}$ denote the one obtained by reversing the order of rows and columns simultaneously. \[lemd:ind\] $$\begin{aligned} &\begin{pmatrix} P_1^\prime & & R_1\\ & \openone_{n-1}\\ Q_1 & & P_1 \end{pmatrix} \begin{pmatrix} 1 \\ & L_n^+ \Bigl[\begin{matrix} P^\prime & R \\ Q & P \end{matrix}\Bigr]\\ \end{pmatrix} = L_{n+1}\Bigl[\begin{matrix} P^\prime & R \\ Q & P \end{matrix}\Bigr],\\ &\begin{pmatrix} P_1^\prime & & R_1\\ & \openone_{n-1}\\ Q_1 & & P_1 \end{pmatrix} \begin{pmatrix} \Tilde{L}_n^+ \Bigl[\begin{matrix} P & Q \\ R & P^\prime \end{matrix}\Bigr]\\ & 1 \end{pmatrix} = \Tilde{L}_{n+1}\Bigl[\begin{matrix} P & Q \\ R & P^\prime \end{matrix}\Bigr], \\ &\begin{pmatrix} 1 \\ & ^t\bar{L}_n^+ \Bigl[\begin{matrix} P & R \\ Q & P^\prime \end{matrix}\Bigr]\\ \end{pmatrix} \begin{pmatrix} P_{-1} & & Q_{-1}\\ & \openone_{n-1}\\ R_{-1} & & P_{-1}^\prime \end{pmatrix} = ~ ^t\bar{L}_{n+1}\Bigl[\begin{matrix} P & R \\ Q & P^\prime \end{matrix}\Bigr]. \\ &\begin{pmatrix} ^t\Tilde{\bar{L}}_n^+ \Bigl[\begin{matrix} P^\prime & Q \\ R & P \end{matrix}\Bigr]\\ & 1 \\ \end{pmatrix} \begin{pmatrix} P_{-1} & & Q_{-1}\\ & \openone_{n-1}\\ R_{-1} & & P_{-1}^\prime \end{pmatrix} = ~ ^t\Tilde{\bar{L}}_{n+1}\Bigl[\begin{matrix} P^\prime & Q \\ R & P \end{matrix}\Bigr]. \end{aligned}$$ Here ${}^t$ means the transposition. The first relation is just (\[eqa:kpro\]). The second relation is obtained from the first one by taking $\,\tilde{\;}\,$ and the interchanges $P \leftrightarrow P', Q \leftrightarrow R$. See Remark \[rema:com\]. The third relation follows from the first one by ${}^t\,\bar{\;}\,$ and $P \leftrightarrow P'$. The last one follows from the third one by $\,\tilde{\;}\,$ and $P \leftrightarrow P', Q \leftrightarrow R$. \[lemd:Kpro\] $$\begin{aligned} \label{DnAn-L1} &K_1 \cdots K_{n-1} = \rho \begin{pmatrix} L_n\Bigl[\begin{matrix} P^\prime & R \\ Q & P \end{matrix}\Bigr] & \\ & \Tilde{L}_n\Bigl[\begin{matrix} P & Q \\ R & P^\prime \end{matrix}\Bigr] \end{pmatrix} \rho, \\ \label{DnAn-L2} &K_{-n+1} \cdots K_{-1} = \begin{pmatrix} {}^t\bar{L}_n \Bigl[\begin{matrix} P & R \\ Q & P^\prime \end{matrix}\Bigr] & \\ & {}^t\Tilde{\bar{L}}_n \Bigl[\begin{matrix} P^\prime & Q \\ R & P \end{matrix}\Bigr] \end{pmatrix},\end{aligned}$$ where $\rho \in {\rm End}(V)$ denotes the interchange $v_n \leftrightarrow v_{-n}$. We use induction on $n$. The $n=3$ case is checked by a direct calculation. Assume and are fulfilled up to $n$. 
Then the left hand side of for $n+1$ is $$\begin{aligned} &K_1 K_2 \cdots K_{n} \nonumber \\ &~~ = \begin{pmatrix} P_1^\prime & & & R_1 & & \\ & \openone_{n-1}\\ & & P_1^\prime & & & R_1 \\ Q_1 & & & P_1 & & \\ & & & & \openone_{n-1}\\ & & Q_1 & & & P_1 \end{pmatrix} \rho \begin{pmatrix} 1 \\ & L_n^+ \Bigl[\begin{matrix} P^\prime & R \\ Q & P \end{matrix}\Bigr]\\ & & \Tilde{L}_n^+ \Bigl[\begin{matrix} P & Q \\ R & P^\prime \end{matrix}\Bigr]\\ & & & 1 \end{pmatrix} \rho \nonumber \\ & ~~ = \rho \begin{pmatrix} P_1^\prime & & R_1 & & & \\ & \openone_{n-1}\\ Q_1 & & P_1 & & & \\ & & & P_1^\prime & & R_1 \\ & & & & \openone_{n-1}\\ & & & Q_1 & & P_1 \end{pmatrix} \begin{pmatrix} 1 \\ & L_n^+ \Bigl[\begin{matrix} P^\prime & R \\ Q & P \end{matrix}\Bigr]\\ & & \Tilde{L}_n^+\Bigl[\begin{matrix} P & Q \\ R & P^\prime \end{matrix}\Bigr]\\ & & & 1 \end{pmatrix} \rho. \end{aligned}$$ Owing to the first two relations in Lemma \[lemd:ind\], this coincides with the right hand side of (\[DnAn-L1\]) for $n+1$. Similarly the induction assumption leads to the following expression for the left hand side of for $n+1$: $$\begin{aligned} &K_{-n} K_{-n+1} \cdots K_{-1} \nonumber \\ &~~ = \begin{pmatrix} 1 \\ & ^t\bar{L}_n^+ \Bigl[\begin{matrix} P & R \\ Q & P^\prime \end{matrix}\Bigr]\\ & & ^t\Tilde{\bar{L}}_n^+ \Bigl[\begin{matrix} P^\prime & Q \\ R & P \end{matrix}\Bigr]\\ & & & 1 \end{pmatrix} \begin{pmatrix} P_{-1}& & Q_{-1}\\ & \openone_{n-1} \\ R_{-1} & & P_{-1}^\prime\\ & & & P_{-1} & & Q_{-1}\\ & & & & \openone_{n-1}\\ & & & R_{-1} & & P_{-1}^\prime \end{pmatrix}.\end{aligned}$$ Again the product can be computed by using the latter two relations in Lemma \[lemd:ind\], yielding the right hand side of (\[DnAn-L2\]) for $n+1$. This completes the induction. [*Proof of Proposition \[prd:facL\]*]{} The product $K_{-n+1} \cdots K_{-1} D(z) K_{1} \cdots K_{n-1}$ can be calculated by using Lemma \[lemd:Kpro\], (\[eqd:dd\]) and (\[eqa:L-elements\]). The result agrees with the $L(z)$ defined in section \[subsec:Ld\]. $\square$ Acknowledgements {#acknowledgements .unnumbered} ================ The authors thank Taichiro Takagi and Yasuhiko Yamada for discussion. A.K. thanks Murray Batchelor, Vladimir Bazhanov, Vladimir Mangazeev and Sergey Sergeev for a warm hospitality at the Australian National University during his stay in March 2004. A.K. and M.O. are partially supported by Grand-in-Aid for Scientific Research JSPS No.15540363 and No.14540026, respectively from Ministry of Education, Culture, Sports, Science and Technology of Japan. [99]{} R. J. Baxter, Exactly solved models in statistical mechanics, Academic Press, London (1982). V. V. Bazhanov, [Integrable quantum systems and classical Lie algebras]{}, Comm. Math. Phys. [**113**]{} (1987) 471–503. V. V. Bazhanov and Yu. G. Stroganov, [Chiral Potts model as a descendant of the six-vertex model]{}, J. Stat. Phys. [**59**]{} (1990) 799–817. K. Fukuda, M. Okado, Y. Yamada, [Energy functions in box ball systems]{}, Int. J. Mod. Phys. A [**15**]{} (2000) 1379–1392. G. Hatayama, K. Hikami, R. Inoue, A. Kuniba, T. Takagi and T. Tokihiro, [The $A^{(1)}_M$ Automata related to crystals of symmetric tensors]{}, J. Math. Phys. [**42**]{} (2001) 274-308. G. Hatayama, A. Kuniba, and T. Takagi, [Soliton cellular automata associated with crystal bases]{}, Nucl. Phys. B[**577**]{}\[PM\] (2000) 619–645. G. Hatayama, A. Kuniba, and T. Takagi, [Factorization of combinatorial $R$ matrices and associated cellular automata]{}, J. Stat. Phys. [**102**]{} (2001) 843–863. G. 
Hatayama, A. Kuniba, and T. Takagi, [Simple algorithm for factorized dynamics of ${\ensuremath{\mathfrak{g}_n}}$-automaton]{}, J. Phys. A: Math. Gen.[**34**]{} (2001) 10697–10705. G. Hatayama, A. Kuniba, M. Okado, T. Takagi and Y. Yamada, [Scattering rules in soliton cellular automata associated with crystal bases]{}, Contemporary Math. [**297**]{} (2002) 151–182. K. Hikami, R. Inoue and Y. Komori, [Crystallization of the Bogoyavlensky lattice]{}, J. Phys. Soc. Jpn. [**68**]{} (1999) 2234–2240. M. Jimbo, [Quantum $R$ matrix for the generalized Toda system]{}, Comm. Math. Phys. [**102**]{} (1986) 537–547. S-J. Kang, M. Kashiwara and K. C. Misra, [Crystal bases of Verma modules for quantum affine Lie algebras]{}, Compositio Math. [**92**]{} (1994) 299–325. S. M. Khoroshkin and V. N. Tolstoy, [Universal $R$-matrix for quantized (super) algebras]{}, Commun. Math. Phys. [**141**]{} (1991) 599–617. A. N. Kirillov and N. Yu. Reshetikhin, [$q$-Weyl group and a multiplicative formula for universal R-matrices]{}, Commun. Math. Phys. [**134**]{} (1990) 421–431. P. P. Kulish, N. Yu. Reshetikhin and E. K. Sklyanin, [Yang-Baxter equations and representation theory. I]{}, Lett. Math. Phys. [**5**]{} (1981) 393–403. A. Kuniba, T. Takagi and A. Takenouchi, [Factorization, reduction and embedding in integrable cellular automata]{}, J. Phys. A [**37**]{} (2004) 1691–1709. A. Kuniba, M. Okado, T. Takagi and Y. Yamada, [Geometric crystal and tropical $R$ for $D^{(1)}_n$]{}, Int. Math. Res. Notices [**48**]{} (2003) 2565–2620. A. Kuniba, M. Okado, T. Takagi and Y. Yamada, [Tropical $R$ and tau functions]{}, Commun. Math. Phys. [**245**]{} (2004) 491–517. A. Nakayashiki and Y. Yamada, Kostka polynomials and energy functions in solvable lattice models, Selecta Mathematica, New Ser. [**3**]{} (1997) 547-599. Ya. S. Soibelman, [Quantum Weyl group and some of its applications]{}, Rend. Circ. Mat. Palermo Suppl. [**26**]{} (1991) 233–235. D. Takahashi, [On some soliton systems defined by using boxes and balls]{}, Proceedings of the International Symposium on Nonlinear Theory and Its Applications (NOLTA ’93), (1993) 555–558. D. Takahashi and J. Matsukidaira, [Box and ball system with a carrier and ultra-discrete modified KdV equation]{}, J. Phys. A [**30**]{} (1997) L733 – L739. D. Takahashi and J. Satsuma, [A soliton cellular automaton]{}, J. Phys. Soc. Jpn. [**59**]{} (1990) 3514–3519. T. Tokihiro, D. Takahashi, J. Matsukidaira and J. Satsuma, [From soliton equations to integrable cellular automata through a limiting procedure]{}, Phys. Rev. Lett. [**76**]{}, (1996) 3247–3250.
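As a quick numerical check of the summation identity established in Lemma \[lema:qpol\] above (our sketch, not part of the paper; we take $(q)_k = \prod_{i=1}^{k}(1-q^i)$ with $(q)_0 = 1$, the convention assumed throughout), the following evaluates the right hand side for a range of $l$, $m$ and confirms that it equals $1$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q_pochhammer(q, k):
    """(q)_k = (1-q)(1-q^2)...(1-q^k), with (q)_0 = 1."""
    out = 1.0
    for i in range(1, k + 1):
        out *= 1.0 - q**i
    return out

def identity_rhs(q, l, m):
    """Right hand side of the identity in Lemma [lema:qpol]; should equal 1."""
    total = 0.0
    for t in range(0, min(l, m) + 1):
        total += (q**((l - t) * (m - t)) * q_pochhammer(q, l) * q_pochhammer(q, m)
                  / (q_pochhammer(q, t) * q_pochhammer(q, l - t) * q_pochhammer(q, m - t)))
    return total

if __name__ == "__main__":
    q = 0.37
    print(all(abs(identity_rhs(q, l, m) - 1.0) < 1e-12 for l in range(8) for m in range(8)))
```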
--- abstract: 'It is shown that in bilayer conducting structures in crossed electric and magnetic fields of a special configuration (the fields should have opposite signs in the adjacent layers) the dependence of the energy of a pair of equally charged carriers on the momentum of the pair has a local minimum. This minimum corresponds to a bound state of the pair. The local minimum is separated from the absolute minimum by a large energy barrier, which provides stability of the bound state with respect to various scattering processes. If the number of pairs is macroscopic, a phase transition into a metastable superconducting state may take place.' address: | B. I. Verkin Institute for Low Temperature Physics and Engineering National Academy of Sciences of Ukraine, Lenin av. 47 Kharkov 61103 Ukraine\ e-mail: [email protected] author: - 'S. I. Shevchenko, E. D. Vol' title: Coupling of spatially separated carriers in crossed electric and magnetic fields and a possibility of a metastable superconducting state in bilayer systems --- Introduction ============ Up to now, many mechanisms for electron pairing in metals and in various composite semi-metal structures have been proposed. In all those mechanisms the electron pairing is caused by an attraction between electrons due to the exchange of a quantum of a boson field - a phonon, plasmon, magnon, or their combination. After the discovery of high-temperature superconductivity the number of proposed mechanisms has grown significantly [@1; @2]. In this paper we show that a fundamentally different mechanism for binding an electron pair into a bound state exists. This binding is caused by a special configuration of the external electric and magnetic fields in which the pair is situated. Let us consider a three-layer sandwich consisting of two two-dimensional conducting layers separated by a dielectric layer of thickness $d$. Let homogeneous electric fields ${\bf E}=(E,0,0)$ and ${\bf E}=(-E,0,0)$, parallel to the layers, and magnetic fields ${\bf H}=(0,0,H)$ and ${\bf H}=(0,0,-H)$, normal to the layers, be applied to the upper and the lower layer, respectively. The Schroedinger equation for the pair of electrons, one of which belongs to the upper layer (layer 1) and the other one to the lower layer (layer 2), has the form $$\begin{aligned} \Biggl\{ \frac{1}{2m_{*}}\; \Biggl(-i\hbar \nabla _{1} + \frac{1}{2}\; \frac{e}{c} {\bf H} \times {\bf r}_{1}\Biggr)^{2}\; + \; \frac{1}{2m_{*}}\; \Biggl(-i\hbar \nabla _{2} - \frac{1}{2}\; \frac{e}{c} {\bf H} \times {\bf r}_{2}\Biggr)^{2}\; \cr + \; e{\bf E} \Bigl({\bf r}_{1} - {\bf r}_{2}\Bigr)\; + \; \frac{e^{2}}{\epsilon \sqrt{|{\bf r}_{1} - {\bf r}_{2}|^2+d^2}} \Biggr\}\; \Psi({\bf r}_1,{\bf r}_2) \; = \; \varepsilon \Psi({\bf r}_1,{\bf r}_2)\; . \label{1}\end{aligned}$$ In this equation we assume the same effective electron mass $m_{*}$ in both layers and use the symmetric gauge for the vector potential ${\bf A}=\frac{1}{2}{\bf H} \times {\bf r}$. The vectors ${\bf r}_1$ and ${\bf r}_2$ are two-dimensional. The Hamiltonian (\[1\]) differs from the Hamiltonian of an electron-hole pair in homogeneous crossed fields only by the sign of the electron-electron interaction.
Therefore, as in the case of the electron-hole pair(compare with [@3]), the operator $${\bf P}\; = \; -i\hbar \nabla _{1}\; - \; i\hbar \nabla _{2}\; - \; \frac{1}{2}\; \frac{e}{c} {\bf H} \times \Bigl({\bf r}_{1} - {\bf r}_{2}\Bigr)\; \label{2}$$ commutes with the Hamiltonian (\[1\]) (hence, it conserves in time) and their components commute with each other. It allows to parameterize the energy of the electron pair by the momentum [**p**]{} and introduce the dispersion law for the pair. Using the new variables $${\bf R}\; = \; \frac{{\bf r}_{1} + {\bf r}_{2}}{2}\; , \qquad {\bf r}\; = \; {\bf r}_{1} - {\bf r}_{2}\; , \label{3}$$ and, as in [@3], rewriting the wave function in Eq.(\[1\]) in the form $$\Psi \Bigl({\bf r}_{1},{\bf r}_{2} \Bigr)\; = \; \exp\; \Biggl\{i\Biggl({\bf p} + \frac{e}{2c}{\bf H} \times {\bf r}\Biggr) \cdot \frac{{\bf R}}{\hbar}\Biggr\}\; \Phi ({\bf r}-{\bf r}_0)\; , \label{4}$$ where $${\bf r}_0\; = \;\frac{c}{e H^2} {\bf H}\times {\bf p}^\prime \ , \qquad {\bf p}^\prime\; ={\bf p}+ \frac{2 m_{*} c}{H^2} {\bf H} \times {\bf E} \ , \label{5}$$ one can easily check, that the function $\Phi({\bf r})$ should satisfy the equation $$\Biggl\{- \frac{\hbar ^{2}}{m_{*}}\; \frac{\partial ^{2}}{\partial {\bf r}^{2}}\; + \; \frac{e^{2}H^{2}}{4m_{*}c^{2}}\;r^{2}\; + \; \frac{e^{2}}{\epsilon\sqrt{|{\bf r}+{\bf r}_{0}|^{2}+d^{2}}}\; + \ \frac{p^{2}-p^{\prime\; 2}}{4m_{*}}\Biggr\}\; \Phi ({\bf r})\; = \; \varepsilon \Phi ({\bf r})\; . \label{6}$$ In strong magnetic fields $H$, for which the Larmour frequency $\omega_c =eH/m_*c$ multiplied by the Plank constant is much large then the Coulomb energy $e^2/\epsilon\ell$ (where $\ell=(c\hbar/eH)^{1/2}$ is the magnetic length), in zero order approximation one can neglect the energy of the Coulomb interaction. The solution of Eq.(\[6\]), in which the Coulomb energy is omitted, is $$\Phi({\bf r})\; = \; \frac{1}{\sqrt{2\pi} \ell}\; \exp\; \Biggl(-\frac{1}{2}\xi \Biggr)\; \xi ^{\frac{|m|}{2}}\; L_{n}^{|m|}\; (\xi)\; e^{-im\varphi}\; . \label{7}$$ Here $\xi = r^{2}/2\ell^{2}$, $L_{n}^{|m|}$ is the generalized Laguerre polynomial, $n$, $m$, the integer numbers ($n \geq 0$). The eigenvalue, which corresponds to the eigenfunction (\[7\]), minus the quantity ($p^{2}-p^{\prime \ 2})/4m_{*}$ reads as $$\varepsilon _{n,m}\; = \; \frac{eH}{m_{*}c}\; \hbar\; \Bigl(2n\; + \; |m|\; + \; 1\Bigr)\; . \label{8}$$ The ground state of the pair is realized at $n=m=0$. The first order correction in the Coulomb interaction is equal to $$\delta \varepsilon\; = \; \frac{e^{2}}{2\pi \ell^{2}}\; \int \; \frac{\exp\; [- ({\bf r}-{\bf r}_{0})^{2}/2\ell^{2}]}{ \epsilon\sqrt{r^{2}+d^{2}}}\; d^2 r\; . \label{9}$$ Below we restrict our consideration by the case of small thickness of the dielectric layer, when the inequality $d\ll \ell$ is fulfilled. In this case the integrals in (\[9\]) can be evaluated analytically. The result is $$\delta \varepsilon\; = \; \Biggl( \frac{\pi}{2} \Biggr)^{\frac{1}{2}}\; \frac{e^{2}}{\epsilon\ell}\; \exp \; \Biggl(-\frac{r_{0}^{2}}{4\ell^{2}} \Biggr)\; I_{0}\; \Biggl(\frac{r_{0}^{2}}{4\ell^{2}}\Biggr)\; . \label{10}$$ Here $I_0(z)$ is the modified Bessel function. If one introduces the electron drift velocity in crossed fields ${\bf u}\; =\; c{\bf E} \times {\bf H}/H^{2}$, then, in the ground state $n=m=0$ the total energy of the electron pair is equal to $$\varepsilon\; =\; \hbar \omega _{c}\; +\; {\bf u}\cdot {\bf p}\; -\; \frac{m_{*}c^{2}E^{2}}{H^{2}}\; +\; \delta \varepsilon \Bigl( {\bf p} - 2m_{*}{\bf u}\Bigr)\; . 
\label{11}$$ To investigate the dependence of the energy of the pair on its momentum, we take into account that for the fields [**E**]{} and [**H**]{} specified above the vector [**u**]{} has the form ${\bf u}=(0, -\frac{cE}{H},0)$. Therefore, we also take the momentum of the pair to be ${\bf p}=(0, p, 0)$. Let us assume that the field $E$ is so small that the condition $$\frac{e^{2}}{\epsilon c\hbar}\; \gg\; \frac{E}{H}\; \label{12}$$ is satisfied. One can easily find that the function $\varepsilon(p)$ has a local minimum and a local maximum (both at $p<0$). If the inequality (\[12\]) is satisfied, the maximum lies in the region where $r_{0}(p)/\ell\ll 1$, while the minimum lies in the region $r_{0}(p)/\ell\gg 1$. This allows one to use the well-known approximations for the Bessel function $I_0$ in those regions. After simple calculations we find that the minimum is reached at $p=p_{0}\equiv -\Bigl(e^{3}H^{2}/\epsilon c^{2}E\Bigr)^{1/2}$. At this point the energy is equal to $$\varepsilon _{min}\; =\; \hbar \omega _{c}\; -\; \frac{m_{*}c^{2}E^{2}}{H^{2}}\; +\; \frac{e^{2}}{\epsilon\; \sqrt{e/\epsilon E}}\; . \label{13}$$ The maximum is reached at $p=p_{m}\equiv -2 (2/\pi)^{1/2} \epsilon \hbar ^{2} c E/ e^{2}\ell H$, and the energy there is equal to $$\varepsilon _{max}\; =\; \hbar \omega _{c}\; -\; \frac{m_{*}c^{2}E^{2}}{H^{2}}\; +\; \Biggl(\frac{\pi}{2}\Biggr)^{\frac{1}{2}}\; \frac{e^{2}}{\epsilon \ell}\; +\; \Biggl(\frac{2}{\pi}\Biggr)^{\frac{1}{2}}\; \frac{\epsilon \hbar ^{2}}{e^{2}\ell}\; \Biggl(c\; \frac{E}{H}\Biggr)^{2}\; . \label{14}$$ The function $\varepsilon(p)$ is plotted in Fig. \[fig\]. As follows from Eq. (\[2\]), at the minimum the following relation between $p_0$ and the size of the pair $r_0$ (the average in-plane distance between the electrons) holds: $|p_{0}|=eHr_{0}/c$. It means that $r_{0}=(e/\epsilon E)^{1/2}$. If the inequality (\[12\]) is satisfied, the size of the pair $r_0$ greatly exceeds the magnetic length $\ell$. The physical reason for the binding of two particles with charges of the same sign is the following. The Coulomb force repels the particles, while the electric fields, directed oppositely in the adjacent layers, tend to push the particles closer together (for the appropriate sign of the fields). Since the kinetic energy of the electrons is quenched by the strong magnetic field, the size of the pair $r_0$ is found from the minimum of the potential energy $$U\; \equiv \; eEr\; +\; \frac{e^{2}}{\epsilon r}\; \label{15}$$ (at ${\bf E}\parallel {\bf r}$). Although this minimum is only a local one, it is separated from the absolute minimum by a barrier of height of order $e^2/\epsilon \ell$. For this reason, electron pairs with momenta in the vicinity of $p_0$ are stable with respect to collisions with each other and cannot overcome the energy barrier. This circumstance allows one to put a macroscopic number of electron pairs into the state with momentum $p_0$. Since the pairs are bosons, a transition into an unusual superconducting state is possible in a system with purely Coulomb repulsion. The presence of other pairs does not destroy this picture, provided the pairs do not overlap. Hence it follows that the density of the pairs $n$ should satisfy the inequality $nr_{0}^{2}\ll 1$. Substituting $r_0$ into this inequality, we arrive at a restriction from below on the value of the electric field $E$. The restriction from above on $E$ follows from (\[12\]).
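For orientation, the following small numerical sketch (ours, not part of the original paper) evaluates the dispersion law (\[11\]) with the correction (\[10\]) on a momentum grid and locates its extrema; the GaAs-like parameter values for $m_*$, $\epsilon$, $H$ and $E$ are illustrative assumptions chosen so that (\[12\]) holds. It confirms a local minimum close to $p_0$, separated from the maximum by a barrier of order $e^2/\epsilon\ell$.

```python
import numpy as np
from scipy.special import i0e   # i0e(x) = exp(-x) * I0(x)

# Illustrative GaAs-like parameters in Gaussian units (our assumption, not from the paper),
# chosen so that condition (12), e^2/(eps*c*hbar) >> E/H, is satisfied.
hbar, e, c = 1.0546e-27, 4.803e-10, 2.998e10
m_star = 0.067 * 9.109e-28          # effective mass [g]
eps, H, E = 12.9, 1.0e5, 0.5        # dielectric constant, H [G], E [statvolt/cm]

ell = np.sqrt(c * hbar / (e * H))   # magnetic length
u = -c * E / H                      # drift velocity u_y
omega_c = e * H / (m_star * c)
const = hbar * omega_c - m_star * c**2 * E**2 / H**2

def delta_eps(p):                   # first-order Coulomb correction (10), valid for d << ell
    r0 = c * np.abs(p - 2 * m_star * u) / (e * H)
    x = r0**2 / (4 * ell**2)
    return np.sqrt(np.pi / 2) * e**2 / (eps * ell) * i0e(x)

def energy(p):                      # dispersion law (11) with p along y
    return const + u * p + delta_eps(p)

p0 = -np.sqrt(e**3 * H**2 / (eps * c**2 * E))     # predicted position of the minimum
p = np.linspace(5 * p0, 1e-3 * p0, 400001)
eps_p = energy(p)
i_min, i_max = np.argmin(eps_p), np.argmax(eps_p)

print(f"condition (12): e^2/(eps*c*hbar) = {e**2/(eps*c*hbar):.1e}  >>  E/H = {E/H:.1e}")
print(f"local minimum at p = {p[i_min]:.3e},   predicted p0 = {p0:.3e}")
print(f"barrier eps_max - eps_min = {eps_p[i_max]-eps_p[i_min]:.2e} erg,"
      f"  e^2/(eps*ell) = {e**2/(eps*ell):.2e} erg")
```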
Combining these two restrictions, one easily finds that the theory is valid for $$\frac{e}{\epsilon \ell^{2}}\; \gg \; E\; \gg \; \frac{en}{\epsilon}\; . \label{16}$$ In conclusion, we discuss two important questions: how can the required magnetic structure be designed, and how can the state with momentum $p_0$ be reached? To answer the first question we note the following. For instance, Ref. [@4] reported a study of the properties of an electron gas in a periodic magnetic field. Such a field was induced by dysprosium magnetic stripes sputtered on the surface of the conducting layer. One can expect that the required configuration of the magnetic fields can be realized if stripes with a suitable orientation of the magnetic moments are sputtered onto both conducting layers. One of the possibilities for obtaining a state with a large number of pairs with momenta in the vicinity of $p_0$ is the following. Let us assume that a bilayer system in crossed fields of the special configuration considered consists of two subsystems separated by a partition with a hole. As follows from the dispersion law (see Fig. \[fig\]), the right-moving pairs have momenta belonging to branch 2, while the left-moving pairs have momenta belonging to branches 1 and 3. As a result, an excess of branch-2 pairs will accumulate in the right subsystem. Further interaction of the pairs with a thermostat will lower their energy and put them into states with momenta close to $p_0$. At a temperature of order $\hbar^2 n/2 m_*$ the pairs may condense into a long-lived superconducting state, provided the height of the barrier $\sqrt{\pi/2}\,e^2/\epsilon \ell$ is large in comparison with the temperature. [9]{} C. C. Tsuei and J. R. Kirtley, Rev. Mod. Phys. [**72**]{} 969 (2000). V. L. Ginzburg, Phys. Usp. [**43**]{} 573 (2000). L. P. Gor’kov and I. E. Dzyaloshinskii, Zh. Eksp. Teor. Fiz. [**53**]{} 717 (1967) \[Sov. Phys. JETP [**26**]{} 449 (1968)\]. P. D. Ye et al., Phys. Rev. Lett. [**74**]{} 3013 (1995).
--- abstract: 'The fields of artificial intelligence and neuroscience have a long history of fertile bi-directional interactions. On the one hand, important inspiration for the development of artificial intelligence systems has come from the study of natural systems of intelligence, the mammalian neocortex in particular. On the other, important inspiration for models and theories of the brain have emerged from artificial intelligence research. A central question at the intersection of these two areas is concerned with the processes by which neocortex learns, and the extent to which they are analogous to the back-propagation training algorithm of deep networks. Matching the data efficiency, transfer and generalization properties of neocortical learning remains an area of active research in the field of deep learning. Recent advances in our understanding of neuronal, synaptic and dendritic physiology of the neocortex suggest new approaches for unsupervised representation learning, perhaps through a new class of objective functions, which could act alongside or in lieu of back-propagation. Such local learning rules have implicit rather than explicit objectives with respect to the training data, facilitating domain adaptation and generalization. Incorporating them into deep networks for representation learning could better leverage unlabelled datasets to offer significant improvements in data efficiency of downstream supervised readout learning, and reduce susceptibility to adversarial perturbations, at the cost of a more restricted domain of applicability.' author: - | Eilif B. Muller\*\ Philippe Beaudoin\ Element AI\ 6650 Saint-Urbain \#500\ Montreal, QC H2S 3G9\ Canada\ \ \ `*Correspondence to: [email protected]`\ bibliography: - 'refs.bib' title: 'Neocortical plasticity: an unsupervised cake but no free lunch' --- Unsupervised neocortex {#unsupervised-neocortex .unnumbered} ====================== The neocortex is the canonically 6-layered sheet of cells forming the grey matter surface of the mammalian cerebrum. It is composed of a densely interconnected network of sub-regions responsible for learning sensory processing, speech and language, motor planning and many of the higher cognitive processes associated with rational thought. The human neocortex contains an estimated 100 trillion synapses, the points of communication between neurons which undergo persistent changes in strength and topology as a function of signals local to the synapse and a complex biochemical program [@holtmaat2009experience]. These processes, broadly known as synaptic plasticity, are thought to be the basis of learning and memory in the brain. An important task of synaptic plasticity in sensory neocortical areas is to learn disentangled invariant representations [@dicarlo2012does]. For example, the ventral stream of primate visual cortex, the collection of areas responsible for visual object recognition, computes hierarchically organized representations much like state-of-the art convolutional neural networks (CNNs) optimized for the task [@yamins2014performance]. While there are impressive similarities in the learned representations between the ventral stream and CNNs, there are important differences in *how* those representations are learned. While CNNs are trained in a supervised manner using a gradient descent optimization algorithm with an explicit global objective on large labelled datasets, the ventral stream learns from a much larger dataset (visual experience) but with only very sparse labelling. 
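As an aside, comparisons of the kind reported in [@yamins2014performance] are typically quantified with representational similarity analysis. The following toy sketch (ours, with synthetic data standing in for CNN features and neural recordings; all names and numbers are illustrative assumptions) shows the basic computation: build a representational dissimilarity matrix for each system and correlate their upper triangles.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    the feature vectors of each pair of stimuli (rows = stimuli)."""
    return 1.0 - np.corrcoef(features)

def rdm_similarity(rdm_a, rdm_b):
    """Compare the upper triangles of two RDMs (plain Pearson here for brevity)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_stimuli, latent_dim = 50, 10
    latent = rng.normal(size=(n_stimuli, latent_dim))           # shared "true" factors
    model_feats = latent @ rng.normal(size=(latent_dim, 128))   # stand-in for CNN features
    neural_feats = latent @ rng.normal(size=(latent_dim, 64)) \
                   + 0.5 * rng.normal(size=(n_stimuli, 64))     # stand-in for recordings
    print(f"model-vs-neural RDM similarity: {rdm_similarity(rdm(model_feats), rdm(neural_feats)):.2f}")
    print(f"model-vs-shuffled control:      {rdm_similarity(rdm(model_feats), rdm(rng.permutation(neural_feats))):.2f}")
```

The shuffled control illustrates what an uninformative match looks like; shared latent structure yields a much higher similarity score.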
The latter property of cortical learning is attractive to emulate in CNNs, and more broadly across deep learning models. Attractive not only because of the ability to make use of unlabelled data during learning, but also because it could endow the models with superior generalization and transfer properties, as discussed below. The monkey’s paw effect: the problem with specifying what without specifying how {#the-monkeys-paw-effect-the-problem-with-specifying-what-without-specifying-how .unnumbered} ================================================================================ A well known and often encountered pitfall of numerical optimization algorithms for high dimensional problems, such as evolutionary algorithms, simulated annealing and also gradient descent, is that they regularly yield solutions matching *what* your objective specifies to the letter, but far from *how* you intended [@lehman2018surprising]. The short story “The Monkey’s Paw” by W. W. Jacobs provides a compelling metaphor. In that story, the new owner of a magical mummified monkey’s paw of Indian origin is granted three wishes. The owner first wishes for \$200, and his wish is eventually granted to the penny, but with the grave side effect that it is granted through a goodwill payment from his son’s employer in response to his untimely death in a terrible machinery accident [@jacobs1910monkey]. The Monkey’s Paw effect is also applicable to gradient descent-based optimization of deep neural nets. The relative data-hungriness of current supervised learning strategies, and the use of data augmentation to improve generalization, reflect the precarious position we are in of having to micromanage the learning process. Adversarial examples [@moosavi2016deepfool] are evidence that the Monkey’s Paw effect nonetheless persists. It is tempting to continue with the current paradigm and re-inject adversarial examples back into the learning data stream. Extrapolating, this goes in the direction of specifying the negative space of the objective: all the things the optimization should not do to solve the problem. That space is potentially infinite, and leaving it underspecified is rather risky in production environments like self-driving cars. Adversarial examples represent an opportunity to address the issue in a more fundamental way [@yamins2016using]. It has been argued by @bengio2012deep that if we could design deep learning systems with the explicit objective of “disentangling the underlying factors of variation” in an unsupervised manner, then there is much to be gained for generalization and transfer. Such an approach offers a promising solution to the Monkey’s Paw effect, as there is an explicit objective of learning good representations, from which generalization and transfer follow by definition.[^1] One small challenge remains: how to express the objective of learning good representations? If we restrict ourselves to the subset of all possible inputs for which the neocortex learns good representations, the local processes of synaptic plasticity may provide valuable clues. Neocortical plasticity {#neocortical-plasticity .unnumbered} ====================== The neocognitron model [@fukushima1980neocognitron], the original CNN architecture, learned visual features through self-organization using local rules. Since its conception, our understanding of the neocortex and its neurons and synapses has progressed considerably.
Recent insights into the local plasticity rules for learning in the neocortex offer new inspiration for deep representation learning paradigms that learn “disentangled representations” from large unlabelled datasets in an unsupervised manner. A selection of recent insights into the systems of plasticity of the neocortex is shown in Fig. \[fig:selection\]. A new dendrite-centric view of synaptic plasticity is emerging with the discovery of the NMDA spike, a non-linear mechanism hypothesized to associate co-activated synapses through potentiation or structural changes driven by the resulting calcium currents [@schiller2000nmda; @graupner2010mechanisms; @holtmaat2009experience] (Fig. \[fig:selection\]A-B). Such associations, in the form of co-coding clusters of synapses, have recently been experimentally observed using optical techniques [@wilson2016orientation] (Fig. \[fig:selection\]C). Moreover, neurons in the neocortex are known to form small cliques of all-to-all connected neurons which drive co-coding [@reimann2017cliques], a process that would be self-reinforced through dendritic clustering by NMDA spikes (Fig. \[fig:selection\]D). Martinotti neurons, which are activated by such cliques of pyramidal neurons and subsequently inhibit pyramidal dendrites [@silberberg2007disynaptic], provide well-timed inhibition to block further NMDA spikes [@doron2017timed], put a limit on the maximal pyramidal clique size, and also suppress activation of competing cliques (i.e., winner-take-all (WTA) dynamics). Together, such plasticity mechanisms appear to form basic building blocks for representation learning in the feed-forward pathway of the neocortex using local learning rules. While long-known competitive strategies for unsupervised representation learning indeed rely on WTA dynamics [@fukushima1980neocognitron; @rumelhart1985feature], deep learning approaches incorporating these increasingly apparent dendritic dimensions of learning processes have yet to be proposed [@poirazi2001impact; @kastellakis2015synaptic]. ![**A selection of recent insights into the dendritic mechanisms of plasticity of the neocortex.** (**A**) Concurrent activation of > 10 nearby synapses in pyramidal neuron dendrites (red) triggers NMDA plateau potentials in dendrites (left). (**B**) Calcium drives synaptic plasticity. Synapses are bi-stable, and can be added or removed in the weak state (above). NMDA plateau potentials drive potentiation of synapses through their associated large calcium currents. (source: @graupner2010mechanisms) (**C**) Clusters of co-coding synapses are captured through these mechanisms. (**D**) Co-coding neurons form small cliques, reinforced through cluster capture. These cliques activate Martinotti cells which block further capture, implementing competition between opposing cliques. (**E**) Neocortical areas are organized in a hierarchy with top-down input arriving in layer 1 (the top-most layer) at the apical tufts of pyramidal dendrites, and at layer 6 and lower layer 5. (**F**) Temporal association of top-down and bottom-up input drives cliques and plasticity.[]{data-label="fig:selection"}](puzzle_pieces.png) Unlike CNNs, the neocortex also has a prominent feedback pathway down the hierarchy, whereby top-down input from upper layers innervates the apical tufts of pyramidal cells in layer 1 of a given cortical region [@felleman1991distributed].
Associations between top-down and feed-forward (bottom-up) activation are known to trigger dendritic calcium spikes and dendritic bursting [@larkum1999new], which again specifically activates the WTA dynamics of the Martinotti neurons [@murayama2009dendritic], but disinhibitory VIP neurons can also modulate their impact [@karnani2016cooperative]. These feed-back pathways have been proposed to implement *predictive coding* [@rao1999predictive], and error back-propagation for supervised learning algorithms [@guerguiev2017towards; @sacramento2018dendritic]. While their importance for rapid object recognition has recently been demonstrated, their precise computational role remains unclear [@kar2019evidence]. Cake but no free lunch {#cake-but-no-free-lunch .unnumbered} ====================== With the demonstrated applicability of supervised learning for a broad range of problems and data distributions, and an ever expanding toolbox of optimized software libraries, it is unlikely that supervised learning, back-propagation and gradient descent will be dethroned as the workhorses of AI for many years to come. Nonetheless, as applications of deep networks are moving into regions where sparse data, generalization and transfer are increasingly important, unsupervised approaches designed with the explicit goal of learning good representations from mere observation may find an important place in the AI ecosystem. Quoting Yann LeCun[^2] > “If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning.” A promising strategy would be to adopt learning with sparse labels, overcoming adversarial examples, transfer learning, and few-shot learning together as the success criteria for the further development of the powerful unsupervised approaches we seek. Recent advances in our understanding of the processes of neocortical plasticity may well offer useful inspiration, but let’s close with some words of moderation. Biology’s solutions also show us there will be no free lunch, i.e. neocortical unsupervised learning algorithms will be less general than supervised learning by gradient descent. Neocortex relies on structure at specific spatial and temporal scales in its input streams to learn representations. Evolution has had millions of years to configure the sensory organs to provide signals to the neocortex in ways that it can make sense of them, and that serve the animal’s ecological niche. We should not expect, for example, cortical unsupervised learning algorithms to cluster frozen white noise images. A neocortical solution requires a neocortical problem (e.g. from the so-called “Brain set” [@richards2019framework]), so if we are to successfully take inspiration from it, we must also work within its limitations. ### Acknowledgments {#acknowledgments .unnumbered} Thanks to Giuseppe Chindemi, Perouz Taslakian, Pau Rodriguez, Isabeau Prémont-Schwarz, Hector Palacios Verdes, Pierre-André Noël, Nicolas Chapados, Blake Richards, and Guillaume Lajoie for helpful discussions. [^1]: For some input spaces, such as white noise, a good representation may be undefined. [^2]: <https://medium.com/syncedreview/yann-lecun-cake-analogy-2-0-a361da560dae>
--- abstract: 'Let $k\in \mathbb{N}\setminus\{0\}$. For a commutative ring $R$, the ring of dual numbers of $k$ variables over $R$ is the quotient ring $R[x_1,\ldots,x_k]/I$, where $I$ is the ideal generated by the set $\{x_ix_j: i,j=1,\ldots,k\}$. This ring can be viewed as $R[\alpha_1,\ldots,\alpha_k]$ with $\operatorname{\alpha}_i \operatorname{\alpha}_j=0$, where $\operatorname{\alpha}_i=x_i+I$ for $i=1,\ldots,k$. We investigate the polynomial functions of $R[\alpha_1,\ldots,\alpha_k]$ whenever $R$ is a finite local ring. We derive counting formulas for the number of polynomial functions and polynomial permutations on $R[\alpha_1,\ldots,\alpha_k]$ depending on the order of the pointwise stabilizer of the subring of constants $R$ in the group of polynomial permutations of $R[\alpha_1,\ldots,\alpha_k]$. Moreover, we show that the stabilizer group of $R$ is independent of the number of variables $k$.' address: | Department of Analysis and Number Theory (5010)\ Technische Universität Graz\ Kopernikusgasse 24/II\ 8010 Graz, Austria author: - 'Amr Ali Abdulkader Al-Maktry' bibliography: - 'PolyFunSev.bib' title: Polynomial functions over dual numbers of several variables --- Introduction ============ Let $R$ be a finite commutative ring with unity. Then a function $F:R\longrightarrow R$ is said to be a polynomial function on (over) $R$ if there exists a polynomial $f\in R[x]$ such that $f(a)=F(a)$ for every $a\in R$. In this case we say that $F$ is the induced function of $f$ on $R$ and $f$ represents (induces) $F$. Moreover, if $F$ is a bijection we say that $F$ is a polynomial permutation and $f$ is a permutation polynomial. If $R$ is a finite field, it can be shown easily by using Lagrange interpolation that every function on $R$ is a polynomial function. Unfortunately, this is no longer the case when $R$ is not a field, and it is somewhat more complicated to study the properties of polynomial functions on such a ring. We denote by ${\mathcal{F}(R)}$ the set of polynomial functions on $R$, which is evidently a monoid under the composition of functions. Moreover, its subset of polynomial permutations forms a group and we denote it by ${\mathcal{P}(R)}$. Kempner [@Residue] was the first mathematician to study polynomial functions on a finite ring that is not a field. He studied extensively the polynomial functions on $\mathbb{Z}_m$, the ring of integers modulo $m$. However, his arguments and results were somewhat lengthy and involved, so for a long time some researchers [@pol1; @pol2; @pol3] followed his work, obtained simpler proofs and contributed to the subject as well. Meanwhile, some others were interested in the group of permutation polynomials modulo $p^n$ [@per1]. Other mathematicians have generalized the concepts of polynomial functions on $\mathbb{Z}_m$ to other rings, for example local principal ideal rings [@Nechaev1980] and Galois rings [@gal]. Later, Frisch [@suit] characterized the polynomial functions of a general class of local rings. Surprisingly, all rings examined in  [@gal; @Nechaev1980; @Residue] are contained in this class. It should be mentioned that around forty years ago some mathematicians studied the properties of polynomial functions on weaker structures such as semigroups [@semi] and monoids [@mon]. In a recent paper [@Haki], the authors considered the polynomial functions of the ring of dual numbers modulo $m$. Dual numbers are not contained in the class of rings covered in [@suit], except for some trivial cases.
In this paper, we are interested in the polynomial functions of the ring of dual numbers of several variables over a finite local ring $R$, that is, the ring $R[x_1,\ldots,x_k]/I$, where $I$ is the ideal generated by the set $\{x_ix_j: i,j\in\{1,\ldots,k\}\}$. We find that the construction of the polynomial functions over such a ring depends on the polynomial functions over $R$. Furthermore, we show that the order of a subgroup of polynomial permutations on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ plays an essential role in the counting formulas for the polynomial functions and the polynomial permutations on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Here is a summary of the paper. Section \[sc2\] contains some basics and notations. In Section \[sc3\], we characterize null polynomials and permutation polynomials on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, and we develop the ideas needed in the last section. Finally, in Section \[sc4\] we introduce the stabilizer group with some of its properties and obtain some counting formulas. Basics {#sc2} ====== In this section, we introduce some definitions and facts that appear in the paper frequently. Throughout this paper, $k$ denotes a positive integer and, for $f\in R[x]$, $f'$ denotes the first formal derivative of $f$. \[equvfun\] Let $S$ be a commutative ring, $R$ an $S$-algebra and $f\in S[x]$. Then: 1. The polynomial $f$ gives rise to a polynomial function on $R$. We use the notation $[f]_R$ for this function. We just write $[f]$ instead of $[f]_R$ when there is no confusion. 2. If $[f]_R$ is a permutation on $R$, then we call $f$ a permutation polynomial on $R$. 3. If $g\in S[x]$ and $[f]_R=[g]_R$, this means that $f$ and $g$ induce the same function on $R$ and we abbreviate this with $f \operatorname{\xspace{ }\triangleq\xspace{ }}g$ on $R$. Clearly, $\operatorname{\xspace{ }\triangleq\xspace{ }}$ is an equivalence relation on $R[x]$. For the case when $S=R$, there is a bijective correspondence between equivalence classes of $\operatorname{\xspace{ }\triangleq\xspace{ }}$ and the polynomial functions on $R$. In particular, if $R$ is finite, then the number of different polynomial functions on $R$ equals the number of equivalence classes of $\operatorname{\xspace{ }\triangleq\xspace{ }}$ on $R[x]$. \[001\] Throughout this paper, when $R$ is a commutative ring, ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ designates the result of adjoining $\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k$ to $R$ with $\operatorname{\alpha}_i\operatorname{\alpha}_j=0$ for $i,j\in \{1,\ldots,k\}$; that is, ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is $R[x_1,\ldots,x_k]/I$, where $I$ is the ideal generated by the set $\{x_ix_j: i,j\in\{1,\ldots,k\}\}$, and $\operatorname{\alpha}_i$ denotes $x_i+I$ for $i=1,\ldots,k$. The ring ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is called the ring of dual numbers of $k$ variables (degree $k$) over $R$. Note that $R$ is canonically embedded as a subring in ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Furthermore, ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is an $R$-algebra. The following proposition summarizes some properties of ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, all of which are straightforward from Definition \[001\]. \[0\] Let $R$ be a commutative ring. Then the following hold. 1.
For $a_0,\ldots,a_k,b_0,\ldots,b_k\in R$, we have: 1. $(a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i)(b_0+\sum\limits_{i=1}^{k}b_i\operatorname{\alpha}_i)=a_0b_0+\sum\limits_{i=1}^{k}(a_0b_i+b_0a_i)\operatorname{\alpha}_i$; 2. $a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i$ is a unit in ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $a_0$ is a unit in $R$. In this case\ $(a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i)^{-1}=a_0^{-1}-\sum\limits_{i=1}^{k}a_0^{-2}a_i\operatorname{\alpha}_i$. 2. ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is a local ring if and only if $R$ is a local ring. 3. If $R$ is a local ring with a maximal ideal $ \mathfrak{m}$ of nilpotency $n$, then ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is a local ring whose maximal ideal $ \mathfrak{m}+\sum\limits_{i=1}^{k}\operatorname{\alpha}_i R$ has nilpotency $n+1$. We use the following lemma frequently. \[02\] \[3\] \[21\] Let $R$ be a commutative ring and $a_0,\ldots,a_k\in R$. 1. If $f\in R[x]$, then $$f(a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i)=f(a_0)+\sum\limits_{i=1}^{k} a_if'(a_0)\operatorname{\alpha}_i.$$ 2. If $f\in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$, then there exist $f_0,\ldots, f_k \in R[x]$ such that $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$ and $$f(a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i)=f_0(a_0)+ \sum\limits_{i=1}^{k}(a_if_0'(a_0)+f_i(a_0))\operatorname{\alpha}_i.$$ \(1) Follows from Taylor expansion and the fact that $\alpha_i\alpha_j=0$ for $i,j=1,\ldots,k$.\ (2) Let $f\in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$. Then $f(x)=\sum\limits_{j=0}^{n}(c_{0\,j}+\sum\limits_{i=1}^{k}c_{i\,j}\operatorname{\alpha}_i)x^j$, where $c_{i\,j}\in R$ for $i=0,\ldots,k$; $j=0,\ldots,n$. So set $f_i=\sum\limits_{j=0}^{n}c_{i\,j}x^j\in R[x]$ for $i=0,\ldots, k$. Hence $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$. The other part follows from (1). The above lemma yields a necessary condition for a function $F:{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}\longrightarrow {{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ to be a polynomial function. Let $F:{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}\longrightarrow {{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. If $F$ is a polynomial function over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, then for every $a_i,b_{j},c_i,d_i\in R, i=0,\ldots,k;j=1,\ldots,k$, such that\ $F(a_0+\sum\limits_{i=1}^{k}a_i \operatorname{\alpha}_i)=c_0+\sum\limits_{i=1}^{k}c_i \operatorname{\alpha}_i$ and $F(a_0+\sum\limits_{i=1}^{k}b_i \operatorname{\alpha}_i)=d_0+\sum\limits_{i=1}^{k}d_i \operatorname{\alpha}_i$, we must have $c_0=d_0$. [@suit].\[1\] Let $R$ be a finite commutative local ring with a maximal ideal $\mathfrak{m}$ and $ {L}\in\mathbb{N}$ minimal with $\mathfrak{m}^{L}=(0)$. We call $R$ *suitable*, if for all $a, b\in R$ and all $l\in\mathbb{N}$, $ab\in\mathfrak{m}^l\Rightarrow a\in \mathfrak{m}^i$ and $b\in \mathfrak{m}^j$ with $i+j\geq$ min$(L,l)$. Let $R$ be a finite local ring. Then ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is suitable if and only if $R$ is a finite field. 
Since $R$ is a local ring with a maximal ideal $\mathfrak{m}$ and nilpotency $n$, ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is a local ring with maximal ideal $\mathfrak{m}_1=\mathfrak{m}+\sum\limits_{i=1}^{k}\operatorname{\alpha}_i R$ and nilpotency $L=n+1$ by Proposition \[0\]. Now, if $R$ is a field, the result follows easily since $\mathfrak{m}_1^2=(0)$. If $n\geq 2$, then $L=n+1>2$, while $\operatorname{\alpha}_1 \in \mathfrak{m}_1$, $\operatorname{\alpha}_1 \notin \mathfrak{m}_1^j$ for $j>1$ and $\operatorname{\alpha}_1^2=0\in\mathfrak{m}_1^{n+1}$. Hence ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is not suitable when $R$ is not a field. Polynomial Functions and Permutation Polynomials on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ {#sc3} =================================================================================================================== From now on, let $R$ be a finite commutative ring with unity. A polynomial $f\in R[x]$ is called a null polynomial on $R$ if $f$ induces the zero function; in this case we write $f \operatorname{\xspace{ }\triangleq\xspace{ }}0$ on $R$. In this section we determine when a given polynomial is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, and whether two polynomials induce the same function on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Then we apply these results to obtain a counting formula for the number of polynomial functions on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, depending on the indices of the ideals ${N_{R}},{N'_{R}}$ in $R[x]$ (defined below). Finally, we dedicate the last part of this section to the group of polynomial permutations on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, characterize permutation polynomials and provide supplementary results about this group. \[nulldef\] We define ${N_{R}}, {N_{R}}'$ as: 1. ${N_{R}}=\{ f\in R[x]: f \operatorname{\xspace{ }\triangleq\xspace{ }}0 \text{ on } R\}$; 2. ${N_{R}}'=\{ f\in R[x]: f \operatorname{\xspace{ }\triangleq\xspace{ }}0\text{ and } f' \operatorname{\xspace{ }\triangleq\xspace{ }}0 \text{ on } R\}$. It is evident that ${N_{R}}$ and ${N'_{R}}$ are ideals of $R[x]$ with ${N'_{R}}\subseteq {N_{R}}$. \[31\] Let $f\in R[x]$. Then: 1. $f$ is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $f\in{N'_{R}}$; 2. $f\operatorname{\alpha}_i$ is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ for every $1\le i\le k$ if and only if $f\in{N_{R}}$. \(1) By Lemma \[02\], for every $a_0,\ldots,a_k \in R$, $f(a_0+\sum\limits_{i=1}^{k}a_i \operatorname{\alpha}_i)=f(a_0)+ \sum\limits_{i=1}^{k}a_if'(a_0)\operatorname{\alpha}_i$. Thus, the fact that $f$ is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is equivalent to $f(a_0+\sum\limits_{i=1}^{k}a_i \operatorname{\alpha}_i)=f(a_0)+ \sum\limits_{i=1}^{k}a_if'(a_0)\operatorname{\alpha}_i= 0$ for all $a_0,\ldots,a_k\in R$. This is equivalent to $f(a_0)=0$ and $a_if'(a_0)= 0$ for all $a_0,a_i\in R$ and $i=1,\ldots,k$, which, by taking $a_i=1$, is in turn equivalent to $f(a_0)=0$ and $f'(a_0)= 0$ for all $a_0\in R$. Hence $f$ and $f'$ are null polynomials on $R$, which means that $f\in{N'_{R}}$.\ (2) Follows immediately from Lemma \[02\]. \[4\] Let $f\in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$.
We write $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$, where $f_0,\ldots, f_k \in R[x]$. Then $f$ is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $f_0\in{N'_{R}}$ and $f_i\in{N_{R}}$ for $i=1,\ldots,k$. By Lemma \[3\], $f(a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i)=f_0(a_0)+ \sum\limits_{i=1}^{k}(a_if_0'(a_0)+f_i(a_0))\operatorname{\alpha}_i$ for all $a_0,\ldots,a_k\in R$. This immediately implies the “if” direction. To see the “only if”, suppose that $f$ is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Then $$f_0(a_0)+ \sum\limits_{i=1}^{k}(a_if_0'(a_0)+f_i(a_0))\operatorname{\alpha}_i=0 \text{ for all } a_0,\ldots,a_k\in R.$$ Clearly, $f_0$ is a null polynomial on $R$. Substituting first $0$, then $1$, for $a_i$, $i=1,\ldots,k$, we find that $f_i$ and $f_0'$ are null polynomials on $R$. Therefore $f_0\in {N'_{R}}$ and $f_i\in{N_{R}}$ for $i=1,\ldots,k$. Combining Lemma \[31\] with Theorem \[4\] gives the following criterion. \[Nulleqc\] Let $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$, where $f_0,\dots, f_k \in R[x]$. Then $f$ is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $f_0$ and $f_i\operatorname{\alpha}_i$ are null polynomials on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ for $i=1,\dots,k$. Theorem \[4\] implies the following corollary, which determines whether two polynomials $f,g\in{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$ induce the same function on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. \[6\] \[Gencount\] Let $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$ and $g=g_0+\sum\limits_{i=1}^{k}g_i \operatorname{\alpha}_i $, where $f_0,\ldots, f_k, g_0,\ldots g_k \in R[x]$. Then $f \operatorname{\xspace{ }\triangleq\xspace{ }}g$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if the following conditions hold: 1. $[f_i]_R= [g_i]_R$ for $i=0,\dots,k$; 2. $[f_0']_R = [g_0']_R$. In other words, $f \operatorname{\xspace{ }\triangleq\xspace{ }}g$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if the following congruences hold: 1. $f_i \equiv g_i \mod {N_{R}}$ for $i=1,\ldots,k$; 2. $f_0 \equiv g_0 \mod {N'_{R}}$. It is sufficient to consider the polynomial $h=f-g$ and notice that $f \operatorname{\xspace{ }\triangleq\xspace{ }}g$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $h \operatorname{\xspace{ }\triangleq\xspace{ }}0$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Recall that ${\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ denotes the set of polynomial functions over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. In the following proposition we derive a counting formula for ${\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ depending on the indices of the ideals ${N_{R}}, {N'_{R}}$. 
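Before stating that count, a brute-force sanity check of Lemma \[02\] and Theorem \[4\] on a small example may be helpful. The sketch below is a minimal illustration only: the choice $R=\mathbb{Z}_4$, $k=2$, the degree bounds and all identifier names are assumptions made for the example, not part of the results above.

```python
# Minimal sketch: verify Lemma [02](1) and Theorem [4] by brute force,
# assuming R = Z/4Z and k = 2 (an illustrative choice only).
from itertools import product

M = 4  # R = Z/4Z

def dual_mul(u, v):
    # (u0 + u1*a1 + u2*a2)(v0 + v1*a1 + v2*a2), using a_i * a_j = 0
    return ((u[0] * v[0]) % M,
            (u[0] * v[1] + v[0] * u[1]) % M,
            (u[0] * v[2] + v[0] * u[2]) % M)

def dual_eval(coeffs, x):
    # Horner evaluation of a polynomial with coefficients in R[a1, a2]
    acc = (0, 0, 0)
    for c in reversed(coeffs):
        acc = dual_mul(acc, x)
        acc = tuple((acc[i] + c[i]) % M for i in range(3))
    return acc

def ev(poly, a):           # evaluate a polynomial over R at a in R
    return sum(c * a**j for j, c in enumerate(poly)) % M

def der(poly):             # formal derivative over R
    return [(j * c) % M for j, c in enumerate(poly)][1:] or [0]

def null_on_R(poly):
    return all(ev(poly, a) == 0 for a in range(M))

# Lemma [02](1): f(a0 + a1*alpha1 + a2*alpha2) = f(a0) + a1 f'(a0) alpha1 + a2 f'(a0) alpha2
for f in product(range(M), repeat=4):                       # deg f < 4
    emb = [(c, 0, 0) for c in f]                            # embed R-coefficients
    for a0, a1, a2 in product(range(M), repeat=3):
        d = ev(der(f), a0)
        assert dual_eval(emb, (a0, a1, a2)) == (ev(f, a0), (a1 * d) % M, (a2 * d) % M)

# Theorem [4]: f0 + f1*alpha1 + f2*alpha2 is null on R[a1, a2]
#              iff f0 and f0' are null on R, and f1, f2 are null on R.
for f0 in product(range(M), repeat=4):                      # deg f0 < 4
    f0_ok = null_on_R(f0) and null_on_R(der(f0))
    for f1 in product(range(M), repeat=3):                  # deg f1 < 3
        f1_ok = null_on_R(f1)
        for f2 in product(range(M), repeat=3):              # deg f2 < 3
            coeffs = [(f0[j], f1[j] if j < 3 else 0, f2[j] if j < 3 else 0)
                      for j in range(4)]
            is_null = all(dual_eval(coeffs, x) == (0, 0, 0)
                          for x in product(range(M), repeat=3))
            assert is_null == (f0_ok and f1_ok and null_on_R(f2))
print("Lemma [02](1) and Theorem [4] verified over Z_4 with k = 2")
```

The degree bounds merely keep the search finite; they play no role in the statements being checked.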
\[firstcountfor\] The number of polynomial functions over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is given by $$|{\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|= \big[R[x]:{N'_{R}}\big]\big[R[x]:{N_{R}}\big]^k.$$ Moreover, $\left[R[x]:{N'_{R}}\right]$ is the number of pairs of functions $(F,E)$ with $F\colon R\rightarrow R$, $E\colon R\rightarrow R$, arising as $([f]_R, [f']_R)$ for some $f\in R[x]$, and $\left[R[x]:{N_{R}}\right]$ is the number of polynomial functions on $R$. Let $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$ and $g=g_0+\sum\limits_{i=1}^{k}g_i \operatorname{\alpha}_i $, where $f_0,\ldots ,f_k,g_0,\ldots ,g_k \in R[x]$. Then by Corollary \[6\], $f \operatorname{\xspace{ }\triangleq\xspace{ }}g$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $f_0 \equiv g_0 \mod {N'_{R}}$ and $f_i \equiv g_i \mod {N_{R}}$ for $i=1,\ldots,k$. Define $\varphi:\bigtimes\limits_{i=0}^{k} R[x] \longrightarrow \mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ by $\varphi(f_0,\ldots,f_k)= [f]$, where $[f]$ is the function induced on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ by $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$. Then $\varphi$ is an epimorphism of additive groups with\ $\ker\varphi={N'_{R}}\times\bigtimes\limits_{i=1}^{k} {N_{R}}$ by Theorem \[4\]. Hence $$|\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})|= [\bigtimes\limits_{i=0}^{k} R[x]:{N'_{R}}\times \bigtimes\limits_{i=1}^{k} {N_{R}}]= [R[x]:{N'_{R}}][R[x]:{N_{R}}]^k.$$ Next, we set $$\mathcal{A}=\{(F,E)\in \mathcal{F}(R)\times \mathcal{F}(R): \exists f\in R[x] \text{ such that }f,f'\text{ induce } F,E \text{ respectively}\}.$$ Define $\psi:R[x] \longrightarrow \mathcal{A}$ by $\psi(f)=([f]_R,[f']_R)$. It is a routine verification to show that $\psi$ is an epimorphism of additive groups with $\ker\psi=N'_R$. Hence by the First Isomorphism Theorem of groups we get $[R[x]:N'_R]=|\mathcal{A}|$. A similar argument proves that $|{\mathcal{F}(R)}|=\left[R[x]:{N_{R}}\right]$. The following proposition gives an upper bound for the degree of a representative of a polynomial function on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. \[sur\] Let $h_1\in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$ and $ h_2 \in R[x]$ be monic null polynomials on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ and $R$, respectively, such that $\deg h_1= d_1$ and $\deg h_2=d_2$. Then every polynomial function $F:{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}\longrightarrow {{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is induced by a polynomial $f =f_0 +\sum\limits_{i=1}^{k}f_i\operatorname{\alpha}_i$, where $f_0,\ldots,f_k \in R[x]$ such that $\deg f_0 <d_1$ and $\deg f_i < d_2$ for $i=1,\ldots,k$. Moreover, if $F$ is induced by a polynomial $f\in R[x]$ and $h_1\in R[x]$ (rather than in ${{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$), then there exists a polynomial $g\in R[x]$ with $\deg g<d_1$, such that $[g]_R=[f]_R$ and $[g']_R=[f']_R$. Suppose that $h_1\in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$ is a monic null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ of degree $d_1$. Let $g \in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$ be a polynomial that represents $F$.
By the division algorithm, we have $g(x) =q(x)h_1(x)+r(x)$ for some $r,q \in {{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}[x]$, where $\deg r \le d_1 -1$. Then clearly, $r(x)$ represents $F$. By Lemma \[21\], $r=f_0+\sum\limits_{i=1}^{k}r_i\operatorname{\alpha}_i$ for some $f_0,r_1,\ldots,r_k\in R[x]$, and it is obvious that $\deg f_0,\deg r_i \le d_1-1$ for $i=1,\dots,k$. Now, let $h_2\in R[x]$ be a monic null polynomial on $R$ of degree $d_2$. Again, by the division algorithm, we have for $i=1,\ldots,k$, $r_i(x) =q_i(x)h_2(x)+f_i(x)$ for some $f_i ,q_i \in R[x]$, where $\deg f_i \le d_2 -1$. Then by Corollary \[6\], $r_i\operatorname{\alpha}_i \operatorname{\xspace{ }\triangleq\xspace{ }}f_i\operatorname{\alpha}_i$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Thus $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$ is the desired polynomial.\ For the second part, the existence of $g\in R[x]$ with $\deg g<d_1$ such that $f \operatorname{\xspace{ }\triangleq\xspace{ }}g $ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ follows by the same argument given in the previous part. By Corollary \[Gencount\], $[g]_R=[f]_R$ and $[g']_R=[f']_R$. \[existmonicn\] Let $h(x)=\prod\limits_{r\in R}(x-r)^2$. Then $h$ is a monic polynomial in $R[x]$, and by Lemma \[31\], it is a null polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. This shows that the polynomial mentioned in the last part of Proposition \[sur\] always exists. We devote the rest of this section to the group of polynomial permutations over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. \[Genper\] Let $R$ be a finite ring. Let $f =f_0+ \sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$, where $f_0,\ldots,f_k \in R[x]$. Then $f$ is a permutation polynomial over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if the following conditions hold: 1. $f_0$ is a permutation polynomial on $R$; 2. for all $a\in R$, $f_0'(a)$ is a unit in $R$. $(\Rightarrow)$ Let $c\in R$. Then $c\in {{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Since $f$ is a permutation polynomial over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, there exist $a_0,\ldots,a_k \in R$ such that $f(a_0+ \sum\limits_{i=1}^{k}a_i \operatorname{\alpha}_i)= c$. Thus $f_0(a_0)+ \sum\limits_{i=1}^{k}(a_if_0'(a_0)+f_i(a_0))\operatorname{\alpha}_i = c$ by Lemma \[3\]. So $f_0(a_0)=c$, therefore $f_0$ is onto, and hence a permutation polynomial on $R$. Let $a\in R$ and suppose that $f_0'(a)$ is a non-unit in $R$. Then $f_0'(a)$ is a zerodivisor of $R$. Let $b\in R$, $b\ne 0$, such that $bf_0'(a)=0$. Then\ $f(a+\sum\limits_{i=1}^{k}b\operatorname{\alpha}_i)=f_0(a)+\sum\limits_{i=1}^{k}(bf_0'(a)+f_i(a))\operatorname{\alpha}_i =f_0(a)+\sum\limits_{i=1}^{k}f_i(a)\operatorname{\alpha}_i=f(a)$. So $f$ is not one-to-one, which is a contradiction. This proves (2).\ ($\Leftarrow$) It is enough to show that $f$ is one-to-one. Let $a_0,\ldots,a_k,b_0,\ldots,b_k \in R$ such that $f(a_0+\sum\limits_{i=1}^{k}a_i\operatorname{\alpha}_i)=f(b_0+\sum\limits_{i=1}^{k}b_i\operatorname{\alpha}_i)$, that is, $f_0(a_0)+\sum\limits_{i=1}^{k}(a_if_0'(a_0)+f_i(a_0))\operatorname{\alpha}_i = f_0(b_0)+\sum\limits_{i=1}^{k}(b_if_0'(b_0)+f_i(b_0))\operatorname{\alpha}_i$ by Lemma \[02\]. Then we have $f_0(a_0)= f_0(b_0)$ and $a_if_0'(a_0)+f_i(a_0)= b_if_0'(b_0)+f_i(b_0)$ for $i=1,\ldots,k$. 
Hence $a_0= b_0$ since $f_0$ is a permutation polynomial on $R$. Then, since $f_0'(a_0)$ is a unit in $R$, $a_i=b_i$ follows for $i=1,\ldots,k$. Theorem \[Genper\] shows that the criterion for being a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ depends only on $f_0$, and it implies the following corollary. \[PPfirstcoordinate\] Let $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$, where $f_0,\dots,f_k \in R[x]$. Then the following statements are equivalent: 1. $f$ is a permutation polynomial over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$; 2. $f_0+f_i\operatorname{\alpha}_i$ is a permutation polynomial over $R[\operatorname{\alpha}_i]$ for every $i\in \{1,\ldots,k\}$; 3. $f_0$ is a permutation polynomial over ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$; 4. $f_0$ is a permutation polynomial over $R[\operatorname{\alpha}_i]$ for every $i\in \{1,\ldots,k\}$. Recall that, for any finite commutative ring $A$, ${\mathcal{P}(A)}$ denotes the group of polynomial permutations on $A$. The group ${\mathcal{P}(R[\operatorname{\alpha}_i])}$ is embedded in ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ for every $i=1,\ldots,k$. Fix $i\in \{1,\ldots,k\}$ and let $F\in {\mathcal{P}(R[\operatorname{\alpha}_i])}$. Then $F$ is induced by $f=f_0+f_i\operatorname{\alpha}_i$ for some $f_0,f_i \in R[x]$. Furthermore, $f_0+f_i\operatorname{\alpha}_i$ is a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ by Corollary \[PPfirstcoordinate\]. Define a function $\psi:{\mathcal{P}(R[\operatorname{\alpha}_i])} \longrightarrow {\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ by $\psi(F)=[f]_{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}} $, where $[f]_{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}$ denotes the function induced by $f$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. By Corollary \[Gencount\], $\psi$ is well defined and one-to-one. Now, if $F_1\in {\mathcal{P}(R[\operatorname{\alpha}_i])}$ is induced by $g\in R[\operatorname{\alpha}_i][x]$, then $f\circ g$ induces $F\circ F_1$ on $R[\operatorname{\alpha}_i]$. Hence, $$\begin{aligned} \psi(F\circ F_1) & =[f\circ g]_{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}\\ & =[f]_{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}} \circ[g]_{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}} \text{ since }f,g\in {{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}[x]\\ & =\psi(F)\circ\psi(F_1). \end{aligned}$$ This completes the proof. We will show in Proposition \[movdreivat\] that the condition on the derivative in Theorem \[Genper\] is redundant when $R$ is a direct sum of local rings none of which is a field. [@finiterings Thm. XIII.17]\[Mac\] Let $R$ be a finite local ring with a maximal ideal $M\ne \{0\}$ and suppose that $f\in R[x]$. Then $f$ is a permutation polynomial on $ R$ if and only if the following conditions hold: 1. $f$ is a permutation polynomial on $R/M$; 2. for all $a\in R$, $f'(a)\ne 0\mod{M}$. \[directper\] Let $R$ be a finite ring and suppose that $R=\oplus_{i=1}^{n}R_i$, where $R_i$ is local for $i=1,\dots,n$. Let $f=(f_1,\ldots,f_n)\in R[x]$, where $f_i\in R_i[x]$. Then $f$ is a permutation polynomial on $R$ if and only if $f_i$ is a permutation polynomial on $R_i$ for $i=1,\dots,n$.
$(\Rightarrow)$ Suppose that $f$ is a permutation polynomial on $R$ and fix an $i$. Let $b_i \in R_i$. Then $(0,\dots,b_i,\ldots,0)\in R$. Thus, there exists $a=(a_1,\dots,a_i,\ldots,a_n)\in R$, where $a_j \in R_j$, $j=1,\dots,n$ such that $f(a)=(f_1(a_1),\ldots,f_i(a_i),\ldots,f_n(a_n))=(0,\dots,b_i,\dots,0)$. Hence $f_i(a_i)=b_i$, and therefore $f_i$ is surjective, whence $f_i$ is a permutation polynomial on $R_i$.\ $(\Leftarrow)$ Easy and left to the reader. From now on, let $R^\times$ denote the group of units of $R$. \[movdreivat\] Let $R$ be a finite ring which is a direct sum of local rings which are not fields, and let $f=f_0+\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$, where $f_0,\ldots,f_k\in R[x]$. Then $f$ is a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ if and only if $f_0$ is a polynomial permutation on $R$. ($\Rightarrow$) Follows by Theorem \[Genper\].\ ($\Leftarrow$) Assume that $f_0$ is a permutation polynomial on $R$. By Theorem \[Genper\], we only need to show that $f_0'(r)\in R^\times$ for every $r\in R$. Write $f_0=(g_1,\ldots,g_n)$, where $g_i\in R_i[x]$ for $i=1,\ldots,n$. Then $g_i$ is a permutation polynomial on $R_i$ for $i=1,\ldots,n$ by Lemma \[directper\]. Now, let $r\in R$, so $r=(r_1,\ldots,r_n)$, where $r_i\in R_i$. Hence $f'_0(r)=(g'_1(r_1),\ldots,g'_n(r_n))$ but $g'_i(r_i)\in R_i^{\times}$ by Lemma \[Mac\] for $i=1,\dots,n$. Therefore $f'_0(r)=(g'_1(r_1),\ldots,g'_n(r_n))\in R^{\times}$, i.e, $f'_0(r)$ is a unit in $R$ for every $r\in R$. Thus $f_0$ satisfies the conditions of Theorem \[Genper\]. Therefore $f$ is a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. \[persumnug\] Let $R$ be a finite ring which is a direct sum of local rings which are not fields. Let $f\in R[x]$ be a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. Then $f+h$ is a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ for every $h\in {N_{R}}$. In particular, $x+h$ is a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ for every $h\in {N_{R}}$. Recall that ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ denotes the group of permutation polynomials on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. \[Geperncount\] Let $R$ be a finite ring. Let $B$ denote the number of pairs of functions $(H,G)$ with $$H:R\longrightarrow R \text{ bijective and } G:R\longrightarrow R^\times$$ that occur as $([g],[g'])$ for some $g\in R[x]$. Then the number of polynomial permutations on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ is given by $$|{\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=B\cdot |{\mathcal{F}(R)}|^k.$$ Let $F\in {\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$. Then by definition $F$ is induced by a polynomial $f$, where by Lemma \[02\] $f =f_0 +\sum\limits_{i=1}^{k}f_i \operatorname{\alpha}_i$ for $f_0,\ldots,f_k\in R[x]$. By Theorem \[Genper\], $$[f_0]:R\longrightarrow R \text{ bijective, } [f'_0]:R\longrightarrow R^\times \text{ and } [f_i]\text{ is arbitrary in }{\mathcal{F}(R)}\text{ for }i=1,\ldots,k.$$ The rest follows by Corollary \[Gencount\]. 
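The criterion of Theorem \[Genper\] and the count $B\cdot|{\mathcal{F}(R)}|^k$ of Proposition \[Geperncount\] can be checked numerically on a small example. The sketch below assumes $R=\mathbb{Z}_4$ and $k=1$; the ring, the degree bounds and all names are choices made only for this illustration.

```python
# Minimal sketch: brute-force check of Theorem [Genper] and of the count
# B*|F(R)|^k from Proposition [Geperncount], assuming R = Z/4Z and k = 1.
from itertools import product

M = 4
UNITS = {1, 3}                                    # units of Z/4Z

def ev(poly, a):                                  # evaluate over Z/4Z
    return sum(c * a**j for j, c in enumerate(poly)) % M

def der(poly):                                    # formal derivative
    return [(j * c) % M for j, c in enumerate(poly)][1:] or [0]

def dual_eval(f0, f1, x):                         # f0 + f1*alpha at a + b*alpha
    acc = (0, 0)
    for c0, c1 in zip(reversed(f0), reversed(f1)):
        acc = ((acc[0] * x[0]) % M, (acc[0] * x[1] + acc[1] * x[0]) % M)
        acc = ((acc[0] + c0) % M, (acc[1] + c1) % M)
    return acc

def permutes_dual(f0, f1):                        # does f0 + f1*alpha permute Z_4[alpha]?
    return len({dual_eval(f0, f1, (a, b))
                for a in range(M) for b in range(M)}) == M * M

# Theorem [Genper], checked for all f0, f1 of degree < 4:
for f0 in product(range(M), repeat=4):
    crit = (len({ev(f0, a) for a in range(M)}) == M and
            all(ev(der(f0), a) in UNITS for a in range(M)))
    for f1 in product(range(M), repeat=4):
        assert permutes_dual(f0, f1) == crit

# Proposition [Geperncount]: B = #pairs ([g],[g']) with [g] bijective and [g']
# unit-valued.  Representatives of degree < 8 suffice, because the monic
# polynomial prod_{r in Z_4}(x - r)^2 of degree 8 is null on Z_4[alpha]
# (Remark [existmonicn]), so Proposition [sur] applies.
pairs = set()
for g in product(range(M), repeat=8):
    vals = tuple(ev(g, a) for a in range(M))
    if len(set(vals)) != M:
        continue
    dvals = tuple(ev(der(g), a) for a in range(M))
    if all(v in UNITS for v in dvals):
        pairs.add((vals, dvals))
B = len(pairs)
F_R = len({tuple(ev(g, a) for a in range(M)) for g in product(range(M), repeat=4)})
print("B =", B, " |F(Z_4)| =", F_R, " so |P(Z_4[alpha])| = B*|F(Z_4)| =", B * F_R)
```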
In the next section we show that the number $B$ of Proposition \[Geperncount\] depends on the order of a subgroup of ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ that fixes every element of $R$. However, when $R$ is a finite field, we can find this number explicitly. For this we need the following lemma from [@Haki]. [@Haki Lemma 2.11]\[Perf\] Let $\mathbb{F}_q$ be a finite field with $q$ elements. Then for all functions $$F,G:\mathbb{F}_q\longrightarrow \mathbb{F}_q,$$ there exists $f\in\mathbb{F}_q[x]$ such that $$(F, G)=([f] ,[f']) \text{ and } \deg f<2q.$$ Let $f_0,f_1\in\mathbb{F}_q[x]$ be such that $[f_0] =F$ and $[f_1] =G$ and set $$f(x) = f_0(x) + (f'_0(x) - f_1(x))(x^q-x).$$ Then $$f'(x) =(f''_0(x) - f'_1(x))(x^q-x)+f_1(x).$$ Thus $[f]=[f_0]=F$ and $[f']=[f_1]=G$ since $(x^q-x)$ is a null polynomial on $\mathbb{F}_q$. Moreover, since $(x^q-x)$ is a null polynomial on $\mathbb{F}_q$, we can choose $f_0,f_1$ such that $\deg f_0,\deg f_1<q$. Hence $\deg f<2q$. \[13.111\] Let $\mathbb{F}_q$ be a finite field with $q$ elements. The number of polynomial permutations on $\mathbb{F}_q[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]$ is given by $$|\mathcal{P}(\mathbb{F}_q[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k])|=q!(q-1)^qq^{kq}.$$ Let $\mathcal{B}$ be the set of pairs of functions $(F,G)$ such that $$F:\mathbb{F}_q\longrightarrow \mathbb{F}_q \text{ bijective and } G:\mathbb{F}_q\longrightarrow \mathbb{F}_q\setminus\{0\}.$$ By Lemma \[Perf\], each $(F,G)\in \mathcal{B}$ arises as $([f],[f'])$ for some $f\in \mathbb{F}_q[x]$. Thus by Proposition \[Geperncount\], $|\mathcal{P}(\mathbb{F}_q[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k])|=|\mathcal{B}|\cdot|{\mathcal{F}(\mathbb{F}_q)}|^k$. Clearly $|\mathcal{B}|=q!(q-1)^q$ and $|{\mathcal{F}(\mathbb{F}_q)}|^k=q^{kq}$. The stabilizer of $R$ in the group of polynomial permutations of ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ {#sc4} ================================================================================================================================ The main object of this section is to describe the order of the subgroup of polynomial permutations on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ that fixes pointwise each element of $R$, and then to use this order to find a counting formula for the number of polynomial permutations on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. \[std\] Let ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}=\{F\in {\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}: F(a)=a \text{ for every } a\in R\}$. Evidently, ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ is a subgroup of ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$. \[exnul\] Let $f,g\in R[x]$ with $f\operatorname{\xspace{ }\triangleq\xspace{ }}g$ on $R$. There exists $h \in {N_{R}}$ such that $f=g+h$. Let $h=f-g$. Then $h$ has the desired property.
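Before describing the stabilizer itself, a quick numerical cross-check of Lemma \[Perf\] and of the count in \[13.111\] from the previous section may be useful. The sketch below assumes $q=3$ and $k=1$; these values and every identifier are chosen purely for illustration.

```python
# Minimal sketch: check Lemma [Perf] and the count of [13.111] for q = 3, k = 1.
from itertools import product
import math

Q = 3                                              # F_q with q = 3

def ev(poly, a):
    return sum(c * a**j for j, c in enumerate(poly)) % Q

def der(poly):
    return [(j * c) % Q for j, c in enumerate(poly)][1:] or [0]

def padd(p, r):
    n = max(len(p), len(r))
    return [((p[j] if j < len(p) else 0) + (r[j] if j < len(r) else 0)) % Q
            for j in range(n)]

def pmul(p, r):
    out = [0] * (len(p) + len(r) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(r):
            out[i + j] = (out[i + j] + a * b) % Q
    return out

# Lemma [Perf]: f = f0 + (f0' - f1)(x^q - x) satisfies ([f],[f']) = (F, G).
interp = {tuple(ev(p, a) for a in range(Q)): list(p)   # one poly of deg < q per function
          for p in product(range(Q), repeat=Q)}
xq_minus_x = [0, Q - 1] + [0] * (Q - 2) + [1]          # x^q - x over F_q
for F in product(range(Q), repeat=Q):
    for G in product(range(Q), repeat=Q):
        f0, f1 = interp[F], interp[G]
        f = padd(f0, pmul(padd(der(f0), [(-c) % Q for c in f1]), xq_minus_x))
        assert tuple(ev(f, a) for a in range(Q)) == F
        assert tuple(ev(der(f), a) for a in range(Q)) == G

# Count of [13.111] for k = 1: every polynomial function on F_q[alpha] is induced
# by some f0 + f1*alpha with deg f0 < 2q and deg f1 < q (Proposition [sur], using
# the monic null polynomial prod_{r}(x - r)^2 = (x^q - x)^2 of Remark [existmonicn]).
def dual_eval(f0, f1, x):                              # f0 + f1*alpha at a + b*alpha
    n = max(len(f0), len(f1))
    f0 = list(f0) + [0] * (n - len(f0))
    f1 = list(f1) + [0] * (n - len(f1))
    acc = (0, 0)
    for c0, c1 in zip(reversed(f0), reversed(f1)):
        acc = ((acc[0] * x[0]) % Q, (acc[0] * x[1] + acc[1] * x[0]) % Q)
        acc = ((acc[0] + c0) % Q, (acc[1] + c1) % Q)
    return acc

perms = set()
for f0 in product(range(Q), repeat=2 * Q):
    for f1 in product(range(Q), repeat=Q):
        img = tuple(dual_eval(f0, f1, (a, b)) for a in range(Q) for b in range(Q))
        if len(set(img)) == Q * Q:
            perms.add(img)
expected = math.factorial(Q) * (Q - 1) ** Q * Q ** Q   # q!(q-1)^q q^{kq} with k = 1
print("|P(F_3[alpha])| by brute force:", len(perms), " formula:", expected)
assert len(perms) == expected
```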
\[firststab\] Let $R$ be a finite commutative ring. Then $${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}=\{F\in {\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}:F \textnormal{ is induced by } x+h(x), h \in {N_{R}} \}.$$ It is obvious that $${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}\supseteq\{F\in {\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}:F \textnormal{ is induced by } x+h(x), h \in {N_{R}} \}.$$ For the other inclusion, let $F\in \mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ be such that $F(a)=a$ for every $a\in R$. Then $F$ is represented by $f_0+\sum\limits_{i=1}^{k}f_i\operatorname{\alpha}_i$, where $f_0,\ldots, f_k\in R[x]$, and $a=F(a)=f_0(a)+\sum\limits_{i=1}^{k}f_i(a)\operatorname{\alpha}_i$ for every $a\in R$. It follows that $f_i(a)=0$ for every $a\in R$, i.e., $f_i$ is a null polynomial on $R$ for $i=1,\dots,k$. Thus, $f_0+\sum\limits_{i=1}^{k}f_i\operatorname{\alpha}_i\operatorname{\xspace{ }\triangleq\xspace{ }}f_0$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ by Corollary \[Gencount\], that is, $F$ is represented by $f_0$. Also, $f_0\operatorname{\xspace{ }\triangleq\xspace{ }}id_{R}$ on $R$, where $id_{R}$ is the identity function on $R$, and therefore $f_0(x)=x+h(x)$ for some $h\in {N_{R}}$ by Lemma \[exnul\]. When $R=\mathbb{F}_q$ is a finite field, the following theorem describes the order of ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}$. \[1301\] Let $\mathbb{F}_q$ be a finite field with $q$ elements. Then: 1. $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}| =|\{[f']_{\mathbb{F}_q}: f\in {N_{\mathbb{F}_q}} \text{ and for every } a\in\mathbb{F}_q, f'(a)\ne -1 \}|$; 2. $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}| =|\{[f']_{\mathbb{F}_q}: f\in {N_{\mathbb{F}_q}}, \deg f<2q \text{ and for every } a\in\mathbb{F}_q, f'(a)\ne -1 \}|$; 3. $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}| =(q-1)^q$. \[st1\] We begin with the proof of (1) and (2). Set\ $A=\{[f']_{\mathbb{F}_q}: f\in {N_{\mathbb{F}_q}} \text{ such that for every } a\in\mathbb{F}_q, f'(a)\ne -1 \}$. We define a bijection $\varphi$ from ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}$ to the set $A$. If $F\in {Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}$, then it is represented by $x+h(x)$, where $h\in \mathbb{F}_q[x]$ is a null polynomial on $\mathbb{F}_q$, by Proposition \[firststab\]. Now $h'(a)\ne -1 $ for every $a\in \mathbb{F}_q$, by Theorem \[Genper\], whence $[h']_{\mathbb{F}_q}\in A$. Then we set $\varphi(F)=[h']_{\mathbb{F}_q}$. Corollary \[6\] shows that $\varphi$ is well-defined and injective, and Theorem \[Genper\] shows that it is surjective. Moreover, by Lemma \[Perf\], $h$ can be chosen such that $\deg h<2q$.\ Next, we prove (3). By (1), $$|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}| =|\{[f']_{\mathbb{F}_q}: f\in {N_{\mathbb{F}_q}} \text{ and for every } a\in\mathbb{F}_q, f'(a)\ne -1 \}|.$$ It is clear that $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}|\le|\{G :\mathbb{F}_q\longrightarrow \mathbb{F}_q\setminus\{-1\} \}|= (q-1)^q$.\ Now, for every function $G :\mathbb{F}_q\longrightarrow \mathbb{F}_q\setminus\{-1\}$ there exists a polynomial $f\in {N_{\mathbb{F}_q}}$ such that $[f']_{\mathbb{F}_q}=G$ by Lemma \[Perf\].
Thus $f(x)+x$ is a permutation polynomial on ${{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ by Theorem \[Genper\]. Obviously, $x+f(x)$ induces the identity on $\mathbb{F}_q$, and hence $[x+f(x)]_{{{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}}\in {Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}$. Therefore every element of the set $\{G :\mathbb{F}_q\longrightarrow \mathbb{F}_q\setminus\{-1\} \}$ corresponds to an element of ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}$, from which we conclude that $| {Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}|\ge (q-1)^q$. This completes the proof. Let\ $ \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})=\{F\in {\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}: F=[f]_{{{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}} \textnormal{ for some } f\in R[x]\} $.\ In a similar manner, let $\mathcal{P}_R(R[\operatorname{\alpha}_i])=\{F\in {\mathcal{P}(R[\operatorname{\alpha}_i])}: F=[f]_{R[\operatorname{\alpha}_i]} \textnormal{ for some } f\in R[x]\}.$ Now, we show that $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ is a subgroup of ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$, and later we prove that ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ is a normal subgroup of $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$. \[kthrelation\] The set $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ is a subgroup of ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ and\ $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})\cong \mathcal{P}_R(R[\operatorname{\alpha}_i])$ for $i=1,\ldots,k$. It is clear that $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ is closed under composition. Since it is finite, it is a subgroup of ${\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$. Let $F\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ and suppose that $F$ is induced by $f\in R[x]$. Define $$\psi:\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}) \longrightarrow \mathcal{P}_R(R[\operatorname{\alpha}_i]),\quad F\mapsto [f]_{R[\operatorname{\alpha}_i]}.$$ Then $\psi$ is well defined by Corollary \[Gencount\], and evidently it is a homomorphism. By Corollary \[PPfirstcoordinate\], $\psi$ is surjective. To show that $\psi$ is one-to-one, let $F_1\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ be induced by $g\in R[x]$ with $F\ne F_1$. Then either $f\operatorname{\xspace{}\not\triangleq\xspace{}}g$ on $R$ or $f'\operatorname{\xspace{}\not\triangleq\xspace{}}g'$ on $R$ by Corollary \[Gencount\]. Thus $\psi(F)=[f]_{R[\operatorname{\alpha}_i]}\ne \psi(F_1)=[g]_{R[\operatorname{\alpha}_i]}$. \[perun\] Let $R$ be a finite ring. Then for every $F\in \mathcal{P}(R)$ there exists a polynomial $f\in R[x]$ such that $F$ is induced by $f$ and $f'(r)\in R^{\times}$ for every $r\in R$. Set $\mathcal{P}_u(R)=\{F \in \mathcal{P}(R): F\text{ is induced by }f\in R[x], f':R\longrightarrow R^{\times} \}$. By definition $\mathcal{P}_u(R)\subseteq\mathcal{P}(R)$. Let $F \in \mathcal{P}(R)$. Then $F$ is induced by $f\in R[x]$.
Since $R$ is finite, $R=\oplus_{i=1}^{n}R_i$, where $R_i$ are local rings. We distinguish two cases. For the first case, we suppose that every $R_i$ is not a field. Then $f$ is a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ by Proposition \[movdreivat\]. Hence $f'(a)\in R^{\times}$ for every $a\in R$ by Theorem \[Genper\]. So $F\in \mathcal{P}_u(R)$. For the second case, we assume without loss of generality that $R_1,\ldots,R_r$ are fields and none of $R_{r+1}, \ldots,R_n$ is a field for some $r\ge1$. Then write $f=(f_1,\ldots,f_n)$ where $f_i\in R_i$ for $i=1,\ldots,n$. By Lemma \[directper\], $f _i $ is a permutation polynomial on $ R_i$, for $i=1,\dots,n$. Now, a similar argument like the one given in the first case shows that $f'_i(a_i)\in R_i^{\times}$ for every $a_i\in R_i$ for $i=r+1,\dots,n$. On the other hand, there exists $g_j\in R_j[x]$ such that $g_j \operatorname{\xspace{ }\triangleq\xspace{ }}f_j$ on $R_j$ and $g'_j(a_j)\in R_j^{\times}$ for every $a_j\in R_j$, $ j=1,\ldots,r$ by Lemma \[Perf\]. Then take $g=(g_1,\dots,g_r,f_{r+1},\ldots,f_n)$. Thus $g\operatorname{\xspace{ }\triangleq\xspace{ }}f$ on $R$ and $g'(r)\in R^{\times}$ for every $r\in R$. Therefore $g$ induces $F$ and $F\in \mathcal{P}_u(R)$. \[PZlemma\] Let $R$ be a finite ring. Then: 1. \[firstPZ\] every element of $\mathcal{P}(R)$ occurs as the restriction to $R$ of some $F\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$; 2. \[scndPZ\] $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ contains ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)} $ as a normal subgroup and $$\raise2pt\hbox{$\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$} \big/ \lower2pt\hbox{${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$} \cong \mathcal{P}(R).$$ (\[firstPZ\]) This is obvious from Proposition \[perun\]. (\[scndPZ\]) ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ is contained in $\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$, because every element of ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ can be represented by a polynomial with coefficients in $R$ by Proposition \[firststab\].\ Let $F\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ be represented by $f\in R[x]$. Then define\ $\varphi:\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}) \longrightarrow \mathcal{P}(R)$ by $\varphi(F)= [f]_R$. Now, $\varphi$ is well defined by Corollary \[Gencount\], and it is a group homomorphism with $\ker\varphi = {Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$. By Proposition \[perun\], $\varphi$ is surjective. \[mshi\] For any fixed $F\in {\mathcal{P}(R)}$, $$\left|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}\right|=\left|\{([f]_R,[f']_R): f\in R[x], [f]\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}) \text{ and } [f]_ R= F \}\right|.$$ Let $f\in R[x]$ be a permutation polynomial on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ with $[f]_R=F$. Such an $f$ exists by Lemma \[PZlemma\] (\[firstPZ\]). We denote by $[f]$ the permutation induced by $f$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. 
Then the coset of $[f]$ with respect to ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ has $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|$ elements. By Lemma \[PZlemma\] (\[scndPZ\]), this coset consists of all polynomial permutations $G\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ with $[f]_R=G_{\big|R}$, where $G_{\big|R}$ is the restriction of the function $G$ to $R$. Let $g\in R[x]$ with $[g]=G$. By Corollary \[Gencount\], $G\ne [f]$ if and only if the pair $([f]_R,[f']_R)$ differs from $([g]_R,[g']_R)$. Thus we have a bijection between the coset of $[f]$ with respect to ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ and the set of pairs $([g]_R,[g']_R)$ occurring for $g\in R[x]$ such that $[g]=G$ permutes ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ and $[f]_R=[g]_R$. We employ Corollary \[mshi\] to find the number of permutation polynomials on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$ in terms of $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|$ in the following theorem. \[14\] For any integer $k\ge 1$, $$|{\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=|{\mathcal{F}({R})}|^k\cdot |{\mathcal{P}(R)}|\cdot |{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|.$$ For $f\in R[x]$, let $[f]$ be the function induced by $f$ on ${{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$.\ Set $ B = \bigcup\limits_{\rlap{$\scriptstyle{F\in {\mathcal{P}(R)}}$}} \{([f]_R,[f']_R): f\in R[x], [f]\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}) \text{ and } [f]_R= F \}$. Then $|B|=|{\mathcal{P}(R)}|\cdot |{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|$ by Corollary \[mshi\].\ Now we define a function $ \Psi:{\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})} \longrightarrow B\times \bigtimes\limits_{i=1}^{k} {\mathcal{F}({R})}$ as follows: if $G\in{\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}$ is induced by $g=g_0+\sum\limits_{i=1}^{k}g_i\operatorname{\alpha}_i$, where $g_0,\ldots,g_k \in R[x]$, we let $\Psi(G)=(([g_0]_{R},[g'_0]_R),[g_1]_R,\ldots,[g_k]_R)$. By Theorem \[Genper\] and Corollary \[6\], $\Psi$ is well-defined and one-to-one. The surjectivity of $\Psi$ follows by Proposition \[PZlemma\] and Theorem \[Genper\]. Therefore $$|{\mathcal{P}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=|B\times\bigtimes\limits_{i=1}^{k} {\mathcal{F}(R)}|= |{\mathcal{P}(R)}|\cdot |{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|\cdot| {\mathcal{F}(R)}|^k.$$
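As a numerical illustration of Theorem \[14\], the following sketch assumes $R=\mathbb{Z}_4$ and $k=1$ (an illustrative choice only). Since $|\mathcal{P}(R[\operatorname{\alpha}_1])|=B\cdot|\mathcal{F}(R)|$ by Proposition \[Geperncount\], the identity to test is $B=|\mathcal{P}(R)|\cdot|Stab_{\operatorname{\alpha}_1}(R)|$, with the stabilizer computed from Proposition \[firststab\]; all names below are assumptions made for the example.

```python
# Minimal numerical check of Theorem [14], assuming R = Z/4Z and k = 1.
from itertools import product

M = 4
UNITS = {1, 3}                                     # units of Z/4Z

def ev(poly, a):
    return sum(c * a**j for j, c in enumerate(poly)) % M

def der(poly):
    return [(j * c) % M for j, c in enumerate(poly)][1:] or [0]

def dual_eval(f0, f1, x):                          # f0 + f1*alpha at a + b*alpha
    n = max(len(f0), len(f1))
    f0 = list(f0) + [0] * (n - len(f0))
    f1 = list(f1) + [0] * (n - len(f1))
    acc = (0, 0)
    for c0, c1 in zip(reversed(f0), reversed(f1)):
        acc = ((acc[0] * x[0]) % M, (acc[0] * x[1] + acc[1] * x[0]) % M)
        acc = ((acc[0] + c0) % M, (acc[1] + c1) % M)
    return acc

# |F(Z_4)| and |P(Z_4)| from representatives of degree < 4
# (x(x-1)(x-2)(x-3) is a monic null polynomial of degree 4 on Z_4).
funcs = {tuple(ev(p, a) for a in range(M)) for p in product(range(M), repeat=4)}
perms_R = {t for t in funcs if len(set(t)) == M}

# Stabilizer of Z_4 (Proposition [firststab]): functions induced on Z_4[alpha]
# by x + g with g null on Z_4, together with the number B of Proposition
# [Geperncount]; degree < 8 suffices for both, since prod_{r}(x - r)^2 is a
# monic null polynomial of degree 8 on Z_4[alpha] (Remark [existmonicn], Prop. [sur]).
stab, B_pairs = set(), set()
for g in product(range(M), repeat=8):
    vals = tuple(ev(g, a) for a in range(M))
    if vals == (0, 0, 0, 0):                       # g is null on Z_4
        x_plus_g = [(g[j] + (1 if j == 1 else 0)) % M for j in range(8)]
        stab.add(tuple(dual_eval(x_plus_g, [0], (a, b))
                       for a in range(M) for b in range(M)))
    if len(set(vals)) == M:
        dvals = tuple(ev(der(g), a) for a in range(M))
        if all(v in UNITS for v in dvals):
            B_pairs.add((vals, dvals))

B = len(B_pairs)
print("|F(Z_4)| =", len(funcs), " |P(Z_4)| =", len(perms_R),
      " |Stab| =", len(stab), " B =", B)
assert B == len(perms_R) * len(stab)               # hence |P(Z_4[alpha])| = |F|*|P|*|Stab|
print("|P(Z_4[alpha])| =", B * len(funcs))
```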
\[thirdstaba\] $ |{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}| = |\{[f']_R: f\in {N_{R}} \textnormal{ with }\deg f<n\}|; $ 2. \[fourthstab\] $ |{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}| = [{N_{R}}:{N'_{R}}]= \frac{|{N_{R}(<n)}|}{|{N'_{R}(<n)}|}. $ (\[scndstab\]) We define a bijection $\varphi$ from ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ to the set of different functions induced on $R$ by the first derivative of the null polynomials on $R$. By Proposition \[firststab\], every $F\in {Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ is represented by $x+f(x)$, where $f\in R[x]$ is a null polynomial on $R$. We set $\varphi(F)=[f']_R$. Then Corollary \[6\] shows that $\varphi$ is well-defined and injective, and Corollary \[persumnug\] shows that it is surjective.\ (2) Such a null polynomial $h\in R[x]$ exists by Remark \[existmonicn\].\ (\[thirdstaba\]) If $g\in {N_{R}}$, then by Proposition \[sur\], there exists $f\in R[x]$ with $\deg f<n$ such that $[f]_R=[g]_R$ and $[f']_R=[g']_R$. Evidently, $f\in {N_{R}}$. (\[fourthstab\]) For the index, define $\varphi:{N_{R}} \longrightarrow {\mathcal{F}(R)}$ by $\varphi(f)=[f']_R$. Clearly, $\varphi$ is a homomorphism of additive groups. Furthermore, $$\ker\varphi={N'_{R}} \textnormal{ and } \operatorname{Im}\varphi=\{[f']_R: f\in {N_{R}}\},$$ and hence $\raise1.5pt\hbox{${N_{R}}$} \big/ \lower1.5pt\hbox{${N'_{R}}$} \cong \{[f']_R: f\in {N_{R}}\}$. Therefore $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|=[{N_{R}}:{N'_{R}}]$ by (\[scndstab\]).\ For the ratio, consider the sets ${N_{R}(<n)}$ and ${N'_{R}(<n)}$ as defined in Definition \[11.12\]. The equivalence relation in Definition \[equvfun\] restricted to these two additive subgroups and the analogous proof to the previous part show that $$|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|=[{N_{R}(<n)}:{N'_{R}(<n)}].$$ When $R=\mathbb{F}_q$ is a finite field, we have shown in Theorem \[1301\] (\[st1\]) that $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}|=(q-1)!$. But we will see later that $$[{N_{\mathbb{F}_q}}:{N'_{\mathbb{F}_q}}]=[{N_{\mathbb{F}_q}(<2q)}:{N'_{\mathbb{F}_q}(<2q)}]=q^q.$$ The following theorem shows that the stabilizer group ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$ does not depend on the number of variables $k$. \[stabiso\] Let $k$ be a positive integer. Then ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)} \cong Stab_{\operatorname{\alpha}_i}(R)$ for $i=1,\ldots,k$. Fix $i\in \{1,\ldots,k\}$. Then by the definition of dual numbers (for the case $k=1$), $R[\operatorname{\alpha}_1]\cong R[\operatorname{\alpha}_i]$. Thus, by Theorem \[12\] (\[fourthstab\]), $|Stab_{\operatorname{\alpha}_1}(R)|=|Stab_{\operatorname{\alpha}_i}(R)|=[{N_{R}}:{N'_{R}}]=|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|$.\ Let $F\in \mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})$ and suppose that $F$ is induced by $f\in R[x]$. Define $$\psi:\mathcal{P}_R({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}) \longrightarrow \mathcal{P}_R(R[\operatorname{\alpha}_i]),\quad F\mapsto [f]_{R[\operatorname{\alpha}_i]}.$$ The proof of Proposition \[kthrelation\] shows that $\psi$ is an isomorphism. 
If $\phi$ denotes the restriction of $\psi$ to ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$, then ${Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}\cong \phi({Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)})$. Since $|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|=|Stab_{\operatorname{\alpha}_i}(R)|$, we need only show that $\phi({Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)})\subseteq Stab_{\operatorname{\alpha}_i}(R) $. Let $F\in {Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}$. Then $F$ is induced by $x+h(x)$ for some $h\in {N_{R}}$ by Proposition \[firststab\]. Again by Proposition \[firststab\], $\phi(F)=\psi(F)=[x+h(x)]_{R[\operatorname{\alpha}_i]}\in Stab_{\operatorname{\alpha}_i}(R)$. This completes the proof. \[2ndisoapli\] Let $R$ be a finite ring. Then $[R[x]:{N'_{R}}]=[R[x]:{N_{R}}][{N_{R}}:{N'_{R}}]$. It is clear that $R[x]$ is an additive abelian group with subgroups ${N_{R}},{N'_{R}}$ such that $ {N'_{R}}<{N_{R}}$. Then by the Second Isomorphism Theorem of groups, $$\raise2pt\hbox{ $(R[x]\big/ {N'_{R}})$ } \big/ \lower2pt\hbox{ ( ${N_{R}}\big/ {N'_{R}}$) } \cong (R[x]\big/ {N_{R}}),$$ from which the result follows. \[Counttheor\] Let $R$ be a finite ring. Then $$|{\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=[{N_{R}}:{N'_{R}}] |{\mathcal{F}(R)}|^{k+1}.$$ Moreover, when $R$ is a direct sum of local rings which are not fields, we have $$|{\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=|{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(R)}|\cdot|{\mathcal{F}(R)}|^{k+1}.$$ By Proposition \[firstcountfor\], $|{\mathcal{F}({{R}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=[R[x]:{N'_{R}}] [R[x]:{N_{R}}]^{k}$.\ Then it is enough to notice that $|{\mathcal{F}(R)}|=[R[x]:{N_{R}}]$ and to substitute the formula for $[R[x]:{N'_{R}}]$ from Lemma \[2ndisoapli\] to get the required equation.\ The second part follows from the above and Theorem \[12\] (\[fourthstab\]). We turn now to find explicitly the number of polynomial functions on ${{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$. To do this we need the following lemma, and we leave its proof to the reader. \[NidFid\] Let $\mathbb{F}_q$ be a finite field. Then: 1. ${N_{\mathbb{F}_q}}=(x^q-x)\mathbb{F}_q[x]$; 2. ${N'_{\mathbb{F}_q}}=(x^q-x)^2\mathbb{F}_q[x]$. \[contfld\] Let $\mathbb{F}_q$ be a finite field. Then $|{\mathcal{F}({{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=q^{(k+2)q}$. Set\ $\mathcal{A}=\{f: f=f_0+\sum\limits_{i=1}^{k}f_i\operatorname{\alpha}_i, \text{ where } f_0,f_i \in \mathbb{F}_q[x], \deg f_0<2q, \deg f_i<q \text{ for }i=1,\ldots,k\} $. Then it is clear that $|\mathcal{A}|=q^{(k+2)q}$. To complete the proof we show that if $f,g \in \mathcal{A}$ with $f\ne g$, then $[f]\ne [g]$, or equivalently if $[f]=[g]$, then $f=g$. Suppose that $f,g \in \mathcal{A}$, where $f=f_0+\sum\limits_{i=1}^{k}f_i\operatorname{\alpha}_i$ and $g=g_0+\sum\limits_{i=1}^{k}g_i\operatorname{\alpha}_i$, are such that $[f]=[g]$. Thus $[f-g]$ is the zero function on ${{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$.
Hence $f-g= (f_0-g_0)+\sum\limits_{i=1}^{k}(f_i-g_i)\operatorname{\alpha}_i$ is a null polynomial on ${{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]}$, whence $f_0-g_0\in {N'_{\mathbb{F}_q}}$ and $f_i-g_i\in {N_{\mathbb{F}_q}}$ for $i=1,\ldots,k$ by Theorem \[4\]. Then by Lemma \[NidFid\], we have $(x^q-x)^2\mid (f_0-g_0)$ and $(x^q-x)\mid (f_i-g_i)$ for $i=1,\ldots,k$. Therefore $f_0-g_0=0$ and $f_i-g_i=0$ for $i=1,\ldots,k$ since $\deg (f_0-g_0)<2q$ and $\deg (f_i-g_i)<q$ for $i=1,\ldots,k$. Thus $f=g$. The following corollary shows that, when $R=\mathbb{F}_q$, $[{N_{\mathbb{F}_q}}:{N'_{\mathbb{F}_q}}]\ne |{Stab_{\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k}(\mathbb{F}_q)}|$ (see Theorem \[1301\] and Theorem \[12\]). Let $\mathbb{F}_q$ be a finite field. Then $[{N_{\mathbb{F}_q}}:{N'_{\mathbb{F}_q}}]=[{N_{\mathbb{F}_q}(<2q)}:{N'_{\mathbb{F}_q}(<2q)}]=q^q$. By Theorem \[Counttheor\], $|{\mathcal{F}({{\mathbb{F}_q}[\operatorname{\alpha}_1,\ldots,\operatorname{\alpha}_k]})}|=[{N_{\mathbb{F}_q}}:{N'_{\mathbb{F}_q}}] |{\mathcal{F}(\mathbb{F}_q)}|^{k+1}$, whence $[{N_{\mathbb{F}_q}}:{N'_{\mathbb{F}_q}}]=q^q$ by Proposition \[contfld\]. On the other hand, Lemma \[NidFid\] gives $|{N_{\mathbb{F}_q}(<2q)}|=q^q$ and\ $|{N'_{\mathbb{F}_q}(<2q)}|=1$. Thus $[{N_{\mathbb{F}_q}(<2q)}:{N'_{\mathbb{F}_q}(<2q)}]=\frac{|{N_{\mathbb{F}_q}(<2q)}|}{ |{N'_{\mathbb{F}_q}(<2q)}|}=q^q$. [**Acknowledgment.**]{} This work was supported by the Austrian Science Fund FWF: P 27816-N26 and P 30934-N35. The author would like to thank Kwok Chi Chim and Paolo Leontti for their valuable suggestions and comments on earlier versions of the manuscript.
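The counting statements above (Proposition \[contfld\] and the corollary) can be checked by brute force for the smallest case $q=2$, $k=1$. The following Python sketch is an illustration added here, not part of the original argument; it enumerates all polynomials of degree $<2q$ with coefficients in $\mathbb{F}_2[\operatorname{\alpha}_1]$ and counts the distinct functions they induce, which should agree with $q^{(k+2)q}=64$.

```python
# Brute-force check of |F(F_q[alpha_1])| = q^{(1+2)q} for q = 2 (dual numbers over F_2).
# The degree bound 2q - 1 mirrors the set A used in the proof of the proposition above.
from itertools import product

q = 2
ring = list(product(range(q), repeat=2))        # elements a + b*alpha encoded as pairs (a, b)

def add(x, y):
    return ((x[0] + y[0]) % q, (x[1] + y[1]) % q)

def mul(x, y):
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha, since alpha^2 = 0
    return ((x[0] * y[0]) % q, (x[0] * y[1] + x[1] * y[0]) % q)

def evaluate(coeffs, x):
    # Horner evaluation of a polynomial with coefficients in F_q[alpha]
    acc = (0, 0)
    for c in reversed(coeffs):
        acc = add(mul(acc, x), c)
    return acc

functions = set()
for coeffs in product(ring, repeat=2 * q):      # all polynomials of degree < 2q
    functions.add(tuple(evaluate(coeffs, x) for x in ring))

print(len(functions))                           # expected: q**((1 + 2) * q) = 64
```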
--- abstract: 'We numerically study the possibilities for improved large-mode area endlessly single-mode photonic crystal fibers for use in high-power delivery applications. By carefully choosing the optimal hole diameter we find that a triangular core formed by three missing neighboring air holes considerably improves the mode area and loss properties compared to the case with a core formed by one missing air hole. In a realized fiber we demonstrate an enhancement of the mode area by $\sim 30\,\%$ without a corresponding increase in the attenuation.' author: - 'N. A. Mortensen,$^*$ M. D. Nielsen,$^{*\dagger}$ J. R. Folkenberg,$^*$ A. Petersson,$^*$ and H. R. Simonsen$^*$' title: 'Improved large-mode area endlessly single-mode photonic crystal fibers' --- Applications requiring high-power delivery call for single-mode large-mode area (LMA) optical fibers. While standard-fiber technology has difficulties in meeting these requirements, the new class[@knight1996] of all-silica photonic crystal fibers (PCF) has great potential due to their endlessly single-mode properties [@birks1997] combined with (in principle) arbitrarily large effective areas.[@knight1998el] For recent reviews we refer to Refs. . The cladding structure of these PCFs consists of a triangular array of air holes of diameter $d$ and pitch $\Lambda$ corresponding to an air-filling fraction $f=\pi/(2\sqrt{3})(d/\Lambda)^2$. The presence of the air holes results in a strongly wavelength dependent effective index $n_{\rm eff}$ of the cladding, and in the short and long wavelength limits we have $$\label{limits} \lim_{\lambda \ll \Lambda}n_{\rm eff}=n_{\rm si}\;,\; \lim_{\lambda \gg \Lambda}n_{\rm eff}=f\times n_{\rm air}+ (1-f)\times n_{\rm si}\equiv \bar{n}.$$ The numerical results in the intermediate regime can be reasonably fitted by [*e.g.*]{} $$\label{fit} n_{\rm eff}\approx \bar{n}+(n_{\rm si}-\bar{n})\cosh^{-2}(\alpha \lambda/\Lambda)$$ with $\alpha$ of order unity and only weakly dependent on $d/\Lambda$, see Fig. \[cladding\]. It is these unusual dispersion properties of the cladding which facilitate design of large-mode area endlessly single-mode optical fibers.[@birks1997; @knight1998el] In order to confine the light to a core region of high index, a defect in the triangular air-hole array is introduced. Normally this is done by leaving out one of the air holes. In the stack-and-pull approach [@knight1996] one of the capillaries is replaced by a silica rod, see left insert of Fig. \[attenuation\]. If desired, the index of the defect can be raised by various kinds of doping, and a depressed-index core has also been studied recently.[@mangan2001] The single-rod PCF can in principle be kept endlessly single-mode no matter how large the core diameter.[@knight1998el] However, when scaling up the fibre structure, the mode area is increased at the cost of an increased susceptibility to longitudinal modulations [@mortensen_ptl] such as [*e.g.*]{} micro-bending [@nielsen2002] and macro-bending [@sorensen2001] induced scattering loss. The reason is that in order to increase the mode area the pitch $\Lambda$ is scaled to a large value, but this also implies that $\lambda/\Lambda \ll 1$ and in this limit the core index approaches the cladding index, see Eq. (\[limits\]). Fig. \[cladding\] suggests that the decreasing index step may be compensated by increasing the air hole diameter, which can be done up to $d/\Lambda\sim 0.45$, which is the upper limit for endlessly single-mode operation. For a discussion of this particular number see [*e.g.*]{} Refs. .
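To make these scalings concrete, the fitted cladding index of Eq. (\[fit\]) is easy to evaluate numerically. The short Python sketch below is an illustration added here; $\alpha=1$ stands in for the "order unity" constant of the fit and $n_{\rm si}=1.444$ is the silica value used for the numerics later in the text, so the numbers are indicative only. It shows how $n_{\rm eff}$ interpolates between the two limits of Eq. (\[limits\]) and how the index step at fixed $\lambda/\Lambda$ grows with $d/\Lambda$.

```python
# Evaluate the fitted cladding index n_eff(lambda/Lambda) of the triangular air-hole
# cladding, Eq. (fit), for two relative hole sizes d/Lambda.
# Assumed for illustration: alpha = 1.0, n_si = 1.444, n_air = 1.0.
import math

n_si, n_air, alpha = 1.444, 1.0, 1.0

def filling_fraction(d_over_pitch):
    # air-filling fraction f = pi/(2*sqrt(3)) * (d/Lambda)^2
    return math.pi / (2.0 * math.sqrt(3.0)) * d_over_pitch**2

def n_eff(lam_over_pitch, d_over_pitch):
    f = filling_fraction(d_over_pitch)
    n_bar = f * n_air + (1.0 - f) * n_si           # long-wavelength limit
    sech = 1.0 / math.cosh(alpha * lam_over_pitch)
    return n_bar + (n_si - n_bar) * sech**2        # Eq. (fit)

for d_over_pitch in (0.45, 0.25):
    for x in (0.01, 0.1, 1.0, 10.0):
        print(f"d/L={d_over_pitch:4.2f}  lambda/L={x:5.2f}  n_eff={n_eff(x, d_over_pitch):.6f}")
```

At small $\lambda/\Lambda$ the output approaches $n_{\rm si}$, and at large $\lambda/\Lambda$ it approaches $\bar{n}$, as required by Eq. (\[limits\]).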
For LMA PCFs working in the UV and visible regimes this sets an upper limit on the mode areas that can be realized with a reasonable loss, and many applications call for an improved LMA PCF design. The inclusion of more than a single solid rod in the stacking has been used to form multiple-core [@mangan2000] and highly birefringent PCFs.[@hansen2001] In this work we demonstrate how inclusion of more neighboring solid rods can be used for improved LMA endlessly single-mode PCFs. Intuitively this may not seem to be a promising direction since a reduced value of $d/\Lambda$ is needed to keep the PCF endlessly single-mode. For the birefringent case with two neighboring rods[@hansen2001] the limit is $d/\Lambda \sim 0.30$ and for a triangular core formed by three neighboring rods (see right insert of Fig. \[attenuation\]) we have found $d/\Lambda \sim 0.25$ as the upper limit for endlessly single-mode operation. However, for a given desired mode area this decrease in $d/\Lambda$ is compensated for by a corresponding smaller value of $\Lambda$. In fact, the edge-to-edge separation $\Lambda-d$ of the holes turns out to be the important length scale rather than the pitch $\Lambda$ itself. In introducing multiple rods, an important question about possible birefringence arises. The structure with a single rod has a six-fold symmetry and though group theory clearly excludes any intrinsic birefringence [@white2001] there has been quite some debate based on numerical studies, see [*e.g.*]{} Ref.  and references therein. More generally, group theory predicts that for $m$-fold rotational symmetry and $m>2$ a mode with a preferred direction is one of a pair, see Ref.  and references therein. PCFs with a triangular core formed by three neighboring rods have a $3$-fold symmetry and thus no intrinsic birefringence. The non-birefringent property is also confirmed numerically using a fully-vectorial plane-wave method [@johnson2001] and any small numerical birefringence originates from a numerical grid with symmetry different from the dielectric structure being studied. In order to compare the single-rod and three-rod PCFs we study two quantities: [*i)*]{} the mode-field diameter $\rm MFD$ and [*ii)*]{} the coupling length $\zeta$ to the cladding. We relate the $\rm MFD$ to the effective area[@mortensen2002a] $$\label{Aeff} A_{\rm eff}= \Big[\int d{\boldsymbol r}_\perp I({\boldsymbol r}_\perp)\Big]^2\Big[\int d{\boldsymbol r}_\perp I^2({\boldsymbol r}_\perp)\Big]^{-1},$$ by $A_{\rm eff}=\pi ({\rm MFD}/2)^2$. Here, $I({\boldsymbol r}_\perp)$ is the transverse intensity distribution of the fundamental mode. For a Gaussian mode of width $w$, Eq. (\[Aeff\]) gives ${\rm MFD}=2w$ and the intensity distribution in the types of PCF studied in this work can be considered close to Gaussian[@mortensen2002a; @mortensen2002b] as we also confirm experimentally. The coupling length (beat length) $$\label{zc} \zeta=2\pi/(\beta-\beta_{\rm cl})$$ between the fundamental mode and the cladding (radiation field) can be used in formulating a low-loss criterion.[@love] The additional competing length scales consist of the wavelength and the length scale $L_n$ (or a set $\{L_n\}$ of length scales) for nonuniformity along the fiber, and loss will be significant when $$\label{high-loss} \lambda \lesssim L_n \lesssim \zeta$$ and otherwise loss can be expected to be small. Thus, the shorter the coupling length, the lower the susceptibility to longitudinal modulations.
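As a side check of Eq. (\[Aeff\]) (a sketch we add here, independent of the plane-wave mode solver used for the results below), a discretized Gaussian intensity profile reproduces ${\rm MFD}=2w$:

```python
# Check that A_eff = [int I]^2 / [int I^2] gives MFD = 2w for a Gaussian mode,
# I(r) ~ exp(-2 r^2 / w^2), on a simple Cartesian grid (illustrative values only).
import numpy as np

w = 5.0                                    # Gaussian mode-field radius (arbitrary units)
x = np.linspace(-40.0, 40.0, 801)
X, Y = np.meshgrid(x, x)
I = np.exp(-2.0 * (X**2 + Y**2) / w**2)    # transverse intensity of the fundamental mode

A_eff = I.sum()**2 / (I**2).sum() * (x[1] - x[0])**2
MFD = 2.0 * np.sqrt(A_eff / np.pi)         # from A_eff = pi (MFD/2)^2

print(MFD / w)                             # expected to be close to 2
```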
We emphasize that this criterion does not quantify loss, but it gives a correct parametric dependence of loss for various loss mechanisms. For PCFs the relevance of this criterion was recently confirmed experimentally in the case of macro-bending [@mortensen_ptl] and micro-bending [@nielsen2002] induced nonuniformities and also in a study of PCFs with structural long-period gratings.[@kakarantzas2002] In Fig. \[numerics\] we compare the single-rod and three-rod PCFs with $d/\Lambda = 0.45$ and $0.25$, respectively. All numerical results are based on a fully-vectorial solution of Maxwell’s equations in a plane-wave basis[@johnson2001] and for silica we have for simplicity used $n_{\rm si}=1.444$. Panel (a) shows the coupling length versus wavelength. The normalization by the edge-to-edge separation $\Lambda-d$ of the air holes makes the two curves coincide at short wavelengths ($\lambda \ll \Lambda-d$), which clearly demonstrates that $\Lambda-d$ is the length scale of the fiber structure which determines the susceptibility to longitudinal modulations. Panel (b) shows the mode-field diameter as a function of wavelength and, as seen, the three-rod PCF provides a larger $\rm MFD$ compared to the single-rod PCF for fixed $\lambda/\Lambda$. Panel (c) combines the results of panels (a) and (b) in a plot of mode-field diameter versus coupling length. At ${\rm MFD}\sim 7\times \lambda$ there is a clear crossover, and for ${\rm MFD}\gg \lambda$ the three-rod PCF is thus seen to be less susceptible to longitudinal modulations compared to the single-rod PCF. Fig. \[attenuation\] shows experimental results for the attenuation of both a single-rod PCF and a three-rod PCF with hole diameters ($d/\Lambda\simeq 0.45$ and $0.25$, respectively) close to the endlessly single-mode limits. The pitches are $\Lambda \simeq 10\,{\rm \mu m}$ and $\Lambda \simeq 6\,{\rm \mu m}$, respectively, so that core sizes are approximately the same. The two PCFs were fabricated with the aid of the stack-and-pull method under comparable conditions and both PCFs were found to be endlessly single-mode in a wavelength range of at least $400\,{\rm nm}$ to $1600\,{\rm nm}$. As seen, the two PCFs have similar spectral attenuation even though the mode area of the three-rod PCF is enhanced by $\sim 30\,\%$ compared to the single-rod PCF. This demonstrates the improvement offered by the three-rod PCF. In conclusion we have found that a triangular core formed by three missing neighboring air holes considerably improves the mode area and/or loss properties compared to the case with a core formed by one missing air hole. This new improved large-mode area endlessly single-mode PCF is important for high-power delivery applications, and in a realized fiber we have been able to demonstrate an enhancement of the mode area by $\sim 30\,\%$ without a corresponding change in the loss level. We acknowledge A. Bjarklev (Research Center COM, Technical University of Denmark) and J. Broeng (Crystal Fibre A/S) for useful discussions. M. D. N. is financially supported by the Danish Academy of Technical Sciences. [10]{} J. C. Knight, T. A. Birks, P. S. J. Russell, and D. M. Atkin, Opt. Lett. [ **21**]{}, 1547 (1996). T. A. Birks, J. C. Knight, and P. S. J. Russell, Opt. Lett. [**22**]{}, 961 (1997). J. C. Knight, T. A. Birks, R. F. Cregan, P. S. J. Russell, and J.-P. [De Sandro]{}, Electron. Lett. [**34**]{}, 1347 (1998). J. C. Knight and P. S. J. Russell, Science [**296**]{}, 276 (2002). T. A. Birks, J. C. Knight, B. J. Mangan, and P. S. J. Russell, IEICE Trans. Electron.
[**E84-C**]{}, 585 (2001). B. J. Mangan, J. Arriaga, T. A. Birks, J. C. Knight, and P. S. J. Russell, Opt. Lett. [**26**]{}, 1469 (2001). N. A. Mortensen and J. R. Folkenberg, preprint. M. D. Nielsen, G. Vienne, J. R. Folkenberg, and A. Bjarklev, Opt. Lett. in press (2002). T. S[ø]{}rensen, J. Broeng, A. Bjarklev, E. Knudsen, and S. E. B. Libori, Electron. Lett. [**37**]{}, 287 (2001). J. Broeng, D. Mogilevstev, S. E. Barkou, and A. Bjarklev, Opt. Fiber Technol. [**5**]{}, 305 (1999). N. A. Mortensen, Opt. Express [**10**]{}, 341 (2002). B. T. Kuhlmey, R. C. McPhedran, and C. M. [de Sterke]{}, Opt. Lett. [**27**]{}, 1684 (2002). B. J. Mangan, J. C. Knight, T. A. Birks, and P. S. J. Russell, Electron. Lett. [**36**]{}, 1358 (2000). T. P. Hansen, J. Broeng, S. E. B. Libori, E. Knudsen, A. Bjarklev, J. R. Jensen, and H. Simonsen, IEEE Photon. Tech. Lett. [**13**]{}, 588 (2001). T. P. White, R. C. McPhedran, C. M. [de Sterke]{}, L. C. Botton, and M. J. Steel, Opt. Lett. [**26**]{}, 1660 (2001). M. Koshiba and K. Saitoh, IEEE Photon. Tech. Lett. [**13**]{}, 1313 (2001). S. G. Johnson and J. D. Joannopoulos, Opt. Express [**8**]{}, 173 (2001). N. A. Mortensen and J. R. Folkenberg, Opt. Express [**10**]{}, 475 (2002). J. D. Love, IEE Proc.-J [**136**]{}, 225 (1989). G. Kakarantzas, T. A. Birks, and P. S. J. Russell, Opt. Lett. [**27**]{}, 1013 (2002).
ECM-UB-03/12\ IFUM-756-FT\ KUL-TF-2003/07\ hep-th/0304210 \ \ [**Abstract**]{} [We show that the supertube configurations exist in all supersymmetric type IIA backgrounds which are purely geometrical and which have at least one flat direction. In other words, they exist in any spacetime of the form $\CR^{1,1} \times \CM_8$, with M any of the usual reduced holonomy manifolds. These generalised supertubes preserve 1/4 of the supersymmetries preserved by the choice of the manifold M. We also support this picture with the construction of their corresponding family of IIA supergravity backgrounds preserving from 1/4 to 1/32 of the total supercharges.]{} ------------------------------------------------------------------------ [E-mail: `[email protected], [email protected], [email protected], [email protected] ` ]{} Introduction and Results ======================== The fact that D-branes couple to background fluxes can allow, under the appropriate circumstances, a collection of D-branes to expand into another brane of higher dimension. The inverse process is also observed, where higher-dimensional D-branes collapse into lower-dimensional ones or even into fundamental strings. Non-supersymmetric examples of such configurations are the expansion of Born-Infeld strings [@Emparan:1998rt], the dielectric branes [@Myers:1999ps] and the matrix string theory calculations of [@Schiappa:2000dv; @Silva:2001ja]. More recent supersymmetric cases have also been constructed, like the giant gravitons in AdS spaces [@McGreevy:2000cw; @Grisaru:2000zn]. All these configurations share the handicap that the perturbative quantisation of string theory is still not possible due to the presence of Ramond-Ramond fluxes. Supertubes [@Mateos:2001qs] are very different from the former cases because they are expanded configurations that live in a completely flat space, with all other background fields turned off. They correspond to a bound state of D0-branes and fundamental strings that expands into a D2 with tubular shape due to the addition of angular momentum. Remarkably, they also preserve 1/4 of the 32 supersymmetries of the Minkowski vacuum, unlike some other similar (but non-supersymmetric) configurations that were constructed in [@Harmark:2000na]. Furthermore, the simplicity of the background allowed for a perturbative string-theoretical study of the supertube, beyond the probe or the supergravity approximations [@Mateos:2001pi]. The purpose of this paper is to show that it is possible to generalise the construction of the original supertube configurations to other purely geometrical backgrounds, while still preserving some supersymmetry. This generalisation consists in choosing a type IIA background of the form $\CR^{1,1} \times \CM_8$, with $\CM_{8}$ a curved manifold. Since we do not turn on any other supergravity field, supersymmetry restricts M to be one of the usual manifolds with reduced holonomy [@Berger]: $$\begin{array}{l|c} \textrm{M} & \textrm{Fraction of the 32 supersymmetries preserved}\\ \hline \CR^4 \times CY_2 & 1/2\\ CY_2 \times CY_2 & 1/4\\ \CR^2 \times CY_3 & 1/4\\ CY_4 & 1/8\\ \CR \times G_2 & 1/8\\ \mathrm{Spin}(7) & 1/8\\ \mathrm{Sp}(2) & 3/8 \end{array}$$ We will show that it is possible to supersymmetrically embed the supertube in these backgrounds in such a way that its time and longitudinal directions fill the $\CR^{1,1}$ factor, while its compact direction can describe an arbitrary curve $\CC$ in M. The problem will be analysed in two different descriptions.
In the first one, we will take a worldvolume approach by considering a D2 probe in these backgrounds with the mentioned embedding and with an electromagnetic worldvolume gauge field corresponding to the threshold bound state of D0/F1. With the knowledge of some general properties of the Killing spinors of the M manifolds, it will be shown, using its $\k$-symmetry, that the probe bosonic effective action is supersymmetric. As in flat space supertubes, the only charges and projections involved correspond to the D0-branes and the fundamental strings, while the D2 ones do not appear anywhere. This is why, in all cases, the preserved amount of supersymmetry will be 1/4 of the fraction already preserved by the choice of background. Note that, in particular, this allows for configurations preserving a single supercharge, as is shown in one of the examples of this work.[^1] In the other example that we present, we exploit the fact that the curve $\CC$ can now wind around the non-trivial cycles that the M manifolds have, and construct a supertube with cylindrical shape $\CR \times S^1$, with the $S^1$ wrapping one of the non-trivial $S^2$ cycles of an ALE space. In the absence of D0 and F1 charges, $q_0$ and $q_s$ respectively, the $S^1$ collapses to a point at one of the poles of the $S^2$. As $|q_0 q_s|$ is increased, the $S^1$ slides down towards the equator. Unlike in flat space, here $|q_0 q_s|$ is bounded from above and it acquires its maximum value precisely when the $S^1$ is a maximal circle inside the $S^2$. The second approach will be a spacetime description, where the back-reaction of the system will be taken into account, and we will be able to describe the configuration by means of a supersymmetric solution of type IIA supergravity, the low-energy effective theory of the closed string sector. Such solutions can be obtained from the original ones, found in [@Emparan:2001ux], by simply replacing the 8-dimensional Euclidean space that appears in the metric by M. We will show that this change is consistent with the supergravity equations of motion as long as the various functions and one-forms that were harmonic in $\euc^8$ are now harmonic in M. It will also be shown that the supergravity solution preserves the same amount of supersymmetry that was found by the probe analysis. Physically, the construction of these generalised supertubes is possible because the cancellation of the gravitational attraction by the angular momentum is a [*local*]{} phenomenon. By choosing the worldvolume electric field $E$ such that $E^2=1$, and an arbitrary non-zero magnetic field $B$, the Poynting vector automatically acquires the required value to prevent the collapse [*at every point of $\CC$*]{}. This remains true even after the space where $\CC$ lives is changed from $\euc^8$ to a curved M. This paper is organised as follows: in section \[ss:WVanal\] we analyse the system where the D2-supertube probes the $\CR^{1,1} \times \CM_8$ background, and prove that the effective worldvolume action for the D2 is supersymmetric using the $\k$-symmetry. In section \[ss:Hamilanal\] we perform the Hamiltonian analysis of the system. We show that the supersymmetric embeddings minimise the energy for given D0 and F1 charges, showing that gravity is locally compensated by the Poynting vector. In section \[ss:examples\] we give two examples in order to clarify and illustrate these constructions. Section \[ss:SGanal\] is devoted to the supergravity analysis of the generalised supertubes.
We prove there the supersymmetry from a spacetime point of view. Conclusions are given in section \[ss:concl\]. Probe worldvolume analysis {#ss:WVanal} ========================== In this section we will prove that the curved direction of a supertube can live in any of the usual manifolds with reduced holonomy, while still preserving some amount of supersymmetry. The analysis will be based on the $\k$-symmetry properties of the bosonic worldvolume action, and its relation with the supersymmetry transformation of the background fields. The setup --------- As announced, we consider a general IIA background of the form $\CR^{1,1} \times \CM_8$, with $\CM_{8}$ a possibly curved manifold. In the absence of fluxes, the requirement that the background preserves some supersymmetry[^2] implies that $\CM_8$ must admit covariantly constant spinors and, therefore, a holonomy group smaller than ${\mathop{\rm SO}}(8)$. The classification of such manifolds is well-known [@Berger], and the only possible choices for M are shown in the table of the introduction. Let us write the target space metric as $$\rmd s^2_{IIA}=-(\rmd x^0)^2+(\rmd x^1)^2+ \ei \ej \delta_{\underline{ij}}\,, \espai \ei=\rmd y^j e_j{}^{\underline{i}} \,, \espai i,j = 2,3,...,9\,,$$ where $\ei$ is the vielbein of a Ricci-flat metric on M. Underlined indices refer to tangent space objects. We will embed the supertube in such a way that its time and longitudinal directions live in $\CR^{1,1}$ while its curved direction describes an arbitrary curve $\CC$ in $\CM_8$. By naming the D2 worldvolume coordinates $\{\s^0,\s^1,\s^2\}$, such an embedding is determined by $$\label{embed} x^0=\s^0, \espai\espai x^1=\s^1, \espai \espai y^i=y^i(\s^2)\,,$$ where $y^i$ are arbitrary functions of $\s^2$. The assignment of $\sigma ^0$ and $\sigma ^1$, i.e. the fact that $y^i$ is independent of $\sigma ^0$ and $\sigma ^1$, is a choice of parametrization.[^3] Let us remark that, in general, the curve $\CC$ will be contractible in M. As a consequence, due to gravitational self-attraction, the compact direction of the D2 will naturally tend to collapse to a point. Following [@Mateos:2001qs], we will stabilise the D2 by turning on an electromagnetic flux in its worldvolume $$\label{fluxes} F_{\it 2}=E\, \rmd \sigma^0 \wedge \rmd \sigma^1 + B\, \rmd \sigma^1 \wedge \rmd \sigma^2\,,$$ which will provide the necessary centrifugal force to compensate the gravitational attraction. In this paper we will restrict to static configurations. The effective action of the D2 is the DBI action (the Wess-Zumino term vanishes in our purely geometrical backgrounds), $$\label{delta} S= \int_{\CR^{1,1} \times C} \rmd \sigma^0 \rmd \sigma^1 \rmd \sigma^2 {\cal L}_{DBI}\,, \espai \espai {\cal L}_{DBI}=-\Delta\equiv -\sqrt{-\det[g+F]}\,,$$ where $g$ is the induced metric determined by the embedding $x^M(\sigma ^\mu )$, and $F_{\mu \nu }$ is the electromagnetic field strength. $M$ denotes the spacetime components $0,1,\ldots ,9$, and $\mu $ labels the worldvolume coordinates $\mu =0,1,2$. The $\k$-symmetry imposes restrictions on the background supersymmetry transformation when only worldvolume bosonic configurations are considered. Basically we get $\Gamma _\kappa \epsilon =\epsilon$ (see e.g. [@Bergshoeff:1997kr]), where $\epsilon$ is the background Killing spinor and $\Gamma_\kappa$ (see e.g. [@Bergshoeff:1997tu]) is a matrix that squares to 1: $$\rmd^3\sigma \; \Gamma _{\kappa }=\Delta ^{-1}\left[ \gamma _{\it 3}+\gamma _{\it 1} \Gamma _*\wedge F_{\it 2}\right]. 
\label{Gammakappa}$$ Here $\G_*$ is the chirality matrix in ten dimensions (in our conventions it squares to one), and the other definitions are $$\begin{aligned} \gamma _{\it 3} & = & \rmd\sigma ^0\wedge \rmd\sigma ^1\wedge \rmd\sigma ^2 \,\partial _0x^M \partial _1x^N\partial _2x^P e_M{}^{\underline{M}}e_N{}^{\underline{N}} e_P{}^{\underline{P}}\Gamma_{\underline{MNP}}\,, \nonumber\\ \gamma _{\it 1} & = & \rmd \sigma ^\mu \partial _\mu x^Me_M{}^{\underline{M}}\Gamma_{\underline{M}} \,, \label{hulp}\end{aligned}$$ where $e_M{}^{\underline{M}}$ are the vielbeins of the target space and $\Gamma_{\underline{M}}$ are the flat gamma matrices. We are using Greek letters for worldvolume indices and Latin characters for the target space. We are now ready to see under which circumstances the configuration (\[embed\]), (\[fluxes\]) can be supersymmetric. This is determined by the condition for $\k$-symmetry, which becomes $$\label{ksym} [\G_{\0u\1u}\g_{2} + E \g_2 \G_* +B \G_{\underline{0}}\Gamma _*-\Delta]\e=0\,,$$ where $$\Delta^2=B^2+y'^{\underline{i}}y'^{\underline{i}}(1-E^2)\,, \qquad y'^{\underline{i}}=y'^i e_i{}^{\underline{i}}\,,\qquad \gamma _2=y'^{\underline{i}}\Gamma _{\underline{i}}\,, \espai\espai y'^i :=\partial_2y^i\,. \label{gamma2}$$ The solutions of (\[ksym\]) for $\e$ are the Killing spinors of the background, determining the remaining supersymmetry. Proof of worldvolume supersymmetry {#ss:proof} ---------------------------------- In this section we shall prove that the previous configurations always preserve $1/4$ of the remaining background supersymmetries preserved by the choice of M. We will show that the usual supertube projections are necessary and sufficient in all cases except when we do not require that the curve $\CC$ is arbitrary and it lies completely within the flat directions that M may have. Therefore we first discuss the arbitrary case, and after that, we deal with the special situation. [**Arbitrary Curve:**]{} If we demand that the configuration is supersymmetric for any arbitrary curve in M, then all the terms in (\[ksym\]) that contain the derivatives $y'^i(\s^2)$ must vanish independently of those that do not contain them. The vanishing of the first ones (those containing $\gamma _2$) gives $$\label{F1} \G_{\0u \1u }\Gamma _*\e=-E \e \espai \Longrightarrow \espai E^2=1\,, \espai \textrm{and} \espai \G_{\0u \1u}\G_{*}\e=-\sign(E) \epsilon\,,$$ which signals the presence of fundamental strings in the longitudinal direction of the tube. Now, when $E^2=1$, then $\Delta=|B|$, and the vanishing of the terms independent of $y'^i(\s^2)$ in (\[ksym\]) gives $$\label{D0} \G_{\0u}\Gamma _*\e=\sign(B) \epsilon\,,$$ which signals the presence of D0-branes dissolved in the worldvolume of the supertube. Since both projections, (\[F1\]) and (\[D0\]), commute, the configuration will preserve $1/4$ of the background supersymmetries [*as long as they also commute with all the projections imposed by the background itself.*]{} It is easy to prove that this will always be the case. Since the target space is of the form $\CR^{1,1} \times \CM_8$, the only nontrivial conditions that its Killing spinors have to fulfil are $$\label{constant} \nabla_i \e=\left(\partial_i + {1\over 4} w_i{}^{\underline{jk}} \G_{\underline{jk}} \right)\e=0\,,$$ with all indices only on M (which in our ordering, means $2 \leq i \leq 9$).
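Before turning to the integrability condition, the quarter-supersymmetry counting invoked above, two commuting, traceless projections that square to one, each imposed as an eigenvalue condition on $\e$, can be illustrated with a toy linear-algebra example. The sketch below is our own illustration on a four-dimensional stand-in space; it is not the actual ten-dimensional Clifford algebra of the text, but the mechanism (each compatible projection halves the surviving spinors) is the same.

```python
# Toy illustration: two commuting, traceless operators squaring to one, each imposed
# as P eps = eps, leave 1/4 of the original "spinor" space invariant.
# (A stand-in model, not the 10d gamma-matrix algebra used in the paper.)
import numpy as np

s3 = np.diag([1.0, -1.0])
I2 = np.eye(2)

P1 = np.kron(s3, I2)      # squares to one, traceless
P2 = np.kron(I2, s3)      # squares to one, traceless, commutes with P1

assert np.allclose(P1 @ P1, np.eye(4)) and np.allclose(P2 @ P2, np.eye(4))
assert np.allclose(P1 @ P2, P2 @ P1)

# Projector onto the simultaneous +1 eigenspace of P1 and P2
proj = (np.eye(4) + P1) @ (np.eye(4) + P2) / 4.0
print(int(round(np.trace(proj))), "of", 4)   # prints "1 of 4": a quarter survives
```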
If one prefers, the integrability condition can be written as $$\label{integrability} [\nabla_i,\nabla_j]\e = {1 \over 4} R_{ij}{}^{\underline{kl}} \G_{\underline{kl}}\e =0\,.$$ In either form, all the conditions on the background spinors involve only a sum of terms with two (or no) gamma matrices of M. It is then clear that such projections will always commute with the F1 and the D0 ones, since they do not involve any gamma matrix of M. To complete the proof, one must take into account further possible problems that could be caused by the fact that the projections considered so far are applied to background spinors which are not necessarily constant. To see that this does not change the results, note that (\[constant\]) implies that all the dependence of $\e$ on the M coordinates $y^i$ must be of the form $$\e=M(y)\e_0\,,$$ with $\e_0$ a constant spinor, and $M(y^i)$ a matrix that involves only products of an even number of gamma matrices on M (it may well happen that $M(y)={\mathord{\!\usebox{\uuunit}}}$). Now, any projection on $\e$ can be translated to a projection on $\e_0$ since $$P\e=\e\,, \espai \mbox{with} \espai P^2={\mathord{\!\usebox{\uuunit}}}\,,\qquad \trace P=0\,, \espai \Longrightarrow$$ $$\tilde{P}\e_0 =\e_0\,, \espai \mbox{with} \espai \tilde{P}\equiv M^{-1}(y) P M(y)\,, \quad \tilde{P}^2={\mathord{\!\usebox{\uuunit}}}\,,\quad \trace \tilde{P}=0\,.$$ The only subtle point here is that, if some of the $\e_0$ have to survive, the product $ M^{-1}(y) P M(y)$ must be a constant matrix[^4]. But this is always the case for all the projections related to the presence of M, since we know that such spaces preserve some Killing spinors. Finally, it is also the case for the F1 and D0 projections, since they commute with any even number of gamma matrices on M. The conclusion is that, for an arbitrary curve in M to preserve supersymmetry, it is necessary and sufficient to impose the F1 and D0 projections. In all cases, it will preserve $1/4$ of the background supersymmetry. We will illustrate this with particular examples in section \[ss:examples\]. [**Non-Arbitrary Curve:**]{} If we now give up the restriction that the curve must be arbitrary, we can still show that the F1 and D0 projections are necessary and sufficient, except for those cases in which the curve lies entirely in the flat directions that M may have. Of course, the former discussion shows that such projections are always sufficient, so we will now study in which cases they are necessary as well. In order to proceed, we need to prove an intermediate result. [*Lemma: If the velocity of the curve does not point in a flat direction of M, then the background spinor always satisfies at least one projection like $$\label{esquematic} P\e=Q\e \,, \qquad \mbox{such that}\qquad [P,\gamma_2]=0 \,,\qquad \{Q,\gamma_2\}=0\,,$$ with $P$ and $Q$ a non-vanishing sum of terms involving only an even number of gamma matrices, and $Q$ invertible*]{}. To prove this, we move to a point of the curve that lies in a curved direction of M, i.e. a point where not all components of $R_{ij}{}^{\underline{kl}}$ are zero. We perform a rotation in the tangent space such that the velocity of the curve points only in one of the curved directions, [*e.g.*]{} $$y'^{\underline{9}}\neq 0 \,, \espai \espai y'^{\underline{a}}=0 \,, \espai \espai a=2,...,8 \,,\qquad R_{ij}{}^{a9}\neq 0\,, \label{9direction}$$ for at least one choice of $i$, $j$ and $a$, and where we use the definitions of (\[gamma2\]).
With this choice, $\g_2$ becomes simply $\gamma_2=y'^{\underline{9}} \G_{\underline{9}}$. Therefore, at least one of the equations in (\[integrability\]) can be split in $$\left( R_{ij}{}^{\underline{ab}}\G_{\underline{ab}} + R_{ij}{}^{\underline{a9}}\G_{\underline{a9}} \right) \e=0 \,,$$ with the definitions $$\label{define} P= R_{ij}{}^{\underline{ab}}\G_{\underline{ab}} \,, \espai\espai Q= -R_{ij}{}^{\underline{a9}}\G_{\underline{a9}} \,.$$ The assumption (\[9direction\]) implies that $Q$ is nonzero and invertible, as the square of $Q$ is a negative definite multiple of the unit matrix. This implies that also $P$ is non-zero since, otherwise, $\e$ would have to be zero and this is against the fact that all the listed M manifolds admit covariantly constant spinors. It is now immediate to check that $\gamma_2$ commutes with $P$ while it anticommutes with $Q$, which completes the proof. ------------------------------------------------------------------------ We can now apply this lemma and rewrite one of the conditions in (\[integrability\]) as an equation of the kind (\[esquematic\]). We then multiply the $\k$-symmetry condition (\[ksym\]) by $P-Q$. Clearly only the first two terms survive, and we can write $$0=\left[ \Gamma _{\underline{01}}-E \Gamma _*\right](P-Q)\gamma _2 \epsilon =- 2 \left[ \Gamma _{\underline{01}}-E \Gamma _*\right]\gamma _2 Q\epsilon=-2\gamma _2 Q\left[ \Gamma _{\underline{01}}+E \Gamma _*\right]\epsilon\,. \label{Pkappa}$$ Since $(\gamma_2)^2=y'_iy'^i$ cannot be zero if the curve is not degenerate, we just have to multiply with $Q^{-1}\gamma _2$ to find again (\[F1\]). Plugging this back into (\[ksym\]) gives the remaining D0 condition (\[D0\]). Summarising, the usual supertube conditions are always necessary and sufficient except for those cases where the curve is not required to be arbitrary and lives entirely in flat space; then, they are just sufficient. For example, one could choose $\CC$ to be a straight line in one of the $\CR$ factors that some of the M have, and take a constant $B$, which would correspond to a planar D2-brane preserving $1/2$ of the background supersymmetry. Hamiltonian analysis {#ss:Hamilanal} ==================== We showed that in order for the supertube configurations (\[embed\]), (\[fluxes\]) to be supersymmetric we needed $E^2=1$, but we found no restriction on the magnetic field $B(\s^1,\s^2)$. We shall now check that some conditions must hold in order to solve the equations of motion of the Maxwell fields. We will go through the Hamiltonian analysis which will enable us to show that these supertubes saturate a BPS bound which, in turn, implies the second-order Lagrange equations of the submanifold determined by the constraints. We will restrict to time-independent configurations, which we have checked to be compatible with the full equations of motion. The Lagrangian is then given by (\[delta\]) $${\cal L}=-\Delta =-\sqrt{B^2+R^2(1-E^2)}\,, \label{Lgeneral}$$ where we have defined $R^2=y'^{\underline{i}}y'_{\underline{i}}$, and $R>0$. To obtain the Hamiltonian we first need the displacement field, $$\Pi= \frac{\partial {\cal L}}{\partial E}\,=\frac{E R^2}{\sqrt{B^2+(1-E^2) R^2}}\,, \label{valuePi}$$ which can be inverted to give $$E=\frac{\Pi}{R} \sqrt{\frac{B^2+R^2}{R^2+\Pi ^2}}\,,\qquad \Delta=R \sqrt{\frac{B^2+R^2}{ R^2+\Pi ^2}}\,. 
\label{E2}$$ The Lagrange equations for $A_0$ and $A_2$ give two constraints $$\partial_1 \Pi=0 \,, \espai\espai \partial _1\left(\frac{B}{ R}\sqrt{\frac{ R^2+\Pi ^2}{B^2+ R^2}}\right)=0\,, \label{gausslaw}$$ the first one being the usual Gauss law. Together, they imply that $\partial _1B=0$, i.e., the magnetic field can only depend on $\sigma^2$. Finally, the equations for $A_1$ and $y^{\underline{i}}$ give, respectively, $$\partial _2\left(\frac{B}{R}\sqrt{\frac{ R^2+\Pi ^2}{B^2+ R^2}}\right)=0 \,, \qquad \partial _2\left[2y'^{\underline{i}} \frac{ R^4-\Pi ^2B^2} { R^2\sqrt{( R^2+\Pi ^2)( R^2+B^2)}} \right]=0 \,. \label{eom}$$ The Hamiltonian density is given by $$\mathcal{H}=E\Pi- \mathcal{L}= \frac{1}{R}\sqrt{(R^2+\Pi ^2)(B^2+R^2)}\,. \label{valueH}$$ In order to obtain a BPS bound [@Gauntlett:1998ss], we rewrite the square of the Hamiltonian density as $$\mathcal{H}^2= \left(\Pi \pm B\right)^2+ \left(\frac{\Pi B}{ R}\mp R\right)^2\,, \label{reH}$$ from which we obtain the local inequality $$\label{inequality} \mathcal{H}\geq |\Pi\pm B|\,,$$ which can be saturated only if $$\label{satura} R^2=y'^{\underline{i}}y'_{\underline{i}}=\pm \Pi B \espai \Leftrightarrow \espai E^2=1 \,.$$ It can be checked that the configurations saturating this bound satisfy the remaining equations of motion (\[eom\]). Note that the Poynting vector generated by the electromagnetic field is always tangent to the curve $\CC$ and its modulus is precisely $|\Pi B|$. We can then use exactly the same arguments as in [@Mateos:2001pi]. Equation (\[satura\]) tells us that, once we set $E^2=1$, and regardless of the value of $B(\s^2)$, the Poynting vector is automatically adjusted to provide the required centripetal force that compensates the gravitational attraction at every point of $\CC$. The only difference with respect to the original supertubes in flat space is that the curvature of the background is taken into account in (\[satura\]), through the explicit dependence of $R^2$ on the metric of M. Finally, the integrated version of the BPS bound (\[inequality\]) is $$\tau \geq |q_0 \pm q_s|\,, \espai \textrm{with} \espai \tau\equiv \int_{\CC} {\rmd \sigma^2} \, \mathcal{H} \,,\qquad q_0 \equiv \int_{\CC} {\rmd \sigma^2}\, B \,, \qquad q_s \equiv \int_{\CC} {\rmd \sigma^2} \, \Pi \,.$$ and the normalisation $0\leq \s^2 < 1$. Similarly, the integrated bound is saturated when $$\label{length} L(\CC)=\int_{\CC} {\rmd \sigma^2} \sqrt{ g_{22}} = \int_{\CC} {\rmd \sigma^2} \sqrt{y'^{\underline{i}}y'_{\underline{i}}} = \int_{\CC} {\rmd \sigma^2} \sqrt{|\Pi B|} =\sqrt{|q_s\,q_0|}\,, \label{integratedbps}$$ where $L(\CC)$ is precisely the proper length of the curve $\CC$, and the last equality is only valid when both $\Pi$ and $B$ are constant, as will be the case in our examples. Examples {#ss:examples} ======== After having discussed the general construction of supertubes in reduced holonomy manifolds, we shall now present two examples in order to illustrate some of their physical features. Supertubes in ALE spaces: 4 supercharges ---------------------------------------- Let us choose M$=\CR^4 \times CY_2$, i.e. the full model being $\mathbb{R}^{1,5}\times CY_2$. 
We take the $CY_2$ to be an ALE space provided with a multi-Eguchi–Hanson metric [@Eguchi:1978xp] $$\begin{aligned} &&\rmd s^2_{(4)}=V^{-1}(\yvec) \rmd\yvec \cdot \rmd\yvec + V(\yvec)\left(\rmd\psi + \vec{A}\cdot \rmd\yvec\right)^2 \,, \nonumber\\ &&V^{-1}(\yvec)=\sum_{r=1}^{N} {Q\over |\yvec-\yvec_r|} \,, \espai\espai\espai \vec{\nabla}\times \vec{A}=\vec{\nabla}V^{-1}(\yvec) \,, \end{aligned}$$ with $\yvec \in \CR^3$. These metrics describe a ${\mathop{\rm {}U}}(1)$ fibration over $\CR^3$, the circles being parametrized by $\psi\in [0,1]$. They present $N$ removable bolt singularities at the points $\yvec_r$, where the ${\mathop{\rm {}U}}(1)$ fibres contract to a point. Therefore, a segment connecting any two such points, together with the fibre, forms (topologically) an $S^2$. For simplicity, we will just consider the two-monopole case in which, without loss of generality, the monopoles can be placed at $\yvec=\vec{0}$ and $\yvec=(0,0,b)$. Therefore, the complete IIA background is $$\label{ALEIIA} \rmd s^2_{IIA}=-(\rmd x^0)^2+(\rmd x^1)^2+...+(\rmd x^5)^2+ \rmd s^2_{(4)} \,,$$ with $$V^{-1}(\yvec)={ Q \over |\yvec|} +{Q\over |\yvec-(0,0,b)|} \,.$$ Let us embed the D2 supertube in a way such that its longitudinal direction lies in $\CR^5$ while its compact one wraps an $S^1$ inside the $S^2$ that connects the two monopoles. More explicitly, $$\label{firstcase} X^0=\sigma^0\,, \espai X^1=\sigma^1\,,\espai \psi=\sigma^2\,, \qquad y^3=\textrm{const.} \,, \espai y^1=y^2=0 \,.$$ ![image](ALE.eps){width="10.5cm" height="4.5cm"} Since any $S^1$ is contractible inside an $S^2$, the curved part would tend to collapse to the nearest pole, located at $y^3=0$ or $y^3=b$. As in flat space, we therefore need to turn on a worldvolume flux as in (\[fluxes\]), with $E$ and $B$ constant for the moment. According to our general discussion, this configuration should preserve $1/4$ of the 16 background supercharges already preserved by the ALE space. In this case, the $\k$-symmetry equation is simply $$\label{ALEkappa} \left(\G_{\underline{01\psi}}+E \G_{\underline{\psi}}\Gamma _*+B\G_{\underline{0}}\Gamma _* - \Delta \right)\e=0 \,,$$ where $\e$ are the Killing spinors of the background (\[ALEIIA\]). They can easily be computed and shown to be just constant spinors subject to the projection $$\label{PALE} \G_{\underline{y^1y^2y^3\psi}}\e=-\e \,.$$ Then, the $\kappa $-symmetry equation can be solved by requiring (\[F1\]) and (\[D0\]), which involve the usual D0/F1 projections of the supertube. Since they commute with (\[PALE\]), the configuration preserves a total of $1/8$ of the 32 supercharges. It is interesting to see what the consequences of having $E^2=1$ are in this case. Note that, from our general Hamiltonian analysis, we saw that, for fixed D0 and F1 charges, the energy is minimised for $E^2=1$. When applied to the present configuration, (\[length\]) reads $$\label{selects} V(y^3)=|q_0 q_s| \,,$$ which determines $y^3$, and therefore selects the position of the $S^1$ inside the $S^2$ that is compatible with supersymmetry. Since $V(y^3)$ is invariant under $y^3 \leftrightarrow (b-y^3)$, the solutions always come in mirror pairs with respect to the equator of the $S^2$.
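A quick numerical illustration of (\[selects\]) makes the mirror pairs explicit. The Python sketch below is our addition, with arbitrary illustrative values of $Q$, $b$ and $|q_0 q_s|$ rather than values taken from the text; it simply locates the roots of $V(y^3)=|q_0 q_s|$ for the two-centre potential.

```python
# Solve V(y3) = |q0*qs| for the two-centre potential V^{-1} = Q/y3 + Q/(b - y3),
# i.e. V(y3) = y3*(b - y3)/(Q*b), on the segment 0 < y3 < b (illustrative values only).
import numpy as np

Q, b = 1.0, 10.0
q0qs = 1.5                                 # must not exceed b/(4Q) = 2.5 here

def V(y3):
    return 1.0 / (Q / y3 + Q / (b - y3))   # = y3*(b - y3)/(Q*b)

y = np.linspace(1e-3, b - 1e-3, 200001)
roots = y[np.where(np.diff(np.sign(V(y) - q0qs)))[0]]   # sign changes of V - |q0 qs|

print(roots)                 # two solutions, mirror images about the equator y3 = b/2
print(roots.sum(), "=", b)   # y3_+ + y3_- = b, up to grid resolution
print(V(b / 2.0))            # maximum value b/(4Q), the upper bound on |q0 qs|
```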
The explicit solutions are indeed $$y^3_{\pm}={b\over 2} \left(1 \pm \sqrt{1-{4Q\over b}|q_0q_s|}\right) \;.$$ Note that a solution exists as long as the product of the charges is bounded from above: $$\label{solu} |q_0 q_s| \leq {b\over 4Q} \;.$$ The point is that this will always happen due to the fact that, contrary to the flat space case, the $S^1$ cannot grow arbitrarily within the $S^2$. As a consequence, the angular momentum acquires its maximum value when the $S^1$ is precisely at the equator. To see this more explicitly, setting $E^2=1$ and computing $q_0$ and $q_s$ for our configuration gives $$|q_0 q_s| = V(y^3) \leq V(y^3 \rightarrow {b\over 2}) ={b\over 4Q} \;,$$ which guarantees that (\[solu\]) is always satisfied. Finally, note that we could equally well have chosen, for instance, a more sophisticated embedding in which $y^3$ was not constant. This would be the analogue of taking a non-constant radius in the original flat space supertube. Again, by the general analysis of the previous sections, this would require the Poynting vector to vary in order to locally compensate for the gravitational attraction everywhere, and no further supersymmetry would be broken. Supertubes in $CY_4$ spaces: 1 supercharge ------------------------------------------ The purpose of the next example is to show how one can reach, in a concrete example, a configuration with a single surviving supercharge. One could take any of the $1/8$-preserving backgrounds of the M Table. Many metrics for these spaces have recently been found in the context of supergravity duals of non-maximally supersymmetric field theories. Let us take the $CY_4$ that was found in [@Gomis:2001vg; @Cvetic:2000db] since the Killing spinors have already been calculated explicitly [@Brugues:2002ff]. This space is a $C^2$ bundle over $S^2\times S^2$, and the metric is $$\begin{aligned} \rmd s^2_{(CY_4)}&=&A(r)\left[ \rmd \theta _1^2+\sin^2\theta _1\rmd \phi _1^2+ \rmd \theta _2^2+\sin^2\theta _2\rmd \phi _2^2 \right] +U^{-1}\rmd r^2+ {r^2 \over 4}\left( \rmd\theta^2+ \sin^2\theta \rmd\phi^2\right)+ \nonumber\\ && +{1\over 4}U r^2\left( \rmd\psi+\cos\theta \rmd\phi +\cos\theta _1\rmd \phi _1+\cos\theta _2\rmd \phi _2\right) ^2\,, \label{metric11}\end{aligned}$$ where $$A(r)={3\over 2}(r^2+l^2)\,, \qquad U(r)={3 r^4 + 8 l^2 r^2 + 6 l^4 \over {6(r^2+l^2)^2}} \,,\qquad C(r)=\frac14 U\,r^2\,. \label{defA}$$
Imposing $\k$-symmetry: $$\label{CY4kappa} \left(\G_{\underline{015}}+E \G_{\underline{5}}\Gamma _*+B\G_{\underline{0}}\Gamma _* - \Delta \right)\e=0 \,.$$ Now, the first projection of (\[projCY4\]) happens to anticommute with the $\gamma_2$ defined in (\[gamma2\]) $$\gamma _2=y'^ie_i{}^{\underline{i}}\Gamma _{\underline{i}}\, =\, A^{1\over 2}(r)\, \sin{\theta_1} \, \Gamma_{\underline{5}} \,.$$ In other words, this just illustrates a particular case of (\[esquematic\]) for which the direction $\underline{5}$ plays the role of $\underline{9}$, and for which $P=\Gamma _{\underline{34}}$ and $Q=\Gamma _{\underline{25}}$. We can now follow the steps in section \[ss:proof\] and multiply (\[CY4kappa\]) by $P-Q$. This yields again the usual supertube conditions (\[F1\]) and (\[D0\]). Since all the gamma matrices appearing in (\[projCY4\]), (\[F1\]) and (\[D0\]) commute, square to one and are traceless, the configuration preserves only one of the 32 supercharges of the theory. Of course, this is not in contradiction with the fact that the minimal spinors in 2+1 dimensions have 2 components, since the field theory on the worldvolume of the D2 is not Lorentz invariant because of the non-vanishing electromagnetic field. Supergravity analysis {#ss:SGanal} ===================== In this section we construct the supergravity family of solutions that correspond to all the configurations studied before. We start our work with a generalisation of the ansatz used in [@Emparan:2001ux; @Mateos:2001pi] to find the original solutions. Our analysis is performed in eleven dimensional supergravity, mainly because its field content is much simpler than in IIA supergravity. Once the eleven-dimensional solution is found, we reduce back to ten dimensions, obtaining our generalised supertube configurations. The first step in finding the solutions is to look for supergravity configurations with the isometries and supersymmetries suggested by the worldvolume analysis of the previous sections. Then, we will turn to the supergravity field equations to find the constraints that the functions of our ansatz have to satisfy in order that our configurations correspond to minima of the eleventh dimensional action. Finally, we choose the correct behaviour for these functions so that they correctly describe the supertubes once the reduction to ten dimensions is carried on. Supersymmetry analysis ---------------------- Our starting point is the supertube ansatz of [@Emparan:2001ux; @Mateos:2001pi] $$\begin{aligned} \rmd s^2_{\it 10} &=& - U^{-1} V^{-1/2} \, ( \rmd t - A)^2 + U^{-1} V^{1/2} \, \rmd x^2 + V^{1/2} \, \delta_{ij}\rmd y^i\rmd y^j \,, \nn B_{\it 2} &=& - U^{-1} \, (\rmd t - A) \wedge \rmd x + \rmd t\wedge \rmd x\,, \nn C_{\it 1} &=& - V^{-1} \, (\rmd t - A) + \rmd t \,, \nn C_{\it 3} &=& - U^{-1} \rmd t\wedge \rmd x \wedge A \,, \nn e^\phi &=& U^{-1/2} V^{3/4} \,, \label{ds10} \end{aligned}$$ where the Euclidean space ($\mathbb{E}_8$) coordinates are labelled by $y^i$, with $i,j,\cdots = (2,\ldots,9)$, $V=1+K$, $A=A_i\,\rmd y^i$ and $B_{\it 2}$ and $C_{\it p}$ are respectively, the Neveu-Schwarz and Ramond-Ramond potentials. $V,U,A_i$ depend only on the $\mathbb{E}_8$ coordinates. 
To up-lift this ansatz, we use the normal Kaluza-Klein form of the eleven dimensional metric and three-form, $$\begin{aligned} \rmd s^2_{\it 11} &=& e^{-2\phi/3}\rmd s^2_{\it 10}+e^{4\phi/3}(\rmd z+C_{\it 1})^2 \,,\nonumber\\ N_{\it 3 } &=& C_{\it 3}+B_{\it 2} \wedge \rmd z \,,\label{uplift}\end{aligned}$$ where $N_{\it 3 }$ is the eleventh dimensional three-form. The convention for curved indices is $M=(\mu;i)=(t,z,x\,;\,2,3,...9)$ and for flat ones $A=(\alpha;a)=(\underline{t},\underline{z},\underline{x}\,;\,\underline{2},\underline{3}...,\underline{9})$. The explicit form of the eleven-dimensional metric is given by, $$\begin{aligned} &&\rmd s^2_{\it 11} = U^{-2/3}\left[ -\rmd t^2 + \rmd z^2 + K(\rmd t+\rmd z)^2 + 2(\rmd t+\rmd z)A +\rmd x^2\right]+U^{1/3}\rmd s^2_{\it 8}\,, \nonumber \\ &&F_{\it 4} = \rmd t \wedge \rmd(U^{-1})\wedge \rmd x\wedge \rmd z - (\rmd t+\rmd z)\wedge \rmd x \wedge \rmd(U^{-1}A)\;, \label{ansatz}\end{aligned}$$ where $F_{\it 4} = \rmd N_{\it 3}$. This background is a solution of the equations of motion in eleven dimensions derived from the action $$S_{11d}=\int \,\left[ R *1 \, - \, {1\over 2} F_{\it 4} \wedge * F_{\it 4} \, + \, {1 \over 3} F_{\it 4} \wedge F_{\it 4} \wedge N_{\it 3} \right] \,,$$ when the two functions $K$ and $U$, as well as the one-form $A_{\it 1}$, are harmonic in $\mathbb{E}_8$, i.e., $$(\rmd *_8\rmd)U=0 \,, \zespai (\rmd *_8\rmd)K=0 \,, \zespai (\rmd *_8\rmd)A_{\it 1}=0 \,, \label{harmonic}$$ where $*_8$ is the Hodge dual with respect to the Euclidean flat metric on $\mathbb{E}^8$. It describes a background with an M2 brane along the directions $\{t,z,x\}$, together with a wave traveling along $z$, and angular momentum along $\euc^8$ provided by $A_{\it 1}$. Next, we generalise the ansatz above by replacing $\euc^8$ by one of the eight dimensional M manifolds of the table, and by allowing $K$, $U$ and $A_{\it 1}$ to have an arbitrary dependence on the M coordinates $y^i$. We therefore replace the previously flat metric on $\euc^8$ by a reduced holonomy metric on M, with vielbeins $\te^a$. Hence, in (\[ansatz\]), we replace $$U^{1/3} \delta_{ij} \rmd y^i \rmd y^j \espai \longrightarrow \espai U^{1/3} \delta_{ab}\te^a \te^b \,.\label{E8byM8}$$ We use a null base of the cotangent space, defined by $$\begin{aligned} \label{secondbase} && e^+=-U^{-2/3}(\rmd t+\rmd z) \,, \zespai e^-=\undos (\rmd t-\rmd z) - {K\over 2} (\rmd t+\rmd z) - A \,,\nonumber\\ && e^x=U^{-1/3} \rmd x \,, \zespai e^a=U^{1/6} \te^a \,.\end{aligned}$$ This brings the metric and $F_{\it 4}$ into the form $$\label{flat} \rmd s^2_{\it 11}=2 e^+ e^- + e^x e^x + \delta_{ab}e^a e^b \,, \zespai F_{\it 4}=-U^{-1} \, \rmd U\we e^x \we e^+ \we e^- \, -\, \rmd A \we e^x \we e^+ \,.$$ As customary, the torsion-less condition can be used to determine the spin connection 1-form $\omega _{AB}$. 
In our null base, the only non-zero components are $$\begin{aligned} &&\omega_{+-}=-{U_a \over 3U} e^a \,, \zespai \omega_{+a}=\frac{1}{2} U^{1/2}\tilde K_a e^+-{U_a \over 3U}e^- -\frac12a_{ab} e^b\,, \zespai \omega_{-a}=-{U_a \over 3U} e^+ \,, \nonumber\\ &&\omega_{xa}=-{U_a \over 3U} e^x \,,\qquad \omega_{ab}={U_b \over 6U}e^a -{U^a \over 6U}e^b +\tilde \omega _{ab} +\frac12a_{ab} e^+ \,, \label{spin2}\end{aligned}$$ where we have defined various tensor quantities through the relations $$\rmd U = U_a e^a \,, \zespai \rmd K=\tilde K_a \tilde e^a \,, \zespai \rmd A={{\textstyle\frac{1}{2}}}a_{ab} e^a \we e^b\,,$$ and $\tilde{\omega}^{bc}$ are the spin connection one-forms corresponding to $\tilde{e}^a$, i.e. $\rmd \tilde e^a+\tilde \omega ^a{}_b\tilde e^b=0$. We now want to see under which circumstances our backgrounds preserve some supersymmetry. Since we are in a bosonic background, i.e. all the fermions are set to zero, we just need to ensure that the variation of the gravitino vanishes when evaluated on our configurations. In other words, supersymmetry is preserved if there exist nonzero background spinors $\e$ such that[^5] $$\label{gravitino} \left(\p_A+{1\over 4}\omega_A{}^{BC}\G_{BC} -{1\over 288}\G_A{}^{BCDE}F_{BCDE}+{1\over 36} F_{ABCD}\G^{BCD}\right)\e=0 \,.$$ We will try an ansatz such that the spinor depends only on the coordinates on M. It is straightforward to write down the eleven equations (\[gravitino\]) for each value of $A=\{+,-,x,a\}$. The equation for $A=x$ is $${U_a \over 6U} \Gamma _a\left(\Gamma_{x}-\Gamma_{+-}\right) \epsilon -{a_{ab}\over 12}\Gamma_{ab}\Gamma _-\epsilon =0\,. \label{SUSYx}$$ Assuming that $a_{ab}$ and $U_a$ are arbitrary and independent we find $$\label{proj1} \G_- \, \e=0 \,, \zespai \textrm{and} \zespai \G_{x}\e=-\e \,.$$ Using these projections, it is straightforward algebraic work to see that the equations for $A=+$ and $A=-$ are automatically satisfied. Finally, the equations for $A=a$ simplify to $$\label{proj2} \nabla_{i}\e \, \equiv \, \left(\p_{i}+{1\over 4} \tilde{\omega}_i{}^{bc}\G_{bc}\right)\e =0 \,.$$ By the same arguments as in the previous sections, the projections (\[proj1\]) preserve 1/4 of the 32 real supercharges. On the other hand, (\[proj2\]) is just the statement that M must admit covariantly constant spinors. Depending on the choice of M, the whole 11d background will preserve the expected total number of supersymmetries that we indicated in the table written in the introduction. To reduce back to IIA supergravity, we first go to another flat basis $$e^+= -U^{-1/3}V^{-1/2}\left( e^0+e^z\right)\,,\qquad e^-={{\textstyle\frac{1}{2}}} U^{1/3}V^{1/2}\left( e^0-e^z\right)\,, \label{e+-tz}$$ which implies that $$\Gamma _-=U^{-1/3}V^{-1/2}\left( \Gamma _0-\Gamma _z\right). \label{Gamma-zt}$$ We reduce along $z$, i.e. replace $\Gamma _z$ by $\Gamma _*$. The projections (\[proj1\]) become the usual D0/F1 projections, with the fundamental strings along the $x$-axis. $$\G_0\Gamma _*\e=-\e \,, \zespai \textrm{and} \zespai \epsilon =-\Gamma _x\epsilon =\G_{x0}\Gamma _*\e \,.$$ Equations of motion ------------------- Now that we have proved that the correct supersymmetry is preserved (matching the worldvolume analysis), we proceed to determine the equations that $U$, $K$ and $A_{\it 1}$ have to satisfy in order that our configurations solve the field equations of eleven-dimensional supergravity. 
Instead of checking each of the equations of motion, we use the analysis of [@Gauntlett:2002fz] that is based on the integrability condition derived from the supersymmetry variation of the gravitino (\[gravitino\]). The result of this analysis is that when at least one supersymmetry is preserved, and the Killing vector $\mathcal{K}_\mu \equiv \bar \epsilon \Gamma_\mu \epsilon$ is null, all of the second order equations of motion are automatically satisfied, except for 1. The equation of motion for $F_{\it 4}$, 2. The Einstein equation $E_{++}=T_{++}$, where $E_{++}$ and $T_{++}$ are the Einstein and stress-energy tensors along the components $++$ in a base where $\mathcal{K}_\mu=\delta _\mu ^+\mathcal{K}_+$. Let us explain why the above statement is correct. The integrability conditions give no information about the field equation for the matter content, therefore the equation of motion for $F_{\it 4}$ has to be verified by hand. Also, in most cases all of the Einstein equations are automatically implied by the existence of a non-trivial solution of (\[gravitino\]). With (\[proj1\]) and in the base where the metric takes the form (\[flat\]), and thus $\Gamma _+\Gamma _-+\Gamma _-\Gamma _+=2$, we have $$\mathcal{K}_\mu =\bar \epsilon \Gamma_\mu \epsilon={{\textstyle\frac{1}{2}}}\bar \epsilon \Gamma_\mu \Gamma _-\Gamma _+\epsilon\,. \label{Kmu}$$ This vanishes for all $\mu $ except $\mu =+$, implying that our configuration falls into the classification of those backgrounds that admit a null Killing spinor and as a consequence the associated Einstein equations escape the analysis. We thus have to check the two items mentioned above. Let us start with the equation for $F_{\it 4}$, which is $$\label{fourform} \rmd *F_{\it 4} + F_{\it 4} \we F_{\it 4} = 0 \,.$$ Using the fact that the Hodge dual of a p-form with respect to $e^a$ is related to the one with respect to $\te^a$ by $$*_8 C_p = U^{(4-p)/3} \t*_8 C_{p} \,,$$ where $$C_p=\frac{1}{p!}C_{a_1\ldots a_p}\tilde e^{a_1}\wedge \ldots \wedge \tilde e^{a_p}\ \rightarrow \tilde *_8 C_{p}= \frac{1}{p!(8-p)!}C_{a_1\ldots a_p} \varepsilon ^{a_1\ldots a_8}\tilde e^{a_{p+1}}\wedge \ldots \wedge \tilde e^{a_8}\,,$$ it is easy to see that (\[fourform\]) becomes $$0= (\rmd\t*_8\rmd) U + (\rmd t+\rmd z)\we (\rmd\t*_8\rmd)A \,.$$ This implies that $U$ and $A_{\it 1}$ must be harmonic with respect to the metric of M, i.e., $$(\rmd\t*_8\rmd) U = 0 \,, \zespai (\rmd\t*_8\rmd)A_{\it 1}=0 \,. \label{harmonicUA}$$ Finally, using (\[flat\]) and (\[spin2\]), one can explicitly compute the $\{++\}$ components of the Einstein and stress-energy tensors, and obtain $$\begin{aligned} E_{++}&=&R_{++}=-{{\textstyle\frac{1}{2}}}U^{1/3}(\t*_8\rmd\t*_8\rmd)K + {{\textstyle\frac{1}{2}}} *_8\left(\rmd A\we *_8 \rmd A\right) \,,\nonumber\\ T_{++}&=&{{\textstyle\frac{1}{12}}}F_{+ABC}F_+{}^{ABC} = {{\textstyle\frac{1}{2}}} *_8\left(\rmd A\we *_8 \rmd A\right) \,,\end{aligned}$$ Therefore, the last non-trivial equation of motion tells us that also $K$ must be harmonic on M, $$(\rmd\t*_8\rmd)K=0 \,. \label{harmonicK}$$ Constructing the supertube -------------------------- In order to construct the supergravity solutions that properly describe supertubes in reduced holonomy manifolds, we reduce our eleven-dimensional background to a ten-dimensional background of type IIA supergravity, using (\[uplift\]) again. We obtain (\[ds10\]) with the replacement (\[E8byM8\]), and the constraints (\[harmonicUA\]) and (\[harmonicK\]). 
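As a simple illustration of what the harmonicity constraints (\[harmonicUA\]) and (\[harmonicK\]) amount to in the flat case M $=\euc^8$ (a sketch of ours; the actual supertube sources are supported on a curve rather than on a point), a rotationally invariant solution away from a point-like source is $U=1+q/r^6$, since $r^{2-d}$ is harmonic in $d$ Euclidean dimensions:

```python
import sympy as sp

# Hedged check: the flat 8d Laplacian of U = 1 + q/r^6 vanishes away from r = 0.
y = sp.symbols('y1:9', real=True)       # Cartesian coordinates on E^8
q = sp.Symbol('q', positive=True)
r = sp.sqrt(sum(yi**2 for yi in y))
U = 1 + q / r**6
print(sp.simplify(sum(sp.diff(U, yi, 2) for yi in y)))   # 0
```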
At this point we have to choose $U$, $K$ and $A_{\it 1}$ so that they describe a D2-brane with worldvolume $\CR^{1,1} \times \CC$, with $\CC$ an arbitrary curve in M. As it was done in [@Emparan:2001ux; @Mateos:2001pi], one should couple IIA supergravity to a source with support along $\CR^{1,1} \times \CC$, and solve the M Laplace equations (\[harmonicUA\]) and (\[harmonicK\]) with such a source term in the right hand sides. If this has to correspond to the picture of D0/F1 bound states expanded into a D2 by rotation, the boundary conditions of the Laplace equations must be such that the solution carries the right conserved charges. In the appropriate units, $$q_0=\int_{\p \CM_8} \t*_8 \rmd C_{\it 1} \,, \zespai q_s=\int_{\p \CM_8} \t*_8 \rmd B_{\it 2} \,, \zespai A_{\it 1} \stackrel{\p \CM_8}{\longrightarrow}L_{ij}y^j \rmd y^i \,.$$ Here, as in [@Emparan:2001ux; @Mateos:2001pi], $L_{ij}$ would have to match with the angular momentum carried by the electromagnetic field that we considered in the worldvolume approach. The Laplace problem in a general manifold can be very complicated and, in most cases, it cannot be solved in terms of ordinary functions. We will not intend to do so, but rather we will just claim that, once $U$, $K$ and $A_{\it 1}$ have been determined, they can be plugged back into (\[ds10\]), with (\[E8byM8\]), and the background will describe the configurations that we have been discussing in this paper. It will have the expected isometries, supersymmetries and conserved charges. Conclusions {#ss:concl} =========== We have shown that the expansion of the D0/F1 system into a D2 can happen supersymmetrically in all the backgrounds of the form $\CR^{1,1} \times \CM_8$, with M the manifolds of the table. We have shown this in the worldvolume as well as in the supergravity setting. By a Hamiltonian analysis, we connected the result to a BPS bound on charges that are also well defined in the curved background. We remark that our research is different from [@Grandi:2002gt], where it was shown that [*the supertube itself*]{}, after some T-dualities, can be described by a special Lorentzian-holonomy manifold in eleven dimensions. Acknowledgments. {#acknowledgments. .unnumbered} ================ We are grateful to Roberto Empar[á]{}n, David Mateos, Guillermo A. Silva, Joan Sim[ó]{}n and Paul Townsend for useful discussions. Work supported in part by the European Community’s Human Potential Programme under contract HPRN-CT-2000-00131 Quantum Spacetime, in which P. Silva is associated with Torino Universit[à]{}. This work is also supported by MCYT FPA, 2001-3598 and CIRIT GC 2001SGR-00065. T. Mateos is supported by the grant FI from the Generalitat de Catalunya. The work of A.V.P. is supported in part by the Federal Office for Scientific, Technical and Cultural Affairs through the Inter-university Attraction Pole P5/27. [10]{} R. Empar[á]{}n, *Born-Infeld strings tunneling to D-branes*, Phys. Lett. [**B423**]{} (1998) 71–78, [[hep-th/9711106]{}](http://www.arXiv.org/abs/hep-th/9711106) R. C. Myers, *Dielectric-branes*, JHEP [**12**]{} (1999) 022, [[hep-th/9910053]{}](http://www.arXiv.org/abs/hep-th/9910053) R. Schiappa, *Matrix strings in weakly curved background fields*, Nucl. Phys. [**B608**]{} (2001) 3–50, [[hep-th/0005145]{}](http://www.arXiv.org/abs/hep-th/0005145) P. J. Silva, *Matrix string theory and the Myers effect*, JHEP [**02**]{} (2002) 004, [[hep-th/0111121]{}](http://www.arXiv.org/abs/hep-th/0111121) J. McGreevy, L. Susskind and N. 
Toumbas, *Invasion of the giant gravitons from anti-de Sitter space*, JHEP [**06**]{} (2000) 008, [[hep-th/0003075]{}](http://www.arXiv.org/abs/hep-th/0003075) M. T. Grisaru, R. C. Myers and O. Tafjord, *SUSY and Goliath*, JHEP [ **08**]{} (2000) 040, [[hep-th/0008015]{}](http://www.arXiv.org/abs/hep-th/0008015) D. Mateos and P. K. Townsend, *Supertubes*, Phys. Rev. Lett. [**87**]{} (2001) 011602, [[hep-th/0103030]{}](http://www.arXiv.org/abs/hep-th/0103030) T. Harmark and K. G. Savvidy, *Ramond-Ramond field radiation from rotating ellipsoidal membranes*, Nucl. Phys. [**B585**]{} (2000) 567–588, [[hep-th/0002157]{}](http://www.arXiv.org/abs/hep-th/0002157) D. Mateos, S. Ng and P. K. Townsend, *Tachyons, supertubes and brane/anti-brane systems*, JHEP [**03**]{} (2002) 016, [[hep-th/0112054]{}](http://www.arXiv.org/abs/hep-th/0112054) M. Berger, *Sur les groupes d’holonomie homog[è]{}ne des vari[é]{}t[é]{}s [à]{} connexion affine et des vari[é]{}t[é]{}s riemanniennes*, Bull. Soc. Math. France [**83**]{} (1955) 279–330 R. Empar[á]{}n, D. Mateos and P. K. Townsend, *Supergravity supertubes*, JHEP [**07**]{} (2001) 011, [[hep-th/0106012]{}](http://www.arXiv.org/abs/hep-th/0106012) D. K. Park, S. Tamaryan and H. J. W. M[ü]{}ller-Kirsten, *General criterion for the existence of supertube and BIon in curved target space*, [[hep-th/0302145]{}](http://www.arXiv.org/abs/hep-th/0302145) J.-H. Cho and P. Oh, *Rotating supertubes*, [[hep-th/0302172]{}](http://www.arXiv.org/abs/hep-th/0302172) E. Bergshoeff, R. Kallosh, T. Ort[í]{}n and G. Papadopoulos, *$\kappa$-symmetry, supersymmetry and intersecting branes*, Nucl. Phys. [**B502**]{} (1997) 149–169, [[hep-th/9705040]{}](http://www.arXiv.org/abs/hep-th/9705040) E. Bergshoeff and P. K. Townsend, *Super D-branes*, Nucl. Phys. [**B490**]{} (1997) 145–162, [[hep-th/9611173]{}](http://www.arXiv.org/abs/hep-th/9611173) J. P. Gauntlett, J. Gomis and P. K. Townsend, *BPS bounds for worldvolume branes*, JHEP [**01**]{} (1998) 003, [[hep-th/9711205]{}](http://www.arXiv.org/abs/hep-th/9711205) T. Eguchi and A. J. Hanson, *Asymptotically flat selfdual solutions to Euclidean gravity*, Phys. Lett. [**B74**]{} (1978) 249 J. Gomis and T. Mateos, *D6 branes wrapping K[ä]{}hler four-cycles*, Phys. Lett. [**B524**]{} (2002) 170–176, [[hep-th/0108080]{}](http://www.arXiv.org/abs/hep-th/0108080) M. Cvetič, G. W. Gibbons, H. L[ü]{} and C. N. Pope, *Ricci-flat metrics, harmonic forms and brane resolutions*, Commun. Math. Phys. [**232**]{} (2003) 457–500, [[hep-th/0012011]{}](http://www.arXiv.org/abs/hep-th/0012011) J. Brugues, J. Gomis, T. Mateos and T. Ramirez, *Supergravity duals of noncommutative wrapped D6 branes and supersymmetry without supersymmetry*, JHEP [**10**]{} (2002) 016, [[hep-th/0207091]{}](http://www.arXiv.org/abs/hep-th/0207091) P. Candelas, *Lectures on complex manifolds*, in *Superstrings ’87*, proceedings of the Trieste spring school, World Scientific, Eds. L. Alvarez-Gaum[é]{} et al., 1-88. J. P. Gauntlett and S. Pakis, *The geometry of $D = 11$ Killing spinors*, [[hep-th/0212008]{}](http://www.arXiv.org/abs/hep-th/0212008) N. E. Grandi and A. R. Lugo, *Supertubes and special holonomy*, Phys. Lett. [**B553**]{} (2003) 293–300, [[hep-th/0212159]{}](http://www.arXiv.org/abs/hep-th/0212159) [^1]: This is not in contradiction with the fact that the minimal spinors in 2+1 dimensions have 2 independent components since, because of the non-vanishing electromagnetic field, the theory on the worldvolume of the D2 is not Lorentz invariant. 
[^2]: In [@Park:2003zn], a first attempt to construct supertubes in curved spaces was performed. Their configurations are not supersymmetric because the backgrounds already destroy all supersymmetries. [^3]: In this sense, the apparently rotating supertubes considered in [@Cho:2003jd] are indeed equivalent, through a worldvolume reparametrisation, to the ordinary supertubes in flat space. [^4]: Note that it is not necessary that $P$ commutes with $M(y)$. [^5]: For the components of $p$-forms we use the notations of [@Candelas:1987is].
--- abstract: 'A systematic study of fractional revival at two sites in $XX$ quantum spin chains is presented and analytic models with this phenomenon are exhibited. The generic models have two essential parameters and a revival time that does not depend on the length of the chain. They are obtained by combining two basic ways of realizing fractional revival in a spin chain, each bringing one parameter. The first proceeds through isospectral deformations of spin chains with perfect state transfer. The second arises from the recurrence coefficients of the para-Krawtchouk polynomials with a bi-lattice orthogonality grid. It corresponds to an analytic model previously identified that can possess perfect state transfer in addition to fractional revival.' address: - 'Centre de recherches mathématiques, Université de Montréal, Montréal (QC), Canada' - 'Centre de recherches mathématiques, Université de Montréal, Montréal (QC), Canada' - 'Donetsk Institute for Physics and Technology, Donetsk 83114, Ukraine' author: - 'Vincent X. Genest' - Luc Vinet - Alexei Zhedanov title: Quantum spin chains with fractional revival --- Introduction ============ Spin chains with engineered couplings have proved attractive for the purpose of designing devices to achieve quantum information tasks such as quantum state transfer or entanglement generation. One reason for the interest is that the internal dynamics of the chain takes care of the processes with a minimum of external intervention required. In this perspective, a desired feature of such chains is that they exhibit quantum revival. For perfect state transfer (PST), one wishes to have, for example, a one-excitation state localized at the beginning of the chain evolve with unit probability, after some time $T$, into the state with the excitation localized at the end of the chain. Such a relocalization of the wave packet is what is referred to as revival. Fractional revival (FR) occurs when a number of smaller packets, seen as little clones of the original one, form at certain sites and show local periodicities. Its realization in a spin chain would also make it possible to transport information with high efficiency via one of the clones. Moreover, in a balanced case where there is equal probability of finding clones at the beginning and at the end of a chain, fractional revival would provide a mechanism to generate entangled states. It is hence of relevance to determine if FR is feasible in spin chains, and if so, in which models. Some studies have established the fact that this effect can indeed be observed in spin chains. The present paper offers a systematic analysis of the circumstances under which fractional revival at two sites in quantum spin chains of the $XX$ type will be possible. The main question is to determine the Hamiltonians $H$ that will have the fractional revival property. Like for PST, the conditions for FR are expressed through requirements on the one-excitation spectrum of $H$. One then deals with an inverse spectral problem that can be solved with the assistance of orthogonal polynomial theory. In the full revival or PST case where this analysis has been carried out in detail (see for instance), one sees that a necessary condition is that the couplings and Zeeman terms must form a three-diagonal matrix that is mirror-symmetric. Furthermore, models based on special orthogonal polynomials have been found where PST can be exhibited in an exact fashion. 
These analytic models are quite useful; in fact, the simplest one was employed to perform, with Nuclear Magnetic Resonance techniques, an experimental quantum simulation of mirror inversion in a spin chain. We shall here also provide analytic models with FR. The outline of the paper is as follows. In Section 2, we present the Hamiltonians $H$ for the class of $XX$ spin chains that will be considered. The reader is reminded that their one-excitation restrictions $J$ are given by Jacobi matrices and the orthogonal polynomials associated to the diagonalization of those $J$’s are described. In Section 3, we review the elements of the characterization of $XX$ spin chains with PST that will be essential in our fractional revival study. In particular, the necessary and sufficient conditions for PST in terms of the spectrum of $J$, the mirror symmetry and the properties of the associated orthogonal polynomials will be recalled. In Section 4, we undertake a systematic analysis of the conditions under which fractional revival can occur at two sites. Up to a global phase, the revived states will be parametrized in terms of two real amplitudes $\sin \theta$ and $\cos \theta$ and of a relative phase $\psi$; the case $\theta=0$ will correspond to the PST situation. It will be shown that in general, for FR to occur, the one-excitation spectrum of the Hamiltonian must take the form of a bi-lattice, i.e. the spectral points need to be the union of two uniform lattices translated one with respect to the other by a parameter $\delta$ depending on $\theta$ and $\psi$. Two special cases, namely $\psi=0$ and $\psi=\pi/2$, will be the object of the subsequent two sections. In the first case the spectrum condition is the PST one and in the second case, it is rather the mirror symmetry that is preserved. These two cases will provide the ingredients of a two-step process for obtaining the most general $XX$ chains with FR at two sites. In Section 5, it will be shown how spin chains with FR can be obtained from isospectral deformations of spin chains with PST. This case corresponds to the relative phase $\psi=0$. Analytic models with FR will thus be obtained from analytic models with PST. It will be observed that only the central parameters of the chain will need to be modified so as to make FR happen. Mirror-symmetry will be seen to be replaced by a more complicated inversion. The orthogonal polynomials corresponding to the deformed Jacobi matrix will be shown to have a simple expression in terms of the unperturbed orthogonal polynomials associated to the parent PST Hamiltonian. Their knowledge will be relevant to the construction, at the end of Section 6, of the FR Hamiltonian with arbitrary parameters $\theta$ and $\psi$. Section 6 will deal with the special case when the relative phase is $\pi/2$, that is when one amplitude is real and the other purely imaginary. It will be seen that $J$ is mirror symmetric in this situation with its spectrum a bi-lattice. As shall be explained, an exact solution to the corresponding inverse spectral problem turns out to be already available and is provided by the recurrence coefficients of the para-Krawtchouk polynomials introduced in . It is remarkable that these somewhat exotic functions naturally arise in the FR analysis. We shall demonstrate that for certain values of the parameters, the models thus engineered possess both FR and PST. 
We shall finally come to the determination of the generic Hamiltonian with FR and show that it is obtained from the recurrence coefficients of para-Krawtchouk polynomials perturbed in the way described in Section 5; in other words it is constructed by performing an isospectral deformation of the Jacobi matrix of the para-Krawtchouk polynomials. The two arbitrary parameters are related to the bi-lattice and to the deformation parameter. We shall summarize the outcome of the analysis and offer final remarks to conclude. XX quantum spin chain models with non-uniform nearest-neighbor interactions =========================================================================== We shall consider $XX$ spin chains with $N+1$ sites and nearest-neighbor interactions that are governed by Hamiltonians $H$ of the form $$\begin{aligned} \label{H} H=\frac{1}{2}\sum_{\ell=0}^{N-1}J_{\ell+1}(\sigma_{\ell}^{x}\sigma_{\ell+1}^{x}+\sigma_{\ell}^{y}\sigma_{\ell+1}^{y})+\frac{1}{2}\sum_{\ell=0}^{N}B_{\ell}(\sigma_{\ell}^{z}+1)\end{aligned}$$ that act on $(\mathbb{C}^2)^{\otimes N+1}$. The $J_{\ell}$ are the coupling constants between the sites $\ell-1$ and $\ell$ and the $B_{\ell}$ are the strengths of the magnetic fields at the sites $\ell$, where $\ell=0,1,\ldots, N$. The index $\ell$ on the Pauli matrices $\sigma^{x}$, $\sigma^{y}$, $\sigma^{z}$ indicates on which of the $(N+1)$ $\mathbb{C}^2$ factors these matrices act. If ${\,\rvert\uparrow\rangle}$ and ${\,\rvert\downarrow\rangle}$ denote the eigenstates of $\sigma^{z}$ with eigenvalues $+1$ and $-1$ respectively, $\sigma^{x}$ and $\sigma^{y}$ are known to act as follows in that basis: $\sigma^{x}{\,\rvert\uparrow\rangle}={\,\rvert\downarrow\rangle}$, $\sigma^{x}{\,\rvert\downarrow\rangle}={\,\rvert\uparrow\rangle}$; $\sigma^{y}{\,\rvert\uparrow\rangle}=-i{\,\rvert\downarrow\rangle}$, $\sigma^{y}{\,\rvert\downarrow\rangle}=i{\,\rvert\uparrow\rangle}$. The Hamiltonians $H$ are invariant under rotations about the $z$-axis, i.e. $$\begin{aligned} [H,\frac{1}{2}\sum_{\ell=0}^{N}(\sigma_{\ell}^{z}+1)]=0,\end{aligned}$$ and as a consequence, the eigenstates of $H$ split into subspaces labeled by the number of spins over the chain that are in the state ${\,\rvert\uparrow\rangle}$, which is a conserved quantity. In the following we shall focus on the subspace with one excitation and we will denote by $J$ the restriction of $H$ to that subspace, equivalent to $\mathbb{C}^{N+1}$. A natural orthonormal basis for the states with one excitation is given by the vectors $$\begin{aligned} {\,\rvert\ell\rangle}=(0,0,\ldots, 1,\ldots,0)^{\top},\qquad \ell=0,1,\ldots, N,\end{aligned}$$ where the only “1” in the $\ell$^th^ entry corresponds to the only state ${\,\rvert\uparrow\rangle}$ being at the site $\ell$. The action of $J$ on these basis vectors is directly obtained from and given by $$\begin{aligned} J{\,\rvert\ell\rangle}=J_{\ell+1}{\,\rvert\ell+1\rangle}+B_{\ell}{\,\rvert\ell\rangle}+J_{\ell}{\,\rvert\ell-1\rangle},\end{aligned}$$ where it is assumed that $J_0=J_{N+1}=0$. The restricted Hamiltonian $J$ thus takes the form of a $(N+1)\times (N+1)$ three-diagonal Jacobi matrix $$\begin{aligned} \label{Tridiag} J= \begin{pmatrix} B_0 & J_1 & & & \\ J_1 & B_1 & J_2 & & \\ & J_2 & B_2 & \ddots & \\ & & \ddots & \ddots & J_{N} \\ & & &J_{N}&B_{N} \end{pmatrix}.\end{aligned}$$ It is known that Jacobi matrices are diagonalized by orthogonal polynomials. Let us record, in this connection, results that will prove useful in our study. 
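To fix ideas, here is a minimal numerical sketch (ours, with purely illustrative values for the couplings and fields) of the matrix $J$ just displayed and of the orthogonal matrix that diagonalizes it; the helper name `jacobi_matrix` is of course not from the paper.

```python
import numpy as np

def jacobi_matrix(J, B):
    """One-excitation matrix: fields B_0..B_N on the diagonal, couplings
    J_1..J_N on the first off-diagonals (J[0] is an unused dummy entry)."""
    N = len(B) - 1
    M = np.diag(np.asarray(B, dtype=float))
    for ell in range(1, N + 1):            # J_ell couples sites ell-1 and ell
        M[ell, ell - 1] = M[ell - 1, ell] = J[ell]
    return M

rng = np.random.default_rng(0)
B = np.zeros(6)                             # N = 5, zero magnetic fields
J = np.r_[0.0, rng.uniform(0.5, 1.5, 5)]
Jmat = jacobi_matrix(J, B)
lam, W = np.linalg.eigh(Jmat)               # columns of W: eigenvectors of J
print(np.allclose(W @ np.diag(lam) @ W.T, Jmat))   # True
```

Up to transposition conventions and the weight normalization, the entries of this orthogonal matrix realize the transition matrix introduced next.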
Let ${\,\rvert\lambda\rangle}$ be the eigenvectors of $J$: $$\begin{aligned} \label{Eigen-General} J{\,\rvert\lambda\rangle}=\lambda{\,\rvert\lambda\rangle}.\end{aligned}$$ Since $J$ is Hermitian, the eigenvalues $\lambda$ are real. We shall assume that they are non-degenerate, i.e. that they take $N+1$ different values $\lambda_{s}$ for $s=0,1,\ldots,N$. This is the case when the entries $J_{\ell}$ are positive. We will take the eigenvalues to be ordered $$\begin{aligned} \lambda_0<\lambda_1<\cdots <\lambda_N.\end{aligned}$$ Consider the expansion of ${\,\rvert\lambda_{s}\rangle}$ in terms of the basis vectors ${\,\rvert\ell\rangle}$, $$\begin{aligned} \label{Expansion} {\,\rvert\lambda_{s}\rangle}=\sum_{\ell=0}^{N}W_{s\ell}\,{\,\rvert\ell\rangle}\end{aligned}$$ and write the elements $W_{s\ell}$ of the transition matrix in the form $$\begin{aligned} W_{s\ell}=W_{s0}\,\chi_{\ell}(\lambda_{s})\equiv \sqrt{w_{s}}\chi_{\ell}(\lambda_{s}).\end{aligned}$$ It follows from that $\chi_{\ell}(\lambda_{s})$ are polynomials satisfying the three-term recurrence relation $$\begin{aligned} J_{\ell+1}\,\chi_{\ell+1}(\lambda_{s})+B_{\ell}\,\chi_{\ell}(\lambda_{s})+J_{\ell}\,\chi_{\ell-1}(\lambda_{s})=\lambda_{s}\, \chi_{\ell}(\lambda_{s}),\end{aligned}$$ with initial condition $$\begin{aligned} \chi_{0}=1,\qquad \chi_{-1}=0.\end{aligned}$$ Since both the eigenbasis $\{{\,\rvert\lambda_{s}\rangle}\}_{s=0}^{N}$ and the basis $\{{\,\rvert\ell\rangle}\}_{\ell=0}^{N}$ are orthonormal and all the coefficients are real, the matrix $(W)_{s\ell}$ is orthogonal and the condition ${\langle\,\ell\,\rvert\,\ell'\,\rangle}=\delta_{\ell\ell'}$ implies that the polynomials $\chi_{\ell}(\lambda)$ are orthogonal on the finite set of spectral points $\lambda_{s}$: $$\begin{aligned} \label{Ortho} \sum_{s=0}^{N}w_{s}\chi_{m}(\lambda_{s})\chi_{n}(\lambda_{s})=\delta_{mn}.\end{aligned}$$ Note that the weights $w_{s}$ are normalized, that is $$\begin{aligned} \label{Norm-Cond} \sum_{s=0}^{N}w_{s}=1,\end{aligned}$$ since $\chi_{0}(\lambda_{s})=1$. Owing to the orthogonality of $W$, in addition to , we also have $$\begin{aligned} \label{Expansion-2} {\,\rvert\ell\rangle}=\sum_{s=0}^{N}W_{s\ell}\,{\,\rvert\lambda_{s}\rangle}=\sum_{s=0}^{N}\sqrt{w_{s}}\,\chi_{\ell}(\lambda_{s}){\,\rvert\lambda_{s}\rangle}.\end{aligned}$$ It is sometimes useful to use the monic polynomials $$\begin{aligned} \label{14} P_{\ell}(\lambda)=\sqrt{h_{\ell}}\,\chi_{\ell}(\lambda),\qquad \sqrt{h_{\ell}}=J_1J_2\cdots J_{\ell},\end{aligned}$$ whose leading coefficient is 1: $P_{\ell}(\lambda)=\lambda^{\ell}+\cdots $. The spectrum of $J$ is encoded in the characteristic polynomial $$\begin{aligned} P_{N+1}(\lambda)=(\lambda-\lambda_0)(\lambda-\lambda_1)\cdots (\lambda-\lambda_{N}).\end{aligned}$$ The following formula from the standard theory of orthogonal polynomials gives the weights $w_{s}$ in terms of $P_{N}(\lambda)$ and $P_{N+1}(\lambda)$: $$\begin{aligned} \label{W-1} w_{s}=\frac{h_{N}}{P_{N}(\lambda_{s})P_{N+1}'(\lambda_{s})},\qquad s=0,1,\ldots, N,\end{aligned}$$ with $P_{N+1}'(\lambda)$ denoting the derivative of $P_{N+1}(\lambda)$ with respect to $\lambda$ [@Chihara-2011]. A review of perfect state transfer in a $XX$ spin chain ======================================================= Perfect state transfer is achieved along a chain if there is a time $T$ for which $$\begin{aligned} \label{PST-Cond} e^{-iTJ}{\,\rvert0\rangle}=e^{i\phi}{\,\rvertN\rangle},\end{aligned}$$ where $\phi$ is a real number. 
In this case the initial state with one spin up at the zeroth site will be found with unit probability after time $T$ in the state with one spin up at the site $N$. One then says that the state ${\,\rvert\uparrow\rangle}$ has been perfectly transferred from one end of the chain to the other. The requirements for PST have been determined from studying the implications of . Since this will be the backdrop for our discussion of fractional revival, it is pertinent to summarize the PST analysis here. Using the expansion on both sides of gives $$\begin{aligned} \label{Condition-Chi} e^{-i\phi}e^{-iT\lambda_{s}}=\chi_{N}(\lambda_{s}),\qquad s=0,1,\ldots, N.\end{aligned}$$ Since $\chi_{N}(\lambda_{s})$ is real, this implies that $$\begin{aligned} \label{Condition-Chi-2} \chi_{N}(\lambda_{s})=\pm 1.\end{aligned}$$ Now because the zeros of $\chi_{N}(\lambda)$ must lie between those of $\chi_{N+1}(\lambda)$ which are located at the $\lambda_{s}$, we must conclude that $\chi_{N}(\lambda_{s})$ will alternate between $+1$ and $-1$ [@Chihara-2011]. Given that the eigenvalues are ordered, the condition that the weights, as given by , must be positive leads finally to $$\begin{aligned} \label{Eigen-Chi} \chi_{N}(\lambda_{s})=(-1)^{N+s},\qquad s=0,1,\ldots, N.\end{aligned}$$ In view of , we see that imposes the following requirement on the spectrum of $J$: $$\begin{aligned} \label{Eigen-Lambda} e^{-iT\lambda_{s}}=e^{i\phi}(-1)^{N+s},\qquad s=0,1,\ldots,N,\end{aligned}$$ which is tantamount to the condition that the successive eigenvalues are such that $$\begin{aligned} \label{Eigen-Cond} \lambda_{s+1}-\lambda_{s}=\frac{\pi}{T} M_{s},\end{aligned}$$ with $M_{s}$ an arbitrary positive odd integer. We have thus with and , the necessary and sufficient conditions for PST to occur. It is seen from that the necessary condition is equivalent to requiring that the polynomials associated to the Jacobi matrix $J$ be orthogonal with respect to the weights $$\begin{aligned} \label{Weight-Explicit} w_{s}=\frac{\sqrt{h_{N}}\, (-1)^{N+s}}{P_{N+1}'(\lambda_s)},\quad s=0,1,\ldots, N.\end{aligned}$$ Remarkably, the weights given by have the general property [@Tsujimoto_2015] $$\begin{aligned} \label{Gen-Prop} \sum_{s}w_{2s}=\sum_{s}w_{2s+1}=\frac{1}{2}.\end{aligned}$$ In turn, or can be shown to hold if and only if the matrix $J$ is mirror symmetric with respect to the anti-diagonal , that is if and only if $$\begin{aligned} \label{Per-Sym} RJR=J,\end{aligned}$$ with $$\begin{aligned} R= \begin{pmatrix} &&&1\\ &&1&\\ &\udots&& \\ 1 &&& \end{pmatrix}.\end{aligned}$$ In terms of the couplings and magnetic field strengths, one sees from that this symmetry amounts to the relations $$\begin{aligned} \label{Mir-Sym} J_{n}=J_{N+1-n},\qquad B_{n}=B_{N-n}.\end{aligned}$$ Matrices with that property are referred to as persymmetric matrices. A simple and direct proof that implies can be found in [@2010_Kay_IntJQtmInf_8_641]. Let us now observe that this required symmetry will lead not only to PST but to a complete mirror inversion of the register at time $T$ if the spectral condition is satisfied. First note that $R$ is an involution, that is $R^2=\mathbb{1}$. Because $J$ is reflection-invariant, its eigenstates can be taken to also be eigenstates of $R$ and since we have assumed that the spectrum of $J$ is not degenerate, each eigenstate must hence be of definite parity $\epsilon_{s}$, equal to either $+1$ or $-1$. 
We hence have $$\begin{aligned} \label{abv-5} R{\,\rvert\lambda_{s}\rangle}=\epsilon_{s}{\,\rvert\lambda_{s}\rangle}.\end{aligned}$$ From , we have $$\begin{aligned} R{\,\rvert\lambda_{s}\rangle}=\sum_{\ell=0}^{N}\sqrt{w_{s}}\,\chi_{\ell}(\lambda_{s}){\,\rvertN-\ell\rangle} =\sum_{\ell=0}^{N}\sqrt{w_{s}}\,\chi_{N-\ell}(\lambda_{s}){\,\rvert\ell\rangle},\end{aligned}$$ but given , using again we find that $$\begin{aligned} \label{Chi-Prop} \chi_{N-\ell}(\lambda_{s})=(-1)^{N+s}\chi_{\ell}(\lambda_{s}),\end{aligned}$$ where we have determined $\epsilon_{s}$ by setting $\ell=0$ and using . As a result, we see that $$\begin{gathered} \label{Calculation} {{\,\langle k\rvert}\,e^{-iTJ}\,{\,\rvert\ell\rangle}}=\sum_{s=0}^{N} e^{-iT\lambda_{s}}\,w_{s}\,\chi_{\ell}(\lambda_{s})\chi_{k}(\lambda_{s}) =e^{i\phi}\sum_{s=0}^{N}(-1)^{N+s}w_{s}\,\chi_{\ell}(\lambda_{s})\chi_k(\lambda_{s}) \\ =e^{i\phi}\sum_{s=0}^{N}w_{s}\,\chi_{N-\ell}(\lambda_{s})\chi_{k}(\lambda_{s})=e^{i\phi}\delta_{k,N-\ell},\end{gathered}$$ with the successive help of , and . In other words, there is probability 1 of finding at the site $N-n$, after time $T$, the spin up initially at site $n$. In matrix form, we have shown that or and have for consequence that $$\begin{aligned} e^{-iTJ}=e^{i\phi}R.\end{aligned}$$ Now the main issue is to determine the Hamiltonians $H$ of type that have the PST property. This is an inverse spectral problem since we start from conditions on the eigenvalues. As it turns out, this specific problem has been well studied [@2004_Gladwell]. The outcome is that once a spectrum satisfying or is given as input, the Hamiltonian with the desired properties is uniquely determined. This stems from the fact that the weights $w_{s}$ entailing mirror-symmetry are uniquely prescribed by and that the corresponding orthogonal polynomials can be unambiguously constructed. Their recurrence coefficients then provide the couplings $J_{\ell}$ and the local magnetic fields $B_{\ell}$ of a Hamiltonian $H$ with PST. One algorithm for obtaining the orthogonal polynomials is described in . By considering natural types of spectrum, analytic models of spin chains with PST have thus been obtained and associated to various families of orthogonal polynomials : Krawtchouk and Dual Hahn , Dual $-1$ Hahn , $q$-Krawtchouk and $q$-Racah . Of particular interest for what follows are models connected to the so-called para-Krawtchouk polynomials . The paradigm example of analytic models, investigated in , is obtained by considering the linear spectrum $$\begin{aligned} \lambda_{s}=\frac{\pi}{T}\Big(s-N/2\Big),\qquad s=0,1,\ldots, N,\end{aligned}$$ for which yields the binomial distribution $$\begin{aligned} w_{s}=\frac{N!}{s!(N-s)!}\left(\frac{1}{2}\right)^{N}.\end{aligned}$$ The associated polynomials are known to be the symmetric Krawtchouk polynomials with recurrence coefficients $$\begin{aligned} \label{Recu-Krawtchouk} B_{\ell}=0,\quad J_{\ell}^2=\frac{\pi^2}{T^2}\frac{\ell(N+1-\ell)}{4}.\end{aligned}$$ The mirror symmetry is manifest. This is one instance where the one-excitation spectrum dynamics is exactly solvable. 
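Before quoting the closed-form transition amplitude, here is a short numerical check (ours; $N=7$ and $T=1$ are illustrative values) that the Krawtchouk couplings just given do produce $e^{-iTJ}=e^{i\phi}R$:

```python
import numpy as np
from scipy.linalg import expm

N, T = 7, 1.0
ell = np.arange(1, N + 1)
Jl = (np.pi / T) * np.sqrt(ell * (N + 1 - ell)) / 2   # Krawtchouk chain couplings
Jmat = np.diag(Jl, 1) + np.diag(Jl, -1)               # B_ell = 0

UT = expm(-1j * T * Jmat)
R = np.fliplr(np.eye(N + 1))                          # mirror (persymmetry) matrix
print(np.allclose(np.abs(UT), R))                     # True: e^{-iTJ} = e^{i phi} R
```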
The general transition amplitude from state ${\,\rvert\ell\rangle}$ to state ${\,\rvertk\rangle}$ in time $t$ has been calculated in to be $$\begin{gathered} \label{Kraw-Amp} {{\,\langle k\rvert}\,e^{-itJ}\,{\,\rvert\ell\rangle}}=\left(\frac{1}{2}\right)^{N}\sqrt{\binom{N}{k}\binom{N}{\ell}}\;(1-e^{-i\frac{\pi}{T}t})^{k+\ell} \\ \times \,(1+e^{-i\frac{\pi}{T}t})^{N-k-\ell}\; {}_2F_{1}\left(\genfrac{}{}{0pt}{}{-k,-\ell}{-N},\;\frac{-4e^{-i\frac{\pi}{T}t}}{(1-e^{-i\frac{\pi}{T}t})^2}\right),\end{gathered}$$ where ${}_2F_{1}$ is the classical hypergeometric series . It must be stressed finally that a myriad of analytic models can be generated from those associated to known orthogonal polynomials by a procedure known as “spectral surgery” . Indeed, it has been shown that the first or last eigenvalues ($\lambda_0$ or $\lambda_N$) or a pair of neighboring spectral points ($\lambda_i,\lambda_{i+1}$) can be removed without affecting the PST properties . If $P_{\ell}(\lambda)$ are the monic polynomials associated to the original chain with Hamiltonian $J$, the removal of one level, say $\lambda_i$, will give a new Jacobi matrix $J'$ with entries given by $$\begin{aligned} \label{C-1} (J_{\ell}')^2=\left(\frac{A_{\ell}}{A_{\ell-1}}\right)\,J_{\ell}^2 ,\quad B_{\ell}'=B_{\ell+1}+A_{\ell+1}-A_{\ell},\end{aligned}$$ where $$\begin{aligned} A_{\ell}=\frac{P_{\ell+1}(\lambda_i)}{P_{\ell}(\lambda_i)}.\end{aligned}$$ The monic polynomials corresponding to $J'$ are $$\begin{aligned} \label{C-2} P_{\ell}'(x)=\frac{P_{\ell+1}(x)-A_{\ell}P_{\ell}(x)}{x-\lambda_i}.\end{aligned}$$ Formulas and can be applied iteratively to obtain new analytic models from known ones. The removal of pairs of neighboring spectral points is required in the bulk to ensure the positivity of the weights. Spectral conditions for fractional revival ========================================== We are now ready to examine the requirements for fractional revival in a spin chain of type $XX$. We shall limit ourselves to revivals occurring at only two sites chosen to be $\ell=0$ and $\ell=N$, the two ends of the chain. We shall here analyze systematically the conditions that will ensure that for a time $T$ $$\begin{aligned} \label{FR-C1} e^{-iTJ}{\,\rvert0\rangle}=\alpha {\,\rvert0\rangle}+\beta {\,\rvertN\rangle},\end{aligned}$$ with $|\alpha|^2+|\beta|^2=1$. When this is so, the spin up at site $\ell=0$ at $t=0$ is revived at the sites $\ell=0$ and $\ell=N$ after time $T$ with amplitudes $\alpha$ and $\beta$, respectively. Let us write $$\begin{aligned} \alpha=e^{i\phi}\sin 2\theta,\qquad \beta=e^{i(\phi+\psi)}\cos 2\theta,\end{aligned}$$ with $e^{i\phi}$ and $e^{i\psi}$ the global and relative phase factors of the two complex numbers $\alpha$ and $\beta$. Taking $-\frac{\pi}{4}\leq \theta \leq \frac{\pi}{4}$ and $0\leq \psi <\pi$ with $\phi\in \mathbb{R}$ covers all possible amplitudes. The use of leads to the relation $$\begin{aligned} \label{Cond-Eigenvalues-2} e^{-iT\lambda_{s}}=e^{i\phi}\left[\sin 2\theta +e^{i\psi}\cos 2\theta\,\chi_{N}(\lambda_s)\right],\qquad s=0,1,\ldots, N.\end{aligned}$$ This implies that $\sin 2\theta +e^{i\psi}\cos 2\theta\,\chi_{N}(\lambda_s)$ has modulus 1, that is $$\begin{aligned} \label{Condition-Chi-3} \chi_{N}^2(\lambda_s)+2\tan 2\theta \cos\psi\,\chi_{N}(\lambda_s)-1=0.\end{aligned}$$ Obviously when $\theta=0$, we recover condition . 
Equation indicates that $\chi_{N}(\lambda_{s})$ will take one of two values, $\gamma$ and $-\frac{1}{\gamma}$, with $\gamma$ satisfying $$\begin{aligned} \label{Cond-Gamma} \gamma-\frac{1}{\gamma}=-2 \tan 2\theta \cos \psi.\end{aligned}$$ By a continuity argument as $\theta\rightarrow 0$, upon comparing with and assuming that $\gamma$ is the positive root (there must be one because of the interlacing property of the zeros of orthogonal polynomials), one concludes that, for $N$ odd \[Cond-Gamma-2\] $$\begin{aligned} \chi_{N}(\lambda_{2s})=-\frac{1}{\gamma},\qquad \chi_{N}(\lambda_{2s+1})=\gamma,\end{aligned}$$ for $N$ even $$\begin{aligned} \chi_{N}(\lambda_{2s})=\gamma,\qquad \chi_{N}(\lambda_{2s+1})=-\frac{1}{\gamma},\end{aligned}$$ for all $s\in \{0,\ldots, N\}$. Assume for now that the relative phase factor $e^{i \psi}$ is generic. Once and are satisfied, it must still be ensured that is obeyed. When $N$ is odd, this amounts to the conditions \[abcd\] $$\begin{aligned} \label{a} &\cos(T\lambda_{2s}+\phi)=\sin 2\theta-\frac{1}{\gamma} \cos 2\theta \cos \psi, \quad \sin(T\lambda_{2s}+\phi)=\frac{1}{\gamma}\cos 2\theta \sin \psi, \\ \label{c} &\cos(T\lambda_{2s+1}+\phi)=\sin 2\theta+\gamma \cos 2\theta \cos \psi, \quad \sin(T\lambda_{2s+1}+\phi)=-\gamma \cos 2\theta \sin \psi.\end{aligned}$$ Define $\xi, \eta\in [0,2\pi]$ by setting the right-hand sides of equations , to be respectively $\cos \xi$, $\sin \xi$, $\cos \eta$ and $\sin \eta$. This implies $$\begin{aligned} \label{sets} \{T\lambda_{2s}+\phi\}\subseteq \{\xi, \xi\pm 2\pi, \xi \pm 4 \pi, \ldots \}, \quad \{T \lambda_{2s+1}+\phi\}\subseteq\{\eta,\eta\pm 2\pi,\eta\pm 4\pi,\ldots \}.\end{aligned}$$ When $N$ is even, it is seen that the roles of $\xi$ and $\eta$ are interchanged. One thus observes that the spectrum of $J$ must be the following bi-lattice: $$\begin{aligned} \label{Bi-Lattice} \frac{T\lambda_{s}+\phi}{\pi}=\frac{\mu}{\pi}+s+\frac{1}{2}(\delta-1)(1-(-1)^{s}),\end{aligned}$$ for $s=0,1,\ldots, N$ with $$\begin{aligned} \label{mu} \mu= \begin{cases} \xi & \text{for $N$ odd} \\ \eta & \text{for $N$ even} \end{cases},\end{aligned}$$ and $$\begin{aligned} \delta= \begin{cases} 2+\frac{1}{\pi}(\eta-\xi) & \text{for $N$ odd} \\ \frac{1}{\pi}(\xi-\eta) & \text{for $N$ even} \end{cases}.\end{aligned}$$ We have used the latitude in picking the initial point in the sets and those obtained by the exchange $\xi\leftrightarrow \eta$ for $N$ even, to ensure that $\delta$ is positive. Let us point out that the spectra can in fact also be subsets of the bi-lattices after appropriate surgery. There are two special cases for the phase $\psi$ that are of particular interest and that will be the object of the next sections. Each preserves one of the two necessary and sufficient conditions and for PST. These two distinguished cases are $\psi=0$ and $\psi=\pi/2$. Let us make initial observations about what happens for those values. i. $\psi=\frac{\pi}{2}$ It is readily seen from and that $\gamma=1$ and that $\chi_{N}(\lambda_{s})=(-1)^{N+s}$. Hence mirror-symmetry is maintained in this case. This is the case considered in . Equations give that $$\begin{aligned} \xi=-\eta=\frac{\pi}{2}-2\theta.\end{aligned}$$ The spectral points thus form bi-lattices of the form with $$\begin{aligned} \label{delta} \delta=1\pm \frac{4\theta}{\pi}.\end{aligned}$$ The upper/lower sign in the above equation corresponds to $N$ being odd/even. 
Note that $\delta \in [0,2]$ for $-\frac{\pi}{4}\leq \theta \leq \frac{\pi}{4}$; when $\theta=0$, it is the PST situation, $\delta=1$ and becomes the linear spectrum of the Krawtchouk polynomials with $\phi=\frac{\pi}{2}(N\pm 1)$. As shall be explained in Section 6, the model corresponding to the spectral conditions for $\psi=\frac{\pi}{2}$ is analytic and can exhibit both FR and PST. ii. $\psi=0$ In this case gives for $\gamma$ $$\begin{aligned} \label{abv-2} \gamma=\tan \left(\frac{\pi}{4}-\theta\right),\end{aligned}$$ and, in view of , mirror symmetry is broken. However using and it is immediate to check that becomes $$\begin{aligned} e^{-iT\lambda_{s}}=e^{i\phi}(-1)^{N+s},\end{aligned}$$ which is the spectral condition for PST. So when $\psi=0$, PST is absent for lack of mirror symmetry but the spectrum remains unchanged. This isospectral situation is discussed next. Isospectral deformations of chains with perfect state transfer ============================================================== We shall now describe the spin chains with fractional revival that can be obtained by isospectral deformations of chains with PST [@GVZ-2015]. This picture arises when the relative phase $\psi$ is nil. It should be stressed from the outset that the procedure will generate analytic models with FR when it is applied to spin chains for which PST can be exactly demonstrated. In this section, for the sake of clarity, we shall denote by $\widetilde{J}$ the one-excitation Hamiltonians of spin chains with FR and by $J$ those of spin chains with PST. When $\psi=0$, the FR condition reads $$\begin{aligned} \label{FR-2} e^{-iT\widetilde{J}}{\,\rvert0\rangle}=e^{i\phi}\left[\sin 2\theta\, {\,\rvert0\rangle}+\cos 2\theta\, {\,\rvertN\rangle}\right]\end{aligned}$$ and becomes $$\begin{aligned} \label{ebv-3} e^{-i\phi}e^{-iT\lambda_{s}}=\sin 2\theta+\cos 2\theta\,\chi_{N}(\lambda_{s}).\end{aligned}$$ We observed that because the right-hand side of is real, the condition on the spectrum of $\widetilde{J}$ is the same as for PST, that is . For one such spectrum, still assumed to be non-degenerate, it must be possible to relate by a conjugation the Jacobi matrix with PST to the one with FR. Recall that the PST matrix $J$ can be uniquely constructed from the data. There is thus an orthogonal matrix $U$ such that $$\begin{aligned} \widetilde{J}=U J U^{\top}.\end{aligned}$$ It then follows that $$\begin{aligned} \label{Q-I} e^{-iT\widetilde{J}}=U e^{-iTJ}U^{\top}=e^{i\phi} U R U^{\top}\equiv e^{i\phi}Q.\end{aligned}$$ Note that the action of $Q$ on ${\,\rvert0\rangle}$ is prescribed by . This similarity transformation is easily found and can be presented as follows. For convenience write $U$ in the form $$\begin{aligned} U=VR.\end{aligned}$$ Let $V$ be the $(N+1)\times (N+1)$ matrix defined as follows. For $N$ odd, take $$\begin{aligned} \label{V-Odd} V= \begin{pmatrix} \sin \theta &&&&&\cos \theta \\ &\ddots &&&\udots & \\ &&\sin \theta& \cos\theta&&\\ &&\cos \theta& -\sin\theta &&\\ &\udots &&&\ddots&\\ \cos \theta &&&&&-\sin \theta \end{pmatrix},\end{aligned}$$ and for $N$ even, let $$\begin{aligned} \label{V-Even} V= \begin{pmatrix} \sin \theta &&&&&&\cos\theta\\ & \ddots &&&&\udots&\\ &&\sin\theta&0&\cos\theta &&\\ && 0 & 1 &0 &&\\ &&\cos \theta & 0 & -\sin\theta &&\\ &\udots &&&&\ddots&\\ \cos \theta &&&&&&-\sin\theta \end{pmatrix}.\end{aligned}$$ It is immediate to check that $V=V^{\top}$ and that $V^2=\mathbb{1}$. It then follows that $UU^{\top}=\mathbb{1}$. Note also that $\det V=\det R$. Obviously $V(0)=R$. 
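Before turning to the matrix $Q$, a brief numerical sketch (ours; $N=7$, $T=1$ and $\theta=\pi/8$ are illustrative values) shows the deformation at work: conjugating the Krawtchouk chain of Section 3 by the matrix $V$ above produces fractional revival with amplitudes $\sin 2\theta$ and $\cos 2\theta$ at the two ends of the chain.

```python
import numpy as np
from scipy.linalg import expm

N, T, theta = 7, 1.0, np.pi / 8                  # theta = pi/8: balanced revival
ell = np.arange(1, N + 1)
Jl = (np.pi / T) * np.sqrt(ell * (N + 1 - ell)) / 2
Jmat = np.diag(Jl, 1) + np.diag(Jl, -1)          # PST (Krawtchouk) chain

V = np.zeros((N + 1, N + 1))                     # the matrix V for N odd
for k in range((N + 1) // 2):
    V[k, k], V[N - k, N - k] = np.sin(theta), -np.sin(theta)
    V[k, N - k] = V[N - k, k] = np.cos(theta)

Jt = V @ Jmat @ V                                # deformed chain, same spectrum
amp = expm(-1j * T * Jt)[:, 0]                   # e^{-iT J~} |0>
print(np.abs(amp[0]), np.abs(amp[N]))            # ~ sin(2 theta), cos(2 theta)
```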
The matrix $Q$ introduced in is thus given by $$\begin{aligned} VRV=Q,\end{aligned}$$ and is obtained from $V$ by substituting $\theta$ by $2\theta$ in for $N$ odd and in for $N$ even. Obviously $Q^2=\mathbb{1}$. Recall that the PST matrix $J$ is persymmetric: $RJR=J$. It is then easy to see that for $$\begin{aligned} \label{Conju} \widetilde{J}=U J U^{\top}=VJV,\end{aligned}$$ condition is satisfied. In fact, not only is this realized but in view of and the expression for $Q$, we shall have fractional revival between the mirror-symmetric sites $\ell$ and $N-\ell$ since we have $$e^{-iT\widetilde{J}}{\,\rvert\ell\rangle}=e^{i\phi}\left\{ \begin{matrix} & \text{$N$ odd} & \text{$N$ even} \\[.2cm] \sin 2\theta\,{\,\rvert\ell\rangle}+\cos 2\theta\,{\,\rvertN-\ell\rangle} & \ell\leq \frac{N-1}{2} & \ell<\frac{N}{2} \\[.1cm] -\sin 2\theta\,{\,\rvert\ell\rangle}+\cos 2\theta\,{\,\rvertN-\ell\rangle} & \ell\geq \frac{N+1}{2} & \ell>\frac{N}{2} \end{matrix}\right.,$$ and for $N$ even $$\begin{aligned} e^{-i T\widetilde{J}}\,{\,\rvert\textstyle{\frac{N}{2}}\rangle}=e^{i\phi}{\,\rvert\textstyle{\frac{N}{2}}\rangle}.\end{aligned}$$ To sum up, we have seen that a chain with FR can be obtained from any chain with PST by conjugating the Jacobi matrix of the latter according to . The resulting operator $\widetilde{J}$ is not mirror-symmetric but is seen to be invariant under the one-parameter involution $Q$, that is $$\begin{aligned} Q\widetilde{J}Q=\widetilde{J}.\end{aligned}$$ It is remarkable that the only modifications or perturbations in the couplings and magnetic fields that arise when passing from $J$ to $\widetilde{J}$ occur in the middle of the chain. Indeed, upon performing the conjugation with or , recalling that $J$ is persymmetric, one finds that the only entries of $\widetilde{J}$ that differ from those of $J$ are \[Perturbations\] $$\begin{aligned} \begin{aligned} \widetilde{J}_{\frac{N+1}{2}}&=J_{\frac{N+1}{2}}\,\cos 2\theta, \\ \widetilde{B}_{\frac{N\mp 1}{2}}&= B_{\frac{N-1}{2}}\pm J_{\frac{N+1}{2}}\sin 2\theta, \end{aligned}\end{aligned}$$ for $N$ odd and $$\begin{aligned} \begin{aligned} \widetilde{J}_{\frac{N}{2}}&= J_{\frac{N}{2}}(\cos \theta+\sin \theta), \\ \widetilde{J}_{\frac{N}{2}+1}&=J_{\frac{N}{2}}(\cos \theta-\sin \theta), \end{aligned}\end{aligned}$$ for $N$ even. When $N$ is even, only the couplings between the three middle neighbors are altered. When $N$ is odd, it is only the coupling between the two middle neighbors that is affected together with the magnetic field strengths at those two middle sites. Note that if all the $B_{\ell}$ of $J$ are initially zero, $\widetilde{J}$ will only have two Zeeman terms of equal magnitude and opposite sign at $\ell=\frac{N-1}{2}$ and $\ell=\frac{N+1}{2}$. The fact that in these models so few couplings or field strengths of the PST chain need to be adjusted to obtain the chain with FR could prove to be a practical advantage. One can imagine that the calibration would first be done by engineering the couplings so as to reproduce the PST mirror inversion and that thereafter the transformation to the FR mode would not be technically too prohibitive. It is also interesting to remark that it is possible to have no (zero) coupling between two equal parts of the chain and hence two separate chains in fact, and yet to keep some transport. 
Indeed, when $\theta=\pm \frac{\pi}{4}$ for $N$ odd we have $\widetilde{J}_{\frac{N+1}{2}}=0$, $\widetilde{B}_{\frac{N\mp 1}{2}}= B_{\frac{N-1}{2}}\pm J_{\frac{N+1}{2}}$ and when $N$ is even, $\widetilde{J}_{\frac{N}{2}}=0$, $\widetilde{J}_{\frac{N}{2}+1}=\sqrt{2} J_{\frac{N}{2}}$. It is clear that analytic spin chain models with fractional revival can be obtained from the analytic models with PST that are known by performing the isospectral deformations that we have described in this section. Take again for example the system associated to the Krawtchouk polynomials. Starting with the couplings $J_{\ell}$ and magnetic field $B_{\ell}$ given in and modifying them according to will yield a rather simple Hamiltonian $\widetilde{H}$ (with one-excitation sector $\widetilde{J}$) with fractional revival. The exact solvability properties of the perturbed model will be inherited from those of the Krawtchouk chains. For instance, the general transition amplitude between the one-excitation states ${\,\rvert\ell\rangle}$ and ${\,\rvertk\rangle}$ during time $t$ under the evolution governed by $\widetilde{J}$, that is ${{\,\langle k\rvert}\,e^{-i t \widetilde{J}}\,{\,\rvert\ell\rangle}}$ can be obtained directly from the corresponding quantity associated to $J$ and given in . Indeed, $$\begin{aligned} {{\,\langle k\rvert}\,e^{-it\widetilde{J}}\,{\,\rvert\ell\rangle}}&={{\,\langle k\rvert}\,V e^{-itJ} V\,{\,\rvert\ell\rangle}} =\sum_{m n}V_{mk}V_{n\ell}\,{{\,\langle m\rvert}\,e^{-itJ}\,{\,\rvertn\rangle}},\end{aligned}$$ which will yield a sum of (at most) four terms owing to the special form of $V$. Let us mention that some of these couplings have appeared in studies of entanglement generation: the case $\theta=\pi/8$ in and the case $N$ even in [@2010_Kay_IntJQtmInf_8_641]. To complete the discussion, we shall conclude this section by providing information on the relation that the orthogonal polynomials associated to the Jacobi matrix $\widetilde{J}=V J V$ with fractional revival bear to those attached to the matrix $J$ with PST. This will offer consistency checks and will be of relevance when considering the generic situation when the relative phase $\psi$ of is arbitrary. Let ${\,\rvert\widetilde{\lambda}_{s}\rangle}$ be the eigenstates of $\widetilde{J}$ $$\begin{aligned} \widetilde{J}\,{\,\rvert\widetilde{\lambda}_{s}\rangle}=\lambda_{s} {\,\rvert\widetilde{\lambda}_{s}\rangle}.\end{aligned}$$ Recall that $\widetilde{J}$ and $J$ have the same spectrum. 
We have an expansion analogous to in terms of a different set of orthogonal polynomials $\widetilde{\chi}_{\ell}(\lambda)$: $$\begin{aligned} \label{Expansion-3} {\,\rvert\widetilde{\lambda}_{s}\rangle}=\sum_{\ell=0}^{N}\sqrt{\widetilde{w}_{s}}\,\widetilde{\chi}_{\ell}(\lambda_{s}){\,\rvert\ell\rangle},\end{aligned}$$ where the weights $\widetilde{w}_{s}$ are given by the formula that now reads $$\begin{aligned} \widetilde{w}_{s}=\frac{\widetilde{h}_{N}}{\widetilde{P}_N(\lambda_{s}) \widetilde{P}_{N+1}'(\lambda_{s})},\end{aligned}$$ with the monic polynomials $\widetilde{P}_{\ell}$ defined by $$\begin{aligned} \widetilde{P}_{\ell}=\sqrt{\widetilde{h}_{\ell}}\,\widetilde{\chi}_{\ell},\qquad \sqrt{\widetilde{h}_{\ell}}=\widetilde{J}_1\widetilde{J}_2\cdots \widetilde{J}_{\ell}.\end{aligned}$$ From we see that $$\begin{aligned} \sqrt{\frac{\widetilde{h}_{N}}{h_{N}}}=\cos 2\theta.\end{aligned}$$ Now, in view of and , we find that the weights $\widetilde{w}_{s}$ and $w_{s}$ are related as follows: $$\begin{aligned} \widetilde{w}_{2s}= \begin{cases} \gamma \cos 2\theta\, w_{2s} & \text{$N$ odd} \\ \frac{1}{\gamma} \cos 2\theta\, w_{2s} & \text{$N$ even} \end{cases}, \quad \widetilde{w}_{2s+1}= \begin{cases} \frac{1}{\gamma}\cos 2\theta\, w_{2s+1} & \text{$N$ odd} \\ \gamma \cos 2\theta\, w_{2s+1} & \text{$N$ even} \end{cases}.\end{aligned}$$ Since $\gamma+\gamma^{-1}=2 \sec 2\theta $ as is readily observed from , one checks in particular that $\sum_{s}\widetilde{w}_{s}=1$ using . With ${\,\rvert\widetilde{\lambda}_{s}\rangle}=V{\,\rvert\lambda_{s}\rangle}$, using , one also has $$\begin{aligned} \label{Expansion-4} {\,\rvert\widetilde{\lambda}_{s}\rangle}=V{\,\rvert\lambda_{s}\rangle}=\sum_{\ell,k=0}^{N}\sqrt{w_{s}}\,\chi_{k}(\lambda_{s}) V_{\ell k}{\,\rvert\ell\rangle},\end{aligned}$$ and upon comparing with , one finds the relation $$\begin{aligned} \sqrt{\widetilde{w}_{s}}\,\widetilde{\chi}_{\ell}(\lambda_{s})=\sum_{k=0}^{N}\sqrt{w_{s}}\, V_{\ell k}\,\chi_{k}(\lambda_{s}).\end{aligned}$$ Given the form of $V$ and the property of the polynomials, one can write $$\begin{aligned} \label{Reee-1} \sqrt{\widetilde{w}_{s}}\,\widetilde{\chi}_{\ell}(\lambda_{s})=\sqrt{w_{s}}\,(V_{\ell, \ell}+(-1)^{N+s}V_{\ell, N-\ell})\,\chi_{\ell}(\lambda_{s}),\end{aligned}$$ together with $$\begin{aligned} \label{Reee-2} \sqrt{\widetilde{w}_{s}}\,\widetilde{\chi}_{\frac{N}{2}}(\lambda_{s})=\sqrt{w_{s}}\,\chi_{\frac{N}{2}}(\lambda_{s}),\end{aligned}$$ for $N$ even. In this last instance, note that gives $$\begin{aligned} \chi_{\frac{N}{2}}(\lambda_{s})=(-1)^{s}\chi_{\frac{N}{2}}(\lambda_{s}),\end{aligned}$$ meaning that $\chi_{\frac{N}{2}}(\lambda)$ is zero on all the odd eigenvalues: $\chi_{\frac{N}{2}}(\lambda_{2s+1})=0$. 
The simple trigonometric identities $$\begin{aligned} \label{75} \gamma \pm \gamma^{-1}=\tan \left(\frac{\pi}{4}-\theta\right)\pm \cot \left(\frac{\pi}{4}-\theta\right)= \begin{cases} 2 \sec 2\theta & \\ -2 \tan 2\theta & \end{cases}\end{aligned}$$ allow to show that $$\begin{aligned} \label{76} \gamma \cos 2\theta= (\sin \theta -\cos \theta)^2 \qquad \gamma^{-1}\cos 2\theta=(\sin \theta +\cos \theta)^2\end{aligned}$$ Using these relations and examining each case, one checks that and imply \[Evaluations\] $$\begin{aligned} \begin{aligned} \widetilde{\chi}_{\ell}(\lambda_{s})&=\chi_{\ell}(\lambda_{s}),\qquad \ell \leq \frac{N-1}{2}, \\ \widetilde{\chi}_{\ell}(\lambda_{2s})&=\gamma^{-1}\chi_{\ell}(\lambda_{2s}),\qquad \ell \geq \frac{N+1}{2}, \\ \widetilde{\chi}_{\ell}(\lambda_{2s+1})&=\gamma \chi_{\ell}(\lambda_{2s+1}),\qquad \ell \geq \frac{N+1}{2}, \end{aligned}\end{aligned}$$ for $N$ odd and $$\begin{aligned} \begin{aligned} \widetilde{\chi}_{\ell}(\lambda_{s})&=\chi_{\ell}(\lambda_{s}),\qquad \ell <\frac{N}{2}, \\ \widetilde{\chi}_{\frac{N}{2}}(\lambda_{2s})&=\frac{\chi_{\frac{N}{2}}(\lambda_{2s})}{\sin\theta+\cos \theta},\qquad \ell=\frac{N}{2}, \\ \widetilde{\chi}_{\frac{N}{2}}(\lambda_{2s+1})&=\chi_{\frac{N}{2}}(\lambda_{2s+1})=0,\qquad \ell=\frac{N}{2}, \\ \widetilde{\chi}_{\ell}(\lambda_{2s})&=\gamma \chi_{\ell}(\lambda_{2s}),\qquad \ell >\frac{N}{2}, \\ \widetilde{\chi}_{\ell}(\lambda_{2s+1})&=\gamma^{-1}\chi_{\ell}(\lambda_{2s+1}),\qquad \ell>\frac{N}{2}, \end{aligned}\end{aligned}$$ for $N$ even. The proper sign should be chosen in taking square roots of the relations so that is fulfilled. Consider now the difference between the monic polynomials $\widetilde{P}_{N}$ and $P_{N}$: $$\begin{aligned} \widetilde{P}_{N}-P_{N}=\sqrt{h_{N}}(\cos 2\theta \,\widetilde{\chi}_{N}-\chi_N).\end{aligned}$$ This is a polynomial of degree $N-1$. Evaluating on the spectral points we have $$\begin{aligned} \widetilde{P}_{N}(\lambda_{2s})-P_{N}(\lambda_{2s})&=\sqrt{h_{N}}\,(1-\gamma^{-1}\cos 2\theta), \\ \widetilde{P}_{N}(\lambda_{2s+1})-P_{N}(\lambda_{2s+1})&=\sqrt{h_{N}}\,(\gamma \cos 2\theta -1),\end{aligned}$$ for $N$ odd. The right-hand sides are interchanged for $N$ even. It is readily checked with that $$\begin{aligned} (1-\gamma^{-1}\cos 2\theta)=\gamma \cos 2\theta-1.\end{aligned}$$ Hence for $N$ odd and even, $$\begin{aligned} \widetilde{P}_{N}(\lambda_{s})-P_{N}(\lambda_{s})=\sqrt{h_{N}}(\gamma \cos 2\theta -1).\end{aligned}$$ This shows that $\widetilde{P}_{N}-P_{N}$, a polynomial of degree $N-1$, is equal to a constant for $N$ points and must hence be identically equal to that constant. 
We thus have $$\begin{aligned} \label{ddp} \widetilde{P}_{N}(\lambda_{s})=P_{N}(\lambda_{s})+\zeta_0,\end{aligned}$$ with $$\begin{aligned} \zeta_0=J_1J_2\cdots J_{N}(\gamma \cos 2\theta-1).\end{aligned}$$ From the knowledge of $\widetilde{P}_{N+1}(\lambda_{s})=P_{N+1}(\lambda_{s})$ and of $\widetilde{P}_{N}(\lambda_{s})$, using the recurrence relations, it is possible to show by induction that \[84\] $$\begin{aligned} \widetilde{P}_{\ell}=P_{\ell},\end{aligned}$$ and $$\begin{aligned} \label{BB} \widetilde{P}_{N-\ell}=P_{N-\ell}+\zeta_{\ell} P_{\ell},\end{aligned}$$ for $$\begin{aligned} \ell= \begin{cases} 0, \ldots, \frac{N-1}{2} & \text{$N$ odd} \\ 0,\ldots, \frac{N}{2}-1 & \text{$N$ even} \end{cases},\end{aligned}$$ with in addition $$\begin{aligned} \widetilde{P}_{\frac{N}{2}}(\lambda)=P_{\frac{N}{2}}(\lambda)=(\lambda-\lambda_1)(\lambda-\lambda_3)\cdots (\lambda-\lambda_{N-1}),\end{aligned}$$ when $N$ is even. The constant $\zeta_{\ell}$ in is given by $$\begin{aligned} \zeta_{\ell}=\frac{\zeta_0}{J^2_{N+1-\ell}\cdots J_{N-1}^2 J_{N}^2}.\end{aligned}$$ Details will be given elsewhere [@Tsujimoto_2015]. The fact that the polynomials $\widetilde{P}_{\ell}$ are equal to the unperturbed polynomials $P_{\ell}$ for the first half of the indices/degrees was expected because the recurrence coefficients are the same up that point. It is readily checked that the evaluations on the spectral points are entirely consistent with the formulas when one allows for . The bi-lattices models and para-Krawtchouk polynomials ====================================================== We saw in Section 4 that within the class of $XX$ spin chains with non-uniform nearest neighbor couplings, the general conditions in order to have fractional revival at two sites are two-fold. One, the spectrum $\{\lambda_{s}\}$ of the one-excitation Hamiltonian $J$ must be comprised of the points of the bi-lattice or of an ordered subset of those grid points resulting from the removal of consecutive eigenvalues. Second, the transition matrix that diagonalizes $J$ must be made out of polynomials that are orthogonal with respect to the weight $w_{s}$ given by $$\begin{aligned} \label{weights-2} \begin{aligned} w_{2s}= \begin{cases} -\frac{\gamma \sqrt{h_{N}}}{P_{N+1}'(\lambda_{2s})} & \text{$N$ odd} \\ \frac{\sqrt{h_{N}}}{\gamma P_{N+1}'(\lambda_{2s})} & \text{$N$ even} \end{cases}, \qquad w_{2s+1}= \begin{cases} \frac{\sqrt{h_{N}}}{\gamma P_{N+1}'(\lambda_{2s+1})} & \text{$N$ odd} \\ -\frac{\gamma \sqrt{h_{N}}}{P_{N+1}'(\lambda_{2s+1})} & \text{$N$ even} \end{cases}, \end{aligned}\end{aligned}$$ with $\gamma$ the positive root of and $P_{N+1}'(\lambda)$ as before, the derivative of the characteristic polynomial. Finding the specifications of the corresponding spin chain amounts to an inverse spectral problem that can be solved by finding the polynomials occurring in the transition matrix and thereafter their recurrence coefficients which are the entries of $J$. In Section 4 still, we pointed out that there is an interesting special case that arises when the relative phase $\psi=\frac{\pi}{2}$. When this is so, $\gamma=1$, $w_{s}$ is given by and we know that $J$ is mirror-symmetric. This case has been considered in and we shall discuss it in detail here. The authors of have determined numerically the persymmetric Jacobi matrix in the perfectly balanced situation $\theta=\pi/8$. We shall indicate that there is in fact an exact description for any $\theta$. 
Indeed, chains with bi-lattice spectra and mirror-symmetric couplings have been analyzed with the help of the para-Krawtchouk polynomials that two of us have identified and characterized in . Since their Jacobi matrix is persymmetric, these models are poised to admit PST. The circumstances under which they shall exhibit PST in addition to FR will be discussed. We shall conclude the section by returning to the general case. We shall explain that it can be realized by combining the construction of the persymmetric matrices associated to bi-lattices and para-Krawtchouk polynomials with the isospectral deformations described in Section 5 and possibly surgeries. When $\psi=\pi/2$, the spectral condition becomes $$\begin{aligned} e^{-iT\lambda_{s}}=e^{i\phi}\left[\sin 2\theta +(-1)^{N+s}\,i\,\cos 2\theta \right],\end{aligned}$$ which amounts to $$\begin{aligned} e^{-iT\lambda_{2s}}=e^{i\phi}e^{i\left(\frac{\pi}{2}-2\theta\right)},\qquad e^{-iT\lambda_{2s+1}}=e^{i\phi}e^{-i\left(\frac{\pi}{2}-2\theta\right)}.\end{aligned}$$ Let us now calculate the transition amplitude ${{\,\langle k\rvert}\,e^{-iTJ}\,{\,\rvert\ell\rangle}}$ in analogy with what was done in Section 3 (see ). Assume that $N$ is odd, one has $$\begin{aligned} \begin{aligned} &{{\,\langle k\rvert}\,e^{-iT J}\,{\,\rvert\ell\rangle}} \\ &=\sum_{2s} e^{-iT\lambda_{2s}}\,w_{2s}\,\chi_{\ell}(\lambda_{2s})\chi_{k}(\lambda_{2s}) +\sum_{2s+1}e^{-iT\lambda_{2s+1}}\,w_{2s+1}\,\chi_{\ell}(\lambda_{2s+1})\chi_k(\lambda_{2s+1}) \\ &=e^{i\phi}e^{-i\pi/2}\Big[ \cos 2\theta \,\Big(\sum_{2s}w_{2s}\,\chi_{\ell}(\lambda_{2s})\chi_{k}(\lambda_{2s}) -\sum_{2s+1}w_{2s+1}\,\chi_{\ell}(\lambda_{2s+1})\chi_{k}(\lambda_{2s+1})\Big) \\ &+i \sin 2\theta\,\Big( \sum_{2s}w_{2s}\,\chi_{\ell}(\lambda_{2s})\chi_{k}(\lambda_{2s})+\sum_{2s+1}w_{2s+1}\,\chi_{\ell}(\lambda_{2s+1})\chi_{k}(\lambda_{2s+1})\Big)\Big]. \end{aligned}\end{aligned}$$ Making use of , one finds that $$\begin{aligned} {{\,\langle k\rvert}\,e^{-iTJ}\,{\,\rvert\ell\rangle}}=e^{i\phi}\left[\delta_{\ell k}\sin 2\theta+i \cos 2\theta \delta_{N-\ell, k}\right],\end{aligned}$$ which shows that $$\begin{aligned} e^{-iTJ}{\,\rvert\ell\rangle}=e^{i\phi}\left[\sin 2\theta {\,\rvert\ell\rangle}+i \cos 2\theta {\,\rvertN-\ell\rangle}\right],\qquad \ell=0,1,\ldots, N.\end{aligned}$$ As observed in , we see that a state localized at site $\ell$ will be revived at the sites $\ell$ and $N-\ell$. In matrix form, we have found that $$\begin{aligned} \label{93} e^{-iTJ}=e^{i\phi} \begin{pmatrix} \sin 2\theta & & & i \cos 2\theta \\ &\ddots & \udots & \\ & \udots & \ddots & \\ i \cos 2\theta &&& \sin 2\theta \end{pmatrix},\end{aligned}$$ for $N$ odd. In the same way, one shows that for $N$ even $$\begin{aligned} \label{94} e^{-iTJ}=e^{i\phi} \begin{pmatrix} \sin 2\theta &&&& i \cos 2\theta \\ & \ddots &&\udots & \\ &&e^{i\left(\frac{\pi}{2}-2\theta \right)} && \\ &\udots && \ddots & \\ i \cos 2\theta &&&& \sin 2\theta \end{pmatrix}.\end{aligned}$$ The set of couplings and magnetic field strengths for which Hamiltonians of the form will lead to this behavior has been provided explicitly in . They happen to be formed of the recurrence coefficients of the orthogonal polynomials that have been called the para-Krawtchouk polynomials. 
These OPs are precisely those that are associated to persymmetric Jacobi matrices with the bi-lattice spectra $$\begin{aligned} \label{latt} \overline{x}_{s}=s+\frac{1}{2}(\delta-1)(1-(-1)^{s}).\end{aligned}$$ They have been constructed with the help of the Euclidean algorithm, described in , from the knowledge of the two polynomials $\overline{P}_{N+1}$ and $\overline{P}_{N}$, the former being prescribed by the spectrum and the latter by the mirror symmetry. They have been named para-Krawtchouk polynomials on the one hand because their spectrum coincides, when $N\rightarrow \infty$, with that of the parabosonic oscillator [@1994_Rosenblum] and on the other hand because they become the standard Krawtchouk polynomials when $\delta=1$. Their recurrence coefficients and properties are given in . One has for $N$ odd $$\begin{aligned} \label{96} \overline{B}_{\ell}=\frac{N-1+\delta}{2},\qquad \overline{J}_{\ell}=\frac{1}{2}\sqrt{\frac{\ell(N+1-\ell)((N+1-2\ell)^2-\delta^2)}{(N-2\ell)(N-2\ell+2)}},\end{aligned}$$ and for $N$ even $$\begin{aligned} \label{97} \begin{aligned} \overline{B}_{\ell}&=\frac{N-1+\delta}{2}+\frac{(\delta-1)(N+1)}{4}\left(\frac{1}{2\ell-N-1}-\frac{1}{2\ell+1-N}\right), \\ \overline{J}_{\ell}&=\frac{1}{2}\sqrt{\frac{\ell(N+1-\ell)((2\ell-N-1)^2-(\delta-1)^2)}{(2\ell-N-1)^2}}, \end{aligned}\end{aligned}$$ for $\ell=0,1,\ldots, N$. These formulas correspond to the lattice . It is easy to see from the recurrence relation $$\begin{aligned} \overline{x}\,\overline{P}_{\ell}(\overline{x})=\overline{P}_{\ell+1}(\overline{x})+\overline{B}_{\ell} \overline{P}_{\ell}(\overline{x})+\overline{J}_{\ell}^2 \overline{P}_{\ell-1}(\overline{x}),\end{aligned}$$ of the monic polynomials for instance, that an affine transformation of the lattice points $$\begin{aligned} x_{s}=a \overline{x}_{s}+b,\end{aligned}$$ will lead to orthogonal polynomials $P_{\ell}(x)$ with recurrence coefficients given by $$\begin{aligned} \label{100} B_{\ell}=a \overline{B}_{\ell}+b,\qquad J_{\ell}=a \overline{J}_{\ell}.\end{aligned}$$ Note that the diagonal terms $\overline{B}_{\ell}$, that is the magnetic fields, are the same at every site for $N$ odd; see . They can thus be made equal to zero by an affine transformation. This is not so for $N$ even however. Comparing with and making use of , we see that by choosing the global phase $\phi$ to be $$\begin{aligned} \phi=\frac{\pi}{2}(N-1+\delta)= \begin{cases} \frac{\pi(N+1)}{2} & \text{$N$ odd} \\ \frac{\pi(N-1)}{2}& \text{$N$ even} \end{cases},\end{aligned}$$ and in view of -, the spin chains with the fractional revival features described in this section have their couplings and magnetic fields given by $$\begin{aligned} \label{102} B_{\ell}=0,\qquad J_{\ell}=\frac{\pi}{T} \overline{J}_{\ell},\end{aligned}$$ for $N$ odd and $$\begin{aligned} \label{103} B_{\ell}=-\frac{\theta}{T}(N+1)\left(\frac{1}{2\ell-N-1}-\frac{1}{2\ell+1-N}\right),\quad J_{\ell}=\frac{\pi}{T} \overline{J}_{\ell},\end{aligned}$$ for $N$ even, with $\overline{J}_{\ell}$ given by and where $\delta=1+4\theta/\pi$ for $N$ odd and $\delta=1-4\theta/\pi$ for $N$ even. As observed also numerically in , relative to the coefficients of the Krawtchouk chain given in , the magnetic fields remains zero for $N$ odd while they are proportional to $\theta$ for $N$ even. Note that the Krawtchouk chain parameters are recovered when $\theta=0$ and that contrary to the isospectral models with fractional revival covered in the last section, here, all the $J_{\ell}$ are modified in comparison with those of . 
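The fractional-revival property encoded in these couplings can be verified directly by exponentiating the one-excitation Hamiltonian. The sketch below is an added numerical illustration, not part of the original text; it assumes the conventions used above ($N+1$ sites labelled $0,\ldots,N$, coupling $J_{\ell}$ between sites $\ell-1$ and $\ell$, zero magnetic fields for $N$ odd, $\delta=1+4\theta/\pi$) and checks that $|\langle \ell|e^{-iTJ}|\ell\rangle|=\sin 2\theta$ and $|\langle N-\ell|e^{-iTJ}|\ell\rangle|=\cos 2\theta$, leaving the global phase aside.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check of fractional revival for N odd (conventions assumed:
# sites 0..N, coupling J_l between sites l-1 and l, zero magnetic fields).
N, theta, T = 9, np.pi / 10, 1.0            # N odd, 0 < theta < pi/4
delta = 1 + 4 * theta / np.pi               # bi-lattice parameter for N odd

l = np.arange(1, N + 1)
Jbar = 0.5 * np.sqrt(l * (N + 1 - l) * ((N + 1 - 2 * l) ** 2 - delta ** 2)
                     / ((N - 2 * l) * (N - 2 * l + 2)))
J = (np.pi / T) * Jbar                      # rescaled couplings, B_l = 0

H1 = np.diag(J, 1) + np.diag(J, -1)         # one-excitation Hamiltonian
U = expm(-1j * T * H1)

# Up to a global phase, U has sin(2 theta) on the diagonal and
# i cos(2 theta) on the antidiagonal.
assert np.allclose(np.abs(np.diag(U)), np.sin(2 * theta))
assert np.allclose(np.abs(np.diag(np.fliplr(U))), np.cos(2 * theta))
```

Any odd $N$ and any $0<\theta<\pi/4$ can be used in this check; the revival time $T$ only sets the overall scale of the couplings.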
Interestingly, the para-Krawtchouk models have been shown in to enact PST for $$\begin{aligned} \delta=\frac{M_1}{M_2},\end{aligned}$$ where $M_1$ and $M_2$ are positive co-prime integers and $M_1$ is odd. Let us here explain how spin chains that lead to fractional revival can also exhibit perfect state transfer. To that end, introduce the Hadamard matrices \[Hadamard\] $$\begin{aligned} H=\frac{1}{\sqrt{2}} \begin{pmatrix} 1&&&&&1 \\ &\ddots &&&\udots & \\ &&1&1&& \\ &&1&-1&& \\ &\udots &&&\ddots & \\ 1&&&&&-1 \end{pmatrix},\end{aligned}$$ for $N$ odd and $$\begin{aligned} H=\frac{1}{\sqrt{2}} \begin{pmatrix} 1&&&&&&1 \\ &\ddots &&&&\udots& \\ &&1&&1&& \\ &&&\sqrt{2}&&& \\ &&1&&-1&& \\ &\udots &&&&\ddots& \\ 1&&&&&&-1 \end{pmatrix},\end{aligned}$$ for $N$ even. It is readily seen that $$\begin{aligned} e^{-iTJ}=H\,U(\alpha)\,H,\end{aligned}$$ with $\alpha=\frac{\pi}{2}-2\theta$ and $U(\alpha)$ the unitary diagonal matrix with elements $$\begin{aligned} U_{ij}(\alpha)=\delta_{ij} \begin{cases} e^{i\alpha} & i,j=0,\ldots, \lfloor \frac{N}{2}\rfloor \\ e^{-i\alpha} & i,j=\lfloor \frac{N}{2}\rfloor+1, \ldots, N \end{cases},\end{aligned}$$ where $\lfloor x\rfloor$ is the integer part of $x$. For $M$ an integer, it thus follows that $$\begin{aligned} e^{-iMTJ}=H\,U(M\alpha)\,H,\end{aligned}$$ since $H^2=1$. Therefore, after a time $MT$ one has $\frac{\pi}{2}-2\theta\rightarrow M\left(\frac{\pi}{2}-2\theta\right)$. Express now the manifestation of fractional revival in the form $$\begin{aligned} e^{-iTJ}{\,\rvert0\rangle}=e^{i\phi}\left[\cos\left(\frac{\pi}{2}-2\theta\right){\,\rvert0\rangle}+i \sin \left(\frac{\pi}{2}-2\theta\right){\,\rvert N\rangle}\right].\end{aligned}$$ It follows that $$\begin{aligned} e^{-iM T J}{\,\rvert0\rangle}=e^{i\phi}\left[\cos M\left(\frac{\pi}{2}-2\theta\right){\,\rvert0\rangle}+i \sin M \left(\frac{\pi}{2}-2\theta\right){\,\rvert N\rangle}\right].\end{aligned}$$ Perfect state transfer will occur if $$\begin{aligned} \label{Cnd-3} M\left(\frac{\pi}{2}-2\theta\right)=M_1\left(\frac{\pi}{2}\right),\end{aligned}$$ with $M_1$ an arbitrary odd number, since then $e^{-iMTJ}{\,\rvert0\rangle}=e^{i\widetilde{\phi}}{\,\rvert N\rangle}$ with $\widetilde{\phi}$ some phase factor. Condition is readily seen to be equivalent to ; when $N$ is even and $\delta=1-\frac{4\theta}{\pi}$ it is immediate, and when $N$ is odd and $\delta=1+\frac{4\theta}{\pi}$ one uses the properties of the cosine to conclude. Hence when is verified these spin chains with FR will also exhibit PST at time $MT$. Take for example the perfectly balanced case of FR, which occurs when $\theta=\pi/8$; one then has $\delta=1+\frac{4\theta}{\pi}=\frac{3}{2}$ for $N$ odd or $\delta=1-\frac{4\theta}{\pi}=\frac{1}{2}$ for $N$ even, and it follows that PST will also happen. The evolution goes as follows: at time $t=T$, the packet initially at site $0$ is revived at $0$ and $N$, at $t=2T$ it is perfectly transferred to $N$, at $t=3T$ it is revived again at $0$ and $N$, at $t=4T$ it perfectly returns to $0$, and so on. Let us now complete our systematic analysis by considering the general case where the phase $\psi$ is arbitrary. As stated at the beginning of the section, the polynomials that will determine the general Hamiltonians are orthogonal with respect to the weights associated to the bi-lattices . We now understand that we can obtain these polynomials in two steps.
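Before turning to the two-step construction, the perfect-state-transfer statement just made can be checked with the same matrices: for the balanced case $\theta=\pi/8$ ($\delta=3/2$, $N$ odd) one verifies numerically that the excitation is split evenly at $t=T$ and fully transferred at $t=2T$. The sketch below is again only an illustration under the conventions assumed above.

```python
import numpy as np
from scipy.linalg import expm

# Balanced fractional revival (theta = pi/8, delta = 3/2, N odd):
# equal-weight revival at sites 0 and N at t = T, PST to site N at t = 2T
# (condition M = 2, M_1 = 1).
N, T = 9, 1.0
theta = np.pi / 8
delta = 1 + 4 * theta / np.pi               # = 3/2

l = np.arange(1, N + 1)
J = (np.pi / T) * 0.5 * np.sqrt(l * (N + 1 - l) * ((N + 1 - 2 * l) ** 2 - delta ** 2)
                                / ((N - 2 * l) * (N - 2 * l + 2)))
H1 = np.diag(J, 1) + np.diag(J, -1)

U1 = expm(-1j * T * H1)                     # balanced revival at t = T
U2 = expm(-2j * T * H1)                     # perfect state transfer at t = 2T
assert np.allclose(np.abs(U1[[0, N], 0]), 1 / np.sqrt(2))
assert np.isclose(np.abs(U2[N, 0]), 1.0)
```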
First, we determine the para-Krawtchouk polynomials associated to a bi-lattice with $$\begin{aligned} \label{abv-4} \delta=1\pm \frac{4\sigma}{\pi},\quad \binom{\text{$N$ odd}}{\text{$N$ even}},\end{aligned}$$ where $\sigma$ is given by $$\begin{aligned} \label{sigma-def} \sigma=\frac{\pi +\eta-\xi}{4}.\end{aligned}$$ The Jacobi matrix $J$ is then given by and with $\theta$ replaced by $\sigma$ and again using the $+$ sign in when $N$ is odd and the $-$ sign when $N$ is even. At this point we have that $e^{-iTJ}$ is given by or with $\theta$ again replaced by $\sigma$ and $\phi$ by $$\begin{aligned} \label{phibar} \overline{\phi}=\frac{\pi}{4}(N\pm 1)+\frac{1}{2}(\eta+\xi),\quad \binom{\text{$N$ odd}}{\text{$N$ even}}.\end{aligned}$$ This gives us the polynomials $P_{\ell}$ that are associated to the bi-lattice but are orthogonal with respect to the weights with $\gamma=1$ (corresponding to a persymmetric Jacobi matrix) and the $h_{N}$ of the para-Krawtchouk polynomials. The required polynomials that are properly orthogonal against the weights are the perturbed polynomials $\widetilde{P}_{\ell}$ defined in with $$\begin{aligned} \zeta_0=J_1J_2\cdots J_{N}(\widetilde{\gamma}\cos 2\tau-1),\end{aligned}$$ with $\tau$ a new angle so that $\widetilde{\gamma}-\widetilde{\gamma}^{-1}=-2\tan 2\tau$ and $J_{\ell}$ the recurrence coefficients determined in the first step. These polynomials $\widetilde{P}_{\ell}$ will yield through their recurrence relation, the parameters $\widetilde{J}_{\ell}$ and $\widetilde{B}_{\ell}$ of the generic chain. The modifications relative to the para-Krawtchouk coefficients are given by the formulas with $\theta$ replaced by $\tau$. The determination of $e^{-iT\widetilde{J}}$ is achieved by conjugating $e^{-iTJ}$ as given in or respectively with the matrix $V$ of or with $\theta$ replaced by $\tau$. Thus are determined the Hamiltonians (within the class considered) that have general fractional revivals at two sites. Note that as needed, the two-step process has introduced two angles $\sigma$ and $\tau$. The correspondence with the original parameters $\theta$ and $\psi$ can be obtained by determining explicitly $e^{-iT\widetilde{J}}$ as indicated before and identifying the coefficients so that $e^{-iT\widetilde{J}}{\,\rvert0\rangle}=e^{i\phi}\left[\sin 2\theta {\,\rvert0\rangle}+i e^{i\psi}\cos 2\theta {\,\rvertN\rangle}\right]$. This leads to the following relations \[cccc\] $$\begin{aligned} \label{aaaa} e^{i\overline{\phi}}(\sin 2\sigma+i \cos 2\sigma \sin 2\tau)=e^{i\phi}\sin 2\theta, \\ \label{bbbb} i e^{i\overline{\phi}}\cos 2\sigma \cos 2\tau=e^{i(\phi+\psi)} \cos 2\theta,\end{aligned}$$ with $\overline{\phi}$ given by . These conditions are the same for $N$ odd or even provided the appropriate $\overline{\phi}$ is chosen. Equation immediately leads to $$\begin{aligned} \phi=\overline{\phi}-\psi+\frac{\pi}{2}+2n\pi,\qquad n\in \mathbb{Z}.\end{aligned}$$ The real part of yields $$\begin{aligned} \label{abv-6} \sin 2\sigma=\sin 2\theta \sin \psi,\end{aligned}$$ which must be identically satisfied. Upon writing the above equation in the form $$\begin{aligned} \cos \left(\frac{\xi-\eta}{2}\right)=\sin 2\theta \sin \psi,\end{aligned}$$ using , that holds is verified from trigonometric identities having recalled the definitions of $\xi$ and $\eta$ (see the sentence after ) and observed that $\sin \xi$ and $\sin \eta$ must have opposite signs. 
There then remains from the conditions $$\begin{aligned} \cos 2\tau&=\cos 2\theta \mathrm{cosec}\,\left(\frac{\xi-\eta}{2}\right), \\ \sin 2\tau&=\sin 2\theta \cos \psi \mathrm{cosec}\,\left(\frac{\xi-\eta}{2}\right),\end{aligned}$$ which determine $\tau$. Conclusion ========== Let us summarize our findings. We have completely characterized the $XX$ spin chains with nearest neighbor couplings that admit fractional revival at two sites. There are two basic ways according to which FR can be realized. One is via isospectral deformations of chains with the PST property and the other is by a mirror-symmetric set of couplings corresponding to the recurrence coefficients of the para-Krawtchouk polynomials. Hamiltonians with FR at two sites controlled by two arbitrary parameters are obtained by compounding these two approaches. The second approach comes with a complete set of couplings and magnetic fields while the first approach only sees the modification of a few central coefficients of a parent PST chain. The time $T$ for FR occurrence doest not depend on the length of the chain. The first method can be applied to any PST chain to obtain a chain with FR. Assuming the model is exactly solvable to start with, it will remain so under the isospectral deformation. The models corresponding to the second way are analytic and may exhibit PST in addition to FR. A note is in order here. In principle all chains with FR at two sites can be obtained from the generic two-parameter models by surgeries. Indeed, since we are dealing with spectra that are finite, any admissible set of eigenvalues can be obtained by removing levels from a bi-lattice chosen as large as required. With every such removal, the analytic expressions for the chain parameters will become more and more involved thus obscuring the exact solvability property. It has been indicated that information transfer can be achieved with spin chains showing FR at two sites. Knowing that the clone of the initial information will be at the end of the chain with definite probability $(\cos 2\theta)^2$ at the prescribed time $T$, the end site content at that time can thus be used as input to some quantum process or computation with the effect that the final output of the computation will provide the right answer with known probability related to $(\cos 2\theta)^2$. Observe that this probability can be tuned by setting correspondingly the chain couplings. Note also that the presence of another clone at the site $(\ell=0)$ where the data is entered could be used periodically in an experimental or practical context to check that transmission is proceeding without alterations since the outcomes at $\ell=0$ and at $\ell=N$ are correlated. It has also been pointed out that balanced perfect revival can generate entanglement. Indeed it is readily observed that for $\theta=\pi/8$ the sites $0,1, N-1$ and $N$ for instance, will support at time $T$ the entangled state ${\,\rvert\uparrow\rangle}{\,\rvert\downarrow\rangle}+{\,\rvert\downarrow\rangle}{\,\rvert\uparrow\rangle}$. Another question has to do with precision. Throughout this paper we have looked for situations where FR occurs with probability 1. This could be unduly stringent in view of the unavoidable instrumental error for instance. In fact, it would suffice in that perspective to consider situations where FR can happen with probability as close to 1 as desired. 
This question has been analyzed in the case of full revival and has been referred to as almost perfect state transfer (APST) or pretty good state transfer . One may assume that the isospectral deformations of a chain with APST will lead to chains with almost perfect fractional revival (APFR). Furthermore it has been shown in that the para-Krawtchouk chains admit APST for a time $T$ independent of $N$ if the bi-lattice parameter $\delta$ is irrational, they should thus admit APFR in those cases too. The robustness of FR in the para-Krawtchouk model has also been checked numerically in . Finally, it would be of great interest to study the possibilities for fractional revival at more than two sites. It is known that any unitary matrix can be presented in a form with two diagonals and 2 antidiagonals [@1993_Watkins_SIAMRev_35_430]; this is related to CMV theory . Assume that $e^{-iTJ}$ is in that form in the register basis ${\,\rvert\ell\rangle}$, where $\ell=0,1,\ldots, N$. This implies revival at up to four sites. A relevant question is to determine the Hamiltonians $H$ with their one-excitation restrictions $J$ that will lead to such unitaries. This is likely to involve operators beyond the realm of nearest-neighbor interactions. We hope to report on this question in the near future. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank L. Banchi, S. Bose, G. Coutinho, M. Christandl and S. Severini for their collegial input. While this paper was being completed we were informed that L. Banchi and G. Coutinho had obtained in a different way the fractional revival described in Section 4 using the same perturbed polynomials that we have identified. We are very grateful that they shared their results with us prior to publication. VXG holds a scholarship from the Natural Science and Engineering Research Council of Canada (NSERC). The research of LV is supported in part by NSERC. AZ would like to thank the Centre de recherches mathématiques for its hospitality. References {#references .unnumbered} ========== [10]{} S. Bose. . , 48:13–30, 2007. A. Kay. . , 8:641–676, 2010. G. M. Nikolopoulos and I. Jex. . Springer, 2014. R. W. Robinett. . , 392(1-2):1–119, March 2004. M. Berry, I. Marzoli, and W. Schleich. . , 14:39–46, June 2001. D. L. Aronstein and C. R. Jr. Stroud. . , 55(6):4526–4537, 1997. B. Chen, Z. Song, and C.-P. Sun. . , 75:012113, 2007. L. Banchi, E. Compagno, and S. Bose. . , 91:052323, 2015. V. X. Genest, L. Vinet, and A. Zhedanov. . , 2015. L. Vinet and A. Zhedanov. . , 85:012323, 2012. C. Albanese, M. Christandl, N. Datta, and A. Ekert. . , 93:230502, 2004. K. [Rama Koteswara Rao]{}, T. S. Mahesh, and A. Kumar. . , 90(1):012306, July 2014. L. Vinet and A. Zhedanov. . , 45(26):265304, July 2012. T. Chihara. . . Dover Publications, reprint edition, 2011. V. X. Genest, S. Tsujimoto, L. Vinet, and A. Zhedanov. . G. M. L. Gladwell. . . Springer, 2^nd^ edition, 2004. R. Koekoek, P. A. Lesky, and R. F. Swarttouw. . Springer, 2010. T. Shi, Y. Li, Z. Song, and C.-P. Sun. . , 71:032309, 2005. N. I. Stoilova and J. [Van der Jeugt]{}. . , 7:33–45, 2011. L. Vinet and A. Zhedanov. . , 343:012125, 2012. E. I. Jafarov and J. [Van der Jeugt]{}. . , 43:405301, 2010. R. Chakrabarti and J. [Van der Jeugt]{}. . , 43:085302, 2010. G. Andrews, R. Askey, and R. Roy. , volume 71 of [*[Encyclopedia of Mathematics and its Applications]{}*]{}. Cambridge University Press, 2001. L. Dai, Y. P. Feng, and Kwek L. C. . , 43(3):035302, January 2010. M. Rosenblum. . In A. 
Feintuch and I. Gohberg, editors, [*[Nonselfadjoint operators and related topics]{}*]{}, volume 73 of [*[Operator Theory: Advances and Applications]{}*]{}. Springer, 1994. L. Vinet and A. Zhedanov. . , 86:052319, 2012. C. Godsil, S. Kirkland, S. Severini, and J. Smith. . , 109:050502, 2012. D. S. Watkins. . , 35(3):430–471, September 1993. M. J. Cantero, L. Moral, and L. Velazquez. . , 362:29–56, March 2003.
--- abstract: 'The present article can be considered as a complement to the work P.R.D **93,** 045002 (2016) where a nonperturbative approach to QED with $x$-electric critical potential steps was developed. In the beginning we study conditions under which the in- and out-spaces of the QED under consideration are unitarily equivalent. Then we construct a general density operator with the vacuum initial condition. Such an operator describes a deformation of the initial vacuum state by $x$-electric critical potential steps. We construct reductions of the deformed state to the electron and positron subsystems, calculating the loss of information in these reductions. We illustrate the general consideration by studying the deformation of the quantum vacuum between two capacitor plates. Finally we calculate the entanglement measures of these reduced matrices as von Neumann entropies.' author: - | S.P. Gavrilov$^{1,2}$[^1], D. M. Gitman$^{1,3,4}$[^2] and A.A. Shishmarev $^{1,4}\thanks{% [email protected]}$\ [$^{1}$ Department of Physics, Tomsk State University, Tomsk 634050, Russia; ]{}\ [$^{2}$ Department of General and Experimental Physics, ]{}\ [Herzen State Pedagogical University of Russia,]{}\ [Moyka embankment 48, 191186 St. Petersburg, Russia;]{}\ [$^{3}$ P.N. Lebedev Physical Institute, 53 Leninsky prospekt, 119991 Moscow, Russia;]{}\ [$^{4}$ Institute of Physics, University of São Paulo, CP 66318, CEP 05315-970, São Paulo, SP, Brazil]{}\ title: Unitarity and vacuum deformation in QED with critical potential steps --- Introduction ============ Problems of quantum field theory with external fields violating the vacuum stability have been studied systematically for a long time. Recently they have attracted special attention due to possible new applications in astrophysics and in the physics of nanostructures. A nonperturbative formulation of QED with the so-called $t$-electric potential steps (time-dependent potentials) was developed in Refs. [@Gitman1; @Gitman2; @Gitman3] and applied to various model and realistic physical problems, see e.g. [@GGT; @DvGavGi; @GavGitSh15]. In the recent work [@GavGi15] Gavrilov and Gitman succeeded in constructing a consistent version of QED with the so-called $x$-electric critical potential steps (time-independent nonuniform electric fields of constant direction that are concentrated in restricted space areas), for which a large area of new important applications opens, see reviews in [@GavGi15; @L-field]. However, many principal questions of the formulation still require detailed clarification. The present work is devoted to some of them. In the beginning we study conditions under which the in- and out-spaces of the QED under consideration are unitarily equivalent. Then we construct a general density operator with the vacuum initial condition. Such an operator describes a deformation of the initial vacuum state by $x$-electric critical potential steps. We construct reductions of the deformed state to the electron and positron subsystems, calculating the loss of information in these reductions. We illustrate the general consideration by studying the deformation of the quantum vacuum between two capacitor plates. In this article we generally adopt the notation of the paper [@GavGi15], where the general theory of QED with $x$-electric critical potential steps was developed, and of Ref. [@L-field], where the particular case of a constant electric field between two capacitor plates was studied. In fact, the present article can be considered as a complement to the work [@GavGi15].
Unitarity in QED with $x$-electric potential steps\[Sec.2\] =========================================================== It was shown in Ref. [@GavGi15] that in the presence of $x$-electric potential steps the quantized Dirac field can be described in terms of in- and out-electrons and positrons. Such particles are characterized by quantum numbers $n$ that can be divided into five ranges $\Omega _{i}$, $i=1,...,5$. We denote the corresponding quantum numbers by $n_{i}$, so that $n_{i}\in \Omega _{i}$. The manifold of all the quantum numbers $n$ is denoted by $\Omega $, so that $\Omega =\Omega _{1}\cup \cdots \cup \ \Omega _{5}$. The in- and out-vacua can be factorized$$\left\vert 0,\mathrm{in}\right\rangle =\sideset{}{^{\,\lower1mm\hbox{$\otimes$}}}\tprod\limits_{i=1}^{5}\left\vert 0,\mathrm{in}\right\rangle ^{\left( i\right) }\ ,\ \ \left\vert 0,\mathrm{out}\right\rangle =\sideset{}{^{\,\lower1mm\hbox{$\otimes$}}}\tprod\limits_{i=1}^{5}\left\vert 0,\mathrm{out}\right\rangle ^{\left( i\right) }\ , \label{2.1}$$where $\left\vert 0,\mathrm{in}\right\rangle ^{\left( i\right) }$ and $\left\vert 0,\mathrm{out}\right\rangle ^{\left( i\right) }$ are the partial vacua in the ranges $\Omega _{i}$. Note that in each range $\Omega _{i}$ it is also possible to factorize the vacuum vectors in modes with fixed quantum number $n$, so that$$\left\vert 0,\mathrm{in}\right\rangle ^{\left( i\right) }=\prod_{n\in \Omega _{i}}\left\vert 0,\mathrm{in}\right\rangle _{n}^{\left( i\right) },\text{ \ }\left\vert 0,\mathrm{out}\right\rangle ^{\left( i\right) }=\prod_{n\in \Omega _{i}}\left\vert 0,\mathrm{out}\right\rangle _{n}^{\left( i\right) }. \label{2.1a}$$ It was shown that all the in- and out-vacua, except the vacua in the range $\Omega _{3}$ (the so-called Klein zone), coincide,$$\left\vert 0,\mathrm{out}\right\rangle ^{\left( i\right) }=\left\vert 0,\mathrm{in}\right\rangle ^{\left( i\right) },\ \ i=1,2,4,5,\ \ \left\vert 0,\mathrm{out}\right\rangle ^{\left( 3\right) }\neq \left\vert 0,\mathrm{in}\right\rangle ^{\left( 3\right) }. \label{2.2}$$ In what follows, we use the subindex $K$ to denote all the quantities from the Klein zone, e.g. $\left\vert 0,\mathrm{in}\right\rangle ^{\left( 3\right) }=\left\vert 0,\mathrm{in}\right\rangle ^{\left( K\right) }$, $\Omega _{3}=\Omega _{K},$ and so on. The vacuum-to-vacuum transition amplitude $c_{v}=\langle 0,\mathrm{out}|0,\mathrm{in}\rangle $ coincides (due to Eq. (\[2.2\])) with the vacuum-to-vacuum transition amplitude $c_{v}^{\left( K\right) }$ in the Klein zone,$$c_{v}=\langle 0,\mathrm{out}|0,\mathrm{in}\rangle =c_{v}^{\left( K\right) }=\ ^{\left( K\right) }\langle 0,\mathrm{out}|0,\mathrm{in}\rangle ^{(K)}\ .
\label{2.4}$$ The linear canonical transformation between the in and out sets of creation and annihilation operators in the Klein zone ($a$ and $b$ operators are related to electrons and positrons, respectively) can be written in the following form$$\begin{aligned} \ & ^{-}a_{n}(\mathrm{in})=w_{n}\left( +|+\right) ^{-1}\left[ \text{ }^{+}a_{n}(\mathrm{out})+w_{n}\left( +-|0\right) \ _{+}b_{n}^{\dagger }(\mathrm{out})\right] , \notag \\ & \ _{-}b_{n}^{\dagger }(\mathrm{in})=w_{n}\left( -|-\right) ^{-1}\left[ \ _{+}b_{n}^{\dagger }(\mathrm{out})-w_{n}\left( +-|0\right) \text{ }^{+}a_{n}(\mathrm{out})\right] , \label{2.3a}\end{aligned}$$where $$\begin{aligned} &&w\left( +|+\right) _{n^{\prime }n}=c_{v}^{-1}\langle 0,\mathrm{out}\left\vert \ ^{+}a_{n^{\prime }}\left( \mathrm{out}\right) \ ^{-}a_{n}^{\dagger }(\mathrm{in})\right\vert 0,\mathrm{in}\rangle , \notag \\ &&w\left( -|-\right) _{n^{\prime }n}=c_{v}^{-1}\langle 0,\mathrm{out}\left\vert \ _{+}b_{n^{\prime }}\left( \mathrm{out}\right) \ _{-}b_{n}^{\dagger }(\mathrm{in})\right\vert 0,\mathrm{in}\rangle \,, \label{2.5}\end{aligned}$$are the relative scattering amplitudes of electrons and positrons, and$$\begin{aligned} &&w\left( +-|0\right) _{n^{\prime }n}=c_{v}^{-1}\langle 0,\mathrm{out}\left\vert \ ^{+}a_{n^{\prime }}\left( \mathrm{out}\right) \ _{+}b_{n}\left( \mathrm{out}\right) \right\vert 0,\mathrm{in}\rangle \,, \notag \\ &&w\left( 0|-+\right) _{nn^{\prime }}=c_{v}^{-1}\langle 0,\mathrm{out}\left\vert \ _{-}b_{n}^{\dagger }(\mathrm{in})\ ^{-}a_{n^{\prime }}^{\dagger }(\mathrm{in})\right\vert 0,\mathrm{in}\rangle \,. \label{2.6}\end{aligned}$$are the relative amplitudes of pair creation and pair annihilation, and$$c_{v}=c_{v}^{\left( K\right) }=\dprod\limits_{n}w_{n}\left( -|-\right) ^{-1}\,. \label{2.7}$$All these amplitudes can be expressed via the coefficients $g\left( _{\zeta }\left\vert ^{\zeta ^{\prime }\ }\right. \right) $ which, in turn, are calculated via the corresponding solutions of the Dirac equation with $x$-electric potential steps. An important question is whether the in- and out-spaces are unitarily equivalent. The answer is positive if the linear canonical transformation (\[2.3a\]) (together with its adjoint transformation) is a proper one. In the latter case there exists a unitary operator $V$, such that$$\begin{aligned} &&V\left( a(\mathrm{out}),a^{\dag }(\mathrm{out}),b(\mathrm{out}),b^{\dag }(\mathrm{out})\right) V^{\dag }=\left( a(\mathrm{in}),a^{\dag }(\mathrm{in}),b(\mathrm{in}),b^{\dag }(\mathrm{in})\right) , \notag \\ &&\left\vert 0,\mathrm{in}\right\rangle =V\left\vert 0,\mathrm{out}\right\rangle ,\ V^{\dag }=V^{-1}\ . \label{2.8}\end{aligned}$$ Let us denote all the out operators via $\alpha $ and all the in operators via $\beta .$ Then the linear uniform canonical transformation between these operators can be written as (we consider only the Fermi case here)$$\beta =\Phi \alpha +\Psi \alpha ^{+},\ \ \Phi \Phi ^{+}+\Psi \Psi ^{+}=1,\ \Phi \Psi ^{T}+\Psi \Phi ^{T}=0. \label{2.9}$$According to Refs. [@Berezin; @Kiperman], transformation (\[2.9\]) is a proper one if $\Psi $ is a Hilbert-Schmidt operator, i.e., $\dsum\limits_{m,n}\left\vert \Psi _{mn}\right\vert ^{2}<\infty $. It is easy to see that the Hilbert-Schmidt criterion for the transformation (\[2.3a\]) reads$$\sum\limits_{n}\left[ \left\vert \frac{w_{n}\left( +-|0\right) }{w_{n}\left( +|+\right) }\right\vert ^{2}+\left\vert \frac{w_{n}\left( +-|0\right) }{w_{n}\left( -|-\right) }\right\vert ^{2}\right] <\infty . \label{2.10}$$ As was shown in Ref.
[@GavGi15],$$\left\vert \frac{w_{n}\left( +-|0\right) }{w_{n}\left( +|+\right) }\right\vert ^{2}=N_{n}^{a},\ \left\vert \frac{w_{n}\left( +-|0\right) }{w_{n}\left( -|-\right) }\right\vert ^{2}=N_{n}^{b}, \label{2.11}$$where $N_{n}^{a}$ and $N_{n}^{b}$ are the differential mean numbers of electrons and positrons created from the vacuum by the potential step. Then the left-hand side of Eq. (\[2.10\]) is the total number $N$ of particles created from the vacuum, so that the unitarity condition can be written as$$\sum\limits_{n}\left( N_{n}^{a}+N_{n}^{b}\right) =N<\infty . \label{2.12}$$Note that the in- and out-spaces of scalar QED in the presence of critical potential steps are unitarily equivalent under the same condition. For a realistic external field limited in space and time this condition is obviously satisfied. Inequality (\[2.10\]) derived for QED with $x$-electric potential steps can be considered as one more confirmation of the consistency of the latter theory and of the correct interpretation of in- and out-particles there. One should note that a qualitatively similar result was established in Ref. [@Gitman2] for QED with time-dependent electric potential steps. Deformation of initial vacuum state\[Sec.3\] ============================================ In this section we study the deformation of the initial vacuum state under the action of an $x$-electric potential step. In the Heisenberg picture, the density operator of the system whose initial state is the vacuum is given by the equation$$\hat{\rho}=|0,\mathrm{in}\rangle \langle 0,\mathrm{in}|. \label{3.1}$$The in- and out-Fock spaces are related by the unitary operator $V$, see (\[2.8\]). Then$$\hat{\rho}=V|0,\mathrm{out}\rangle \langle 0,\mathrm{out}|V^{\dag }\ . \label{3.2}$$ In QED with $x$-electric potential steps the operator $V$ was constructed in [@GavGi15]. Since it can be factorized, the density operator (\[3.2\]) can be factorized as well, $$\begin{aligned} &&V=\prod\limits_{i=1}^{5}V^{(i)},\ \ |0,\mathrm{in}\rangle ^{(i)}=V^{(i)}|0,\mathrm{out}\rangle ^{(i)}, \notag \\ &&\hat{\rho}=\prod\limits_{i=1}^{5}V^{(i)}|0,\mathrm{out}\rangle ^{\left( i\right) }\;^{(i)}\langle 0,\mathrm{out}|V^{(i)\dag }. \label{3.3}\end{aligned}$$Due to the specific structure of the operators $V^{\left( i\right) }$,$\ i=1,2,4,5$, we have$$V^{(i)}|0,\mathrm{out}\rangle ^{\left( i\right) }\;^{\left( i\right) }\langle 0,\mathrm{out}|V^{(i)\dag }=|0,\mathrm{out}\rangle ^{\left( i\right) }\;^{(i)}\langle 0,\mathrm{out}|\ =|0,\mathrm{in}\rangle ^{\left( i\right) }\;^{(i)}\langle 0,\mathrm{in}|,\ \ i=1,2,4,5.$$The latter relation has a clear physical meaning: the vacuum states in the ranges $\Omega _{1}$, $\Omega _{2}$, $\Omega _{4}$, and $\Omega _{5}$ do not change with time; there is no particle creation there. Let us use the following notation$$\begin{aligned} P^{\prime } &=&\prod\limits_{i=1,2,4,5}|0,\mathrm{out}\rangle ^{\left( i\right) }\;^{(i)}\langle 0,\mathrm{out}|\ =\prod\limits_{i=1,2,4,5}|0,\mathrm{in}\rangle ^{\left( i\right) }\;^{(i)}\langle 0,\mathrm{in}|, \notag \\ \ \hat{\rho}_{K} &=&V^{(K)}P_{K}V^{(K)\dag },\ \ P_{K}=|0,\mathrm{out}\rangle ^{\left( K\right) }\;^{(K)}\langle 0,\mathrm{out}|, \label{3.4}\end{aligned}$$then$$\hat{\rho}=P^{\prime }\hat{\rho}_{K}\ . \label{3.5}$$ Using the following explicit form of the operator $V^{(K)}=V^{(3)}$ derived in Ref.
[@GavGi15],$$\begin{aligned} V^{(K)}& =\exp \left[ -\sum_{n\in \Omega _{K}}{\ }^{+}a_{n}^{\dag }(\mathrm{% out})w_{n}\left( +-|0\right) {\ }_{+}b_{n}^{\dag }(\mathrm{out})\right] \\ & \times \exp \left[ -\sum_{n\in \Omega _{K}}{\ }_{+}b_{n}(\mathrm{out})\ln w_{n}\left( -|-\right) {\ }_{+}b_{n}^{\dag }(\mathrm{out})\right] \\ & \times \exp \left[ \sum_{n\in \Omega _{K}}{\ }^{+}a_{n}^{\dag }(\mathrm{out% })\ln w_{n}\left( +|+\right) {\ }^{+}a_{n}(\mathrm{out})\right] \\ & \times \exp \left[ -\sum_{n\in \Omega _{K}}{\ }_{+}b_{n}(\mathrm{out}% )w_{n}\left( 0|-+\right) {\ }^{+}a_{n}(\mathrm{out})\right] ,\end{aligned}$$one can derive two alternative expressions for the density operator $\hat{% \rho}_{K}$. The first one is a normal form exponential with respect to the -operators (denoted by $:\ldots :$): $$\begin{gathered} \hat{\rho}_{K}|c_{v}|^{-2}=\mathbf{:}\exp \left\{ -\sum_{n\in \Omega _{K}}% \left[ \text{ }^{+}a_{n}^{\dag }(\mathrm{out})\text{ }^{+}a_{n}(\mathrm{out}% )+\text{ }_{+}b_{n}^{\dag }(\mathrm{out})\text{\ }_{+}b_{n}(\mathrm{out}% )\right. \right. \notag \\ +\left. \left. \text{ }^{+}a_{n}^{\dag }(\mathrm{out})w_{n}\left( +-|0\right) \text{ }_{+}b_{n}^{\dag }(\mathrm{out})+\text{ }_{+}b_{n}(% \mathrm{out})w_{n}\left( +-|0\right) ^{\ast }\text{ }^{+}a_{n}(\mathrm{out})% \right] \right\} \mathbf{:\ }. \label{3.7}\end{gathered}$$Representation (\[3.7\]) can be derived in the following way: Using ([3.4]{}) and the explicit form of $V^{(K)},$ we can write* *$$\begin{aligned} &&\hat{\rho}_{K}|c_{v}|^{-2}=\exp \left[ -\sum_{n\in \Omega _{K}}{\ }% ^{+}a_{n}^{\dag }(\mathrm{out})w_{n}\left( +-|0\right) {\ }_{+}b_{n}^{\dag }(% \mathrm{out})\right] \notag \\ &&P_{K}\exp \left[ -\sum_{n\in \Omega _{K}}{\ }_{+}b_{n}(\mathrm{out}% )w_{n}\left( +-|0\right) ^{\ast }{\ }^{+}a_{n}(\mathrm{out})\right] . \label{3.7c}\end{aligned}$$Making use of well-known Berezin representation [@Berezin] for a projection operator* *$P_{K}$ on the vacuum state, $$\ P_{K}=\mathbf{:}\exp \left\{ -\sum_{n\in \Omega _{K}}\left[ \text{ }% ^{+}a_{n}^{\dag }(\mathrm{out})\text{ }^{+}a_{n}(\mathrm{out})+\text{ }% _{+}b_{n}^{\dag }(\mathrm{out})\text{\ }_{+}b_{n}(\mathrm{out})\right] \right\} \mathbf{:} \label{3.7b}$$and taking into account that the left and the right exponents in Eq. ([3.7c]{}) are already normal ordered, we easily obtain representation ([3.7]{}). The second representation reads:$$\begin{aligned} &&\ \hat{\rho}_{K}|c_{v}|^{-2}=\prod_{n\in \Omega _{K}}\left[ 1-{\ }% ^{+}a_{n}^{\dag }(\mathrm{out})w_{n}\left( +-|0\right) {\ }_{+}b_{n}^{\dag }(% \mathrm{out})\right] \notag \\ &&\times P_{K,n}\left[ 1-{\ }_{+}b_{n}(\mathrm{out})w_{n}\left( +-|0\right) ^{\ast }{\ }^{+}a_{n}(\mathrm{out})\right] ,\text{ \ } \notag \\ \text{ } &&P_{K,n}=|0,\mathrm{out}\rangle _{n}^{(K)}\ {}_{n}^{(K)}\langle 0,% \mathrm{out}|. \label{3.8a}\end{aligned}$$Representation (\[3.8a\]) can be derived as follows: Using the fact that operators with different quantum numbers $n$ commute, and using the relation, see, e.g., Ref. [@GGT]*, *$$\exp \left[ a^{\dag }Da\right] =\mathbf{:}\exp \left[ a^{\dag }\left( e^{D}-1\right) a\right] \mathbf{:\ }, \label{3.9}$$to transform exponents from $V^{(K)}$, we expand then the obtained expressions in power series. Since the -operators in $V^{(K)}$ are Fermi type, these series are reduced to finite term expressions. Their actions on the vacuum $|0,\mathrm{out}\rangle ^{(K)}$ can be easily calculated, and using of Eq. (\[2.1a\]), we arrive at Eq. (\[3.8a\]). 
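The action of $V^{(K)}$ on the out-vacuum can be checked mode by mode with $4\times 4$ matrices. The sketch below is a consistency illustration added here, not part of the original text: it represents a single Klein-zone mode by a two-fermion Fock space, fixes the moduli of the amplitudes by $|w_{n}\left( -|-\right) |^{-2}=1-N_{n}^{\mathrm{cr}}$ and $|w_{n}\left( +-|0\right) |^{2}=N_{n}^{\mathrm{cr}}(1-N_{n}^{\mathrm{cr}})^{-1}$ with all phases set to unity (cf. Eq. (\[4.12\]) below), and verifies that the product of exponential factors reproduces the vector of Eq. (\[3.11\]) below. The amplitudes $w_{n}\left( +|+\right) $ and $w_{n}\left( 0|-+\right) $ do not affect the action on the vacuum, since the corresponding factors act trivially on it.

```python
import numpy as np
from scipy.linalg import expm

# Single Klein-zone mode, out-basis |n_a, n_b> with n_a, n_b = 0, 1.
# Jordan-Wigner representation of the two fermionic out-operators.
I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation on one two-level factor
sz = np.diag([1.0, -1.0])
a = np.kron(sm, I2)                       # electron out-operator  ^+a_n(out)
b = np.kron(sz, sm)                       # positron out-operator  _+b_n(out)
vac = np.zeros(4); vac[0] = 1.0           # |0, out>_n

Ncr = 0.3                                 # chosen differential mean number, 0 < Ncr < 1
w = np.sqrt(Ncr / (1 - Ncr))              # |w_n(+-|0)|^2 = Ncr/(1 - Ncr), phase set to 1
wmm = 1 / np.sqrt(1 - Ncr)                # |w_n(-|-)|^{-2} = 1 - Ncr, phase set to 1
wpp, wann = 1.0, 1.0                      # w_n(+|+), w_n(0|-+): immaterial on the vacuum

V = (expm(-w * a.T @ b.T)
     @ expm(-np.log(wmm) * b @ b.T)
     @ expm(np.log(wpp) * a.T @ a)
     @ expm(-wann * b @ a))

lhs = V @ vac
rhs = (1 / wmm) * (vac - w * (a.T @ b.T) @ vac)    # c_{v,n} (1 - w a^+ b^+) |0, out>_n
assert np.allclose(lhs, rhs)
assert np.isclose(np.linalg.norm(lhs), 1.0)        # the deformed vacuum is normalized
```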
Finally we consider the structure of the $|0,\mathrm{in}\rangle $ state in terms of out-operators. First of all we use the fact that the state vector under discussion is factorized,$$\begin{aligned} &&|0,\mathrm{in}\rangle =V|0,\mathrm{out}\rangle =|0,\mathrm{in}\rangle ^{\prime }|0,\mathrm{in}\rangle ^{(K)}, \notag \\ &&|0,\mathrm{in}\rangle ^{\prime }=\prod\limits_{i=1,2,4,5}|0,\mathrm{in}\rangle ^{(i)},\ \ |0,\mathrm{in}\rangle ^{(K)}=V^{(K)}|0,\mathrm{out}\rangle ^{(K)}. \label{3.10}\end{aligned}$$Then using the explicit form of $V^{(K)}$, we obtain$$|0,\mathrm{in}\rangle ^{(K)}=c_{v}\prod\limits_{n\in \Omega _{K}}\left[ 1-{\ }^{+}a_{n}^{\dag }(\mathrm{out})w_{n}\left( +-|0\right) {\ }_{+}b_{n}^{\dag }(\mathrm{out})\right] |0,\mathrm{out}\rangle ^{(K)}. \label{3.11}$$ In each fixed mode $n\in \Omega _{K}$, the state vector $|0,\mathrm{in}\rangle $ is a linear superposition of two terms – the vacuum vector in this mode and a state with an electron-positron pair. Reductions to electron and positron subsystems\[Sec.4\] ======================================================= It should be stressed that the system under consideration can be regarded as composed of a subsystem of electrons and a subsystem of positrons. One can introduce two reduced density operators: $\hat{\rho}_{+}$ of the electron subsystem and $\hat{\rho}_{-}$ of the positron subsystem, averaging the complete density operator (\[3.1\]) over all possible positron states or over all possible electron states, respectively, $$\begin{aligned} & \hat{\rho}_{+}=\mathrm{tr}_{-}\hat{\rho}=\sum_{i=3}^{5}\sum_{M}\sum_{\{m\}\in \Omega _{i}}{}_{b}^{(i)}\langle M,\mathrm{out}|\hat{\rho}|M,\mathrm{out}\rangle _{b}^{(i)}\,, \notag \\ & \hat{\rho}_{-}=\mathrm{tr}_{+}\hat{\rho}=\sum_{i=1}^{3}\sum_{M}\sum_{\{m\}\in \Omega _{i}}{}_{a}^{(i)}\langle M,\mathrm{out}|\hat{\rho}|M,\mathrm{out}\rangle _{a}^{(i)}\,, \notag \\ & |M,\mathrm{out}\rangle _{b}^{(i)}=\left( M!\right) ^{-1/2}b_{m_{1}}^{\dagger }(\mathrm{out})\ldots b_{m_{M}}^{\dagger }(\mathrm{out})|0,\mathrm{out}\rangle _{b}^{(i)}, \notag \\ & |M,\mathrm{out}\rangle _{a}^{(i)}=\left( M!\right) ^{-1/2}a_{m_{1}}^{\dagger }(\mathrm{out})\ldots a_{m_{M}}^{\dagger }(\mathrm{out})|0,\mathrm{out}\rangle _{a}^{(i)}. \label{reduction}\end{aligned}$$ Vectors $|0,\mathrm{out}\rangle _{a}^{(i)}$ and $|0,\mathrm{out}\rangle _{b}^{(i)}$ are the electron and positron vacua in the $\Omega _{i}$-range, defined by $${a}_{n}^{(i)}(\mathrm{out})|0,\mathrm{out}\rangle _{a}^{(i)}=0,\text{ }\ {b}_{n}^{(i)}(\mathrm{out})|0,\mathrm{out}\rangle _{b}^{(i)}=0, \label{4.1}$$where ${a}_{n}^{(i)}(\mathrm{out})$ and ${b}_{n}^{(i)}(\mathrm{out})$ are the corresponding annihilation operators of electrons and positrons in this range, respectively. Of course, these electron and positron vacua can be factorized in quantum modes, as already mentioned above.
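Restricted to a single mode, the reductions (\[reduction\]) can be made fully explicit: tracing the pure state of Eq. (\[3.11\]) over the positron (electron) factor leaves a $2\times 2$ matrix with eigenvalues $1-N_{n}^{\mathrm{cr}}$ and $N_{n}^{\mathrm{cr}}$. The following sketch, added for illustration only, performs this partial trace numerically.

```python
import numpy as np

# One Klein-zone mode: |0,in>_n = c (|0,0> - w |1,1>), basis |n_a, n_b>.
Ncr = 0.3
w = np.sqrt(Ncr / (1 - Ncr))
psi = np.zeros(4)
psi[0], psi[3] = 1.0, -w                  # vacuum and a^+ b^+ |0,0> components
psi /= np.linalg.norm(psi)

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (n_a, n_b, n_a', n_b')
rho_plus = np.trace(rho, axis1=1, axis2=3)            # trace over the positron factor
rho_minus = np.trace(rho, axis1=0, axis2=2)           # trace over the electron factor

# Both reductions are diagonal with eigenvalues (1 - Ncr, Ncr), so the
# per-mode result is the same for the electron and positron subsystems.
assert np.allclose(rho_plus, np.diag([1 - Ncr, Ncr]))
assert np.allclose(rho_minus, np.diag([1 - Ncr, Ncr]))
```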
One can see that$$\begin{aligned} &|0,\mathrm{out}\rangle ^{(1,2)}=&|0,\mathrm{out}\rangle _{a}^{(1,2)}=\prod_{n\in \Omega _{1,2}}|0,\mathrm{out}\rangle _{n,a}^{(1,2)}, \notag \\ &|0,\mathrm{out}\rangle ^{(4,5)}=&|0,\mathrm{out}\rangle _{b}^{(4,5)}=\prod_{n\in \Omega _{4,5}}|0,\mathrm{out}\rangle _{n,b}^{(4,5)}, \notag \\ &|0,\mathrm{out}\rangle ^{(3)}=&|0,\mathrm{out}\rangle ^{(K)}=|0,\mathrm{out}% \rangle _{a}^{(K)}\otimes |0,\mathrm{out}\rangle _{b}^{(K)}, \notag \\ &|0,\mathrm{out}\rangle _{a}^{(K)}=&\prod_{n\in \Omega _{K}}|0,\mathrm{out}% \rangle _{n,a}^{(K)},\text{ \ }|0,\mathrm{out}\rangle _{b}^{(K)}=\prod_{n\in \Omega _{K}}|0,\mathrm{out}\rangle _{n,b}^{(K)}. \label{4.1a}\end{aligned}$$ Using Eq. (\[3.5\]) and representation (\[3.8a\]) for $\hat{\rho}_{K}$, it is easy to calculate traces in Eqs. (\[reduction\]), and to obtain thus explicit forms of the reduced operators $\hat{\rho}_{\pm }$:$$\begin{aligned} &&\hat{\rho}_{+}|c_{v}|^{-2}=\prod\limits_{i=1,2}|0,\mathrm{out}\rangle ^{(i)}\ {}^{(i)}\langle 0,\mathrm{out}| \notag \\ &&\otimes \prod_{n\in \Omega _{K}}\left[ P_{K,a,n}+|w_{n}\left( +-|0\right) |^{2}{}^{+}a_{n}^{\dag }(\mathrm{out})P_{K,a,n}{}^{+}a_{n}(\mathrm{out})% \right] , \notag \\ &&\hat{\rho}_{-}|c_{v}|^{-2}=\prod\limits_{i=4,5}|0,\mathrm{out}\rangle ^{(i)}\ {}^{(i)}\langle 0,\mathrm{out}| \notag \\ &&\otimes \prod_{n\in \Omega _{K}}\left[ P_{K,b,n}+|w_{n}\left( +-|0\right) |^{2}\text{ }_{+}b_{n}^{\dag }(\mathrm{out})P_{K,b,n}\text{{}}_{+}b_{n}(% \mathrm{out})\right] , \notag \\ &&P_{K,a,n}=|0,\mathrm{out}\rangle _{n,a}^{(K)}\ {}_{n,a}^{(K)}\langle 0,% \mathrm{out}|,\text{ \ \ }P_{K,b,n}=|0,\mathrm{out}\rangle _{n,b}^{(K)}\ {}_{n,b}^{(K)}\langle 0,\mathrm{out}|. \label{4.2}\end{aligned}$$ We can also consider a reduction of density operator (\[3.5\]), which occur due to measurement of a physical quantity by some classical tool, or, in other words, due to decoherence. Suppose that we are measuring the number of particles $N(\mathrm{out})$ in the state $\hat{\rho}$ of the system under consideration. The operator corresponding to this physical quantity is $\hat{% N}(\mathrm{out})=\sum_{i=1}^{5}\hat{N}_{i}(\mathrm{out}),$ where$$\begin{aligned} &&\hat{N}_{1}(\mathrm{out})=\sum_{n\in \Omega _{1}}\left[ \text{ }% ^{+}a_{n}^{\dag }(\mathrm{out})\text{ }^{+}a_{n}(\mathrm{out})+\text{ }% _{-}a_{n}^{\dag }(\mathrm{out})\text{ }_{-}a_{n}(\mathrm{out})\right] , \notag \\ &&\hat{N}_{2}(\mathrm{out})=\sum_{n\in \Omega _{2}}a_{n}^{\dag }a_{n},\text{ \ }\hat{N}_{4}(\mathrm{out})=\sum_{n\in \Omega _{4}}b_{n}^{\dag }b_{n}, \notag \\ &&\hat{N}_{3}(\mathrm{out})=\sum_{n\in \Omega _{K}}\left[ {\ }% ^{+}a_{n}^{\dag }(\mathrm{out}){\ }^{+}a_{n}^{\dag }(\mathrm{out})+{\ }% _{+}b_{n}^{\dag }(\mathrm{out}){\ }_{+}b_{n}(\mathrm{out})\right] , \notag \\ &&\hat{N}_{5}(\mathrm{out})=\sum_{n\in \Omega _{5}}\left[ \text{ }% _{+}b_{n}^{\dag }(\mathrm{out})\text{ }_{+}b_{n}(\mathrm{out})+\text{ }% ^{-}b_{n}^{\dag }(\mathrm{out})\text{ }^{-}b_{n}(\mathrm{out})\right] . 
\label{4.3}\end{aligned}$$ According to von Neumann [@Neumann], the density operator $\hat{\rho}$ after such a measurement is reduced to the operator $\hat{\rho}_{N}$ of a form $$\hat{\rho}_{N}=\sum_{s}\langle s,\mathrm{out}|\hat{\rho}|s,\mathrm{out}% \rangle \hat{P}_{s},\text{ \ }\hat{P}_{s}=|s,\mathrm{out}\rangle \langle s,% \mathrm{out}|, \label{4.4}$$where $|s,\mathrm{out}\rangle $ are eigenstates of the operator $\hat{N}(% \mathrm{out})$ with the eigenvalues $s$ that represent the total number of electrons and positrons in the state $|s,\mathrm{out}\rangle $, $$\begin{aligned} &&\hat{N}(\mathrm{out})|s,\mathrm{out}\rangle =s|s,\mathrm{out}\rangle , \\ &&\ |s,\mathrm{out}\rangle =\prod_{n\in \Omega _{1}}\left[ \text{ }% ^{+}a_{n}^{\dag }(\mathrm{out})\right] ^{l_{n,1}}\left[ \text{ }% _{-}a_{n}^{\dag }(\mathrm{out})\right] ^{k_{n,1}}\prod_{n\in \Omega _{2}}\left( \text{ }a_{n}^{\dag }\right) ^{l_{n,2}}\prod_{n\in \Omega _{4}}\left( \text{ }b_{n}^{\dag }\right) ^{l_{n,4}} \\ &&\times \prod_{n\in \Omega _{5}}\left[ \text{ }_{+}b_{n}^{\dag }(\mathrm{out% })\right] ^{l_{n,5}}\left[ \text{ }^{-}b_{n}^{\dag }(\mathrm{out})\right] ^{k_{n,5}}\prod_{n\in \Omega _{K}}\left[ {\ }^{+}a_{n}^{\dag }(\mathrm{out})% \right] ^{l_{n,3}}\left[ {\ }_{+}b_{n}^{\dag }(\mathrm{out})\right] ^{k_{n,3}}|0,\mathrm{out}\rangle , \\ &&s=\sum_{n\in \Omega _{1}}\left( l_{n,1}+k_{n,1}\right) +\sum_{n\in \Omega _{2}}\left( l_{n,2}\right) +\sum_{n\in \Omega _{4}}\left( l_{n,4}\right) +\sum_{n\in \Omega _{5}}\left( l_{n,5}+k_{n,5}\right) +\sum_{n\in \Omega _{K}}\left( l_{n,3}+k_{n,3}\right) .\end{aligned}$$Note that $l_{n,i}$, $k_{n,i}=(0,1),$ due to the fact that we deal with fermions. Due to the structure of the operator $\hat{\rho}$, the weights $\langle s,% \mathrm{out}|\hat{\rho}|s,\mathrm{out}\rangle $ are nonzero only for pure states $|s,\mathrm{out}\rangle $ with an integer number of pairs in $\Omega _{K}$ (since the initial state of the system was a vacuum, and there is no particle creation outside of the Klein zone). Thus, the operator $\hat{\rho}% _{N}$ takes the form$$\hat{\rho}_{N}|c_{v}|^{-2}=P^{\prime }\prod_{n\in \Omega _{K}}\left[ P_{K,n}+\!|w_{n}\left( +-|0\right) |^{2}\text{ }^{+}a_{n}^{\dag }(\mathrm{out% })\text{ }_{+}b_{n}^{\dag }(\mathrm{out})P_{K,n}\text{ }_{+}b_{n}(\mathrm{out% })\text{ }^{+}a_{n}(\mathrm{out})\right] , \label{4.8}$$where operators $P_{K,n}$ and $P^{\prime }$ were defined in the previous Section, see Eq. (\[3.8a\]). Note that the measurement destroys nondiagonal terms of the density operator (\[3.8a\]). Since the operator $V$ is unitary and the initial state of the system under consideration is a pure state (the vacuum state) the density operator ([3.5]{}) describes a pure state as well. Therefore its von Neumann entropy is zero. However, the reduced density operators $\hat{\rho}_{\pm }$ (\[4.2\]) describe already mixed states and their entropies $S(\hat{\rho}_{\pm })$ are not zero,$$S(\hat{\rho}_{\pm })=-k_{B}\mathrm{tr}\hat{\rho}_{\pm }\ln \hat{\rho}_{\pm }. \label{4.9}$$It is known that this entropy can be treated as a measure of the quantum entanglement of the electron and positron subsystems and can be treated as the measure of the information loss. 
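Per mode, this entropy depends only on the differential number of created pairs; the short scan below (an added illustration, stated in units of $k_{B}$) makes explicit the properties discussed in the next paragraphs: symmetry under $N_{n}^{\mathrm{cr}}\rightarrow 1-N_{n}^{\mathrm{cr}}$, vanishing at the endpoints, and the maximal value $k_{B}\ln 2$ at $N_{n}^{\mathrm{cr}}=1/2$.

```python
import numpy as np

# Per-mode von Neumann entropy of the reduced operators, in units of k_B,
# as a function of the differential number N of pairs created in the mode.
def S_mode(N):
    N = np.asarray(N, dtype=float)
    return -((1 - N) * np.log(1 - N) + N * np.log(N))

N = np.linspace(1e-6, 1 - 1e-6, 1001)
S = S_mode(N)

assert np.allclose(S, S_mode(1 - N))               # symmetric under N -> 1 - N
assert np.isclose(S.max(), np.log(2), atol=1e-6)   # maximum ln 2, reached at N = 1/2
assert S[0] < 1e-4 and S[-1] < 1e-4                # vanishes as N -> 0 and N -> 1
```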
Using the normalization condition for the reduced density operators, $% \mathrm{tr}\hat{\rho}_{\pm }=1$, the relation (\[3.9\]), definitions for differential mean numbers of particles $N_{n}^{a}$ and antiparticles $% N_{n}^{b}$ created from vacuum $$N_{n}^{a}=\mathrm{tr}\hat{\rho}_{+}a_{n}^{\dagger }(\mathrm{out})a_{n}(% \mathrm{out}),\ N_{n}^{b}=\mathrm{tr}\hat{\rho}_{-}b_{n}^{\dagger }(\mathrm{% out})b_{n}(\mathrm{out}), \label{4.11}$$and the fact that$$N_{n}^{a}=N_{n}^{b}=N_{n}^{\text{$\mathrm{cr}$}},\ \ |w_{n}\left( +-|0\right) |^{2}=N_{n}^{\text{$\mathrm{cr}$}}\left( 1-N_{n}^{\text{$\mathrm{% cr}$}}\right) ^{-1}, \label{4.12}$$we can calculate traces in Eqs. (\[4.9\]) and rewrite RHS in these equations as $$S(\hat{\rho}_{\pm })=\sum_{n\in \Omega _{K}}S_{n},\text{ \ }S_{n}=-k_{B}% \left[ (1-N_{n}^{\text{$\mathrm{cr}$}})\ln \left( 1-N_{n}^{\text{$\mathrm{cr} $}}\right) +N_{n}^{\text{$\mathrm{cr}$}}\ln N_{n}^{\text{$\mathrm{cr}$}}% \right] . \label{4.13}$$ The von Neumann-reduced density operator (\[4.8\]) also describe mixed state; making use of the fact that the pure states $|0,\mathrm{out}\rangle _{n}^{(K)}$ and $\text{ }^{+}a_{n}^{\dag }(\mathrm{out})\text{ }% _{+}b_{n}^{\dag }(\mathrm{out})|0,\mathrm{out}\rangle _{n}^{(K)}$ are orthogonal and normalized, it is not difficult to show that the von Neumann entropy $S(\hat{\rho}_{N})$ of the mixed state (\[4.8\]) coincide with the entropies $S(\hat{\rho}_{\pm })$ of the reduced density operators $\hat{\rho}% _{\pm }$. The differential mean number of fermions created $N_{n}^{\text{$\mathrm{cr}$}% }$ can vary only within the range $(0,1)$. The partial entropy $S_{n}$ for given $n$ in Eq. (\[4.13\]) is symmetric with respect to value of $N_{n}^{% \text{$\mathrm{cr}$}}$. It reaches maximum at $N_{n}^{\text{$\mathrm{cr}$}% }=1/2$ and turns to zero at $N_{n}^{\text{$\mathrm{cr}$}}=1$ and $N_{n}^{% \text{$\mathrm{cr}$}}=0$. This fact can be interpreted as follows. In the case of $N_{n}^{\text{$\mathrm{cr}$}}=0$ there are no particles created by the external field and the initial vacuum state in the mode remains unchanged. The case $N_{n}^{\text{$\mathrm{cr}$}}=1$ corresponds to the situation when a particle is created with certainty. The maximum of $S_{n}$, corresponding to $N_{n}^{\text{$\mathrm{cr}$}}=1/2$, is associated with the state with the maximum amount of uncertainty. Deformation of the quantum vacuum between two capacitor plates \[Sec.6\] ======================================================================== Here we illustrate the general consideration considering the deformation of the quantum vacuum between two infinite capacitor plates separated by a finite distance $L$. Some aspects of particle creation by the constant electric field between such plates (this field is also called $L$-constant electric field) were studied in Ref. [@L-field]. The latter field is a particular case of $x$-electric potential step. Thus, we consider the $L$-constant electric field in $d=D+1$ dimensions. We chose $\mathbf{E}% (x)=\left( E^{i},\ i=1,...,D\right) ,\ E^{1}=E_{x}(x),\ E^{2,...,D}=0$,$$E_{x}(x)=\left\{ \begin{array}{l} 0,\ x\in (-\infty ,-L/2] \\ E=\mathrm{const}>0,\ x\in (-L/2,L/2) \\ 0,\ x\in \lbrack L/2,\infty )% \end{array}% \right. .$$The potential energy of an electron in the $L$-electric field under consideration is $$U(x)=\left\{ \begin{array}{ll} U_{\mathrm{L}}=-eEL/2, & x\in (-\infty ,-L/2] \\ eEx, & x\in (-L/2,L/2) \\ U_{\mathrm{R}}=eEL/2, & x\in \lbrack L/2,\infty )% \end{array}% \right. . 
\label{6.2}$$The magnitude of the corresponding $x$-electric step is $\mathbb{U}=eEL.$ We are interested in the critical steps, for which $$\mathbb{U}=eEL>2m \label{6.2a}$$and the vacuum is unstable in the Klein zone. We consider a particular case with a sufficiently large length $L$ between the capacitor plates,$$\sqrt{eE}L\gg \max \left\{ 1,E_{c}/E\right\} . \label{L-large}$$Here $E_{c}=m^{2}/e$ is the critical Schwinger field. In what follows we conditionally call this approximation the large work approximation. Such a kind of $x$-electric step represents a regularization of a constant uniform electric field and is suitable for imitating a small-gradient field. It was shown in Ref. [@L-field] that the main particle production occurs in an inner subrange $\tilde{\Omega}_{K}$ of the Klein zone, $\tilde{\Omega}_{K}\subset \Omega _{K}$,$$\begin{aligned} & \tilde{\Omega}_{K}:\ |p_{0}|/\sqrt{eE}<\sqrt{eE}L/2-K,\ \lambda <K_{\bot }^{2}, \notag \\ & \lambda =\frac{\mathbf{p}_{\bot }^{2}+m^{2}}{eE},\ \sqrt{eE}L\gg K\gg K_{\bot }^{2}\gg \max \{1,E_{c}/E\}. \label{b}\end{aligned}$$where $K$ and $K_{\bot }$ are any given positive numbers satisfying the condition (\[b\]). The differential number of particles with quantum numbers $n\in $ $\tilde{\Omega}_{K}$ created from the vacuum reads $$\begin{aligned} & N_{n}^{\text{\textrm{cr}}}=e^{-\pi \lambda }\left[ 1+O(|\xi _{1}|^{-3})+O\left( |\xi _{2}|^{-3}\right) \right] , \notag \\ & \xi _{1}=\frac{-eEL/2-p_{0}}{\sqrt{eE}},\ \xi _{2}=\frac{eEL/2-p_{0}}{\sqrt{eE}}. \label{6.4}\end{aligned}$$We recall that, in fact, the quantum numbers $n$ that label electron and positron states in the general formulas comprise several quantum numbers,$$n=\left( p_{0},\mathbf{p}_{\perp },\sigma \right) ,\text{ \ }\mathbf{p}_{\perp }=\left( p_{2},\ldots ,p_{D}\right) , \label{6.3}$$where for an electron $p_{0}$ is its energy and for a positron $-p_{0}$ is its energy; for an electron $\mathbf{p}_{\perp }$ denotes the transversal components of its momentum, whereas for a positron $-\mathbf{p}_{\perp }$ denotes the transversal components of its momentum. For an electron $\sigma $ is its spin polarization and for a positron $-\sigma $ is its spin polarization. Note that the electron and positron in a pair created by an external field have the same quantum numbers $n$. The quantity (\[6.4\]) is almost constant over a wide range of the energy $p_{0}$ for any given $\lambda <K_{\bot }^{2}$; for these quantum numbers we can assume $N_{n}^{\text{\textrm{cr}}}\approx e^{-\pi \lambda }$. In the limiting case of the large work approximation, $\sqrt{eE}L\rightarrow \infty $, one obtains the well-known result for particle creation by a constant uniform electric field, $N_{n}^{\text{\textrm{cr}}}=e^{-\pi \lambda }$, see Refs. [@Nikishov1; @Nikishov2; @Nikishov3]. In the approximation under consideration, the total number of particles created from the vacuum is given by a sum (integral) over $n\in \tilde{\Omega}_{K}$,$$N^{\text{$\mathrm{cr}$}}=\sum_{n\in \Omega _{K}}N_{n}^{\text{$\mathrm{cr}$}}\approx \sum_{\mathbf{p}_{\bot },\text{ }p_{0}\in \tilde{\Omega}_{K}}\sum_{\sigma }N_{n}^{\text{$\mathrm{cr}$}}=\frac{J_{(d)}TV_{\bot }}{(2\pi )^{d-1}}\int_{\tilde{\Omega}_{K}}dp_{0}d\mathbf{p}_{\bot }N_{n}^{\text{$\mathrm{cr}$}}\ .
\label{6.6}$$where $J_{(d)}=2^{\left[ d/2\right] -1}$ is a spin summation factor, $% V_{\bot }$ is the $(d-2)$-dimensional spatial volume in hypersurface orthogonal to the electric field direction and $T$ is the time duration of the electric field. The integration over $p_{0}$ results in $$N^{\text{$\mathrm{cr}$}}=\frac{J_{(d)}TV_{\bot }LeE}{(2\pi )^{d-1}}\int_{% \tilde{\Omega}_{K}}d\mathbf{p}_{\bot }e^{-\pi \lambda }\text{ }. \label{6.8}$$Integrating Eq. (\[6.8\]) over $p_{\bot }$, we obtain that the total number of created from the vacuum particles in the large work approximation has the form$$N^{\text{$\mathrm{cr}$}}=\frac{J_{(d)}TV(eE)^{d/2}}{(2\pi )^{d-1}}\exp \left( -\pi \frac{E_{c}}{E}\right) , \label{6.9}$$where $V=$ $LV_{\bot }$ is the volume inside of the capacitor (the volume occupied by the electric field). It is obvious that $N^{\text{\textrm{cr}}}<\infty $, when the values $V$ and $T$ are finite, or, in other words, when regularization of the finite volume and finite time of the field action is used. Looking on the condition ([2.12]{}), we see that the $x$-electric potential step which represent the electric field inside of the capacitor does not violate the unitarity in QED. Let us estimate the information loss of the reduced states of the deformed vacuum, which can be calculated as entropies (\[4.13\]) of these states,. Using the same summation rule as in (\[6.6\]), one can write $$S(\hat{\rho}_{\pm })=-k_{B}\frac{J_{(d)}TV_{\bot }}{(2\pi )^{d-1}}% \int_{\Omega _{K}}dp_{0}d\mathbf{p}_{\bot }\left[ N_{n}^{\text{$\mathrm{cr}$}% }\ln N_{n}^{\text{$\mathrm{cr}$}}+(1-N_{n}^{\text{$\mathrm{cr}$}})\ln (1-N_{n}^{\text{$\mathrm{cr}$}})\right] . \label{6.10}$$ For Fermi particles under the consideration, $N_{n}^{\text{\textrm{cr}}}\leq 1$. This allows us to expand the logarithm in the RHS of Eq. (\[6.10\]) in powers of $N_{n}^{\text{\textrm{cr}}}$. Thus, we represent the term $% (1-N_{n}^{\text{\textrm{cr}}})\ln (1-N_{n}^{\text{\textrm{cr}}})$ as follows $$(1-N_{n}^{\text{$\mathrm{cr}$}})\ln (1-N_{n}^{\text{$\mathrm{cr}$}})=-\ (1-N_{n}^{\text{$\mathrm{cr}$}})\sum_{l=1}^{\infty }l^{-1}\left( N_{n}^{% \text{$\mathrm{cr}$}}\right) ^{l}. \label{6.11}$$Using (\[6.11\]) in Eq. (\[6.10\]), we obtain the following intermediate result $$S(\hat{\rho}_{\pm })=k_{B}\frac{J_{(d)}TV_{\bot }}{(2\pi )^{d-1}}% \int_{\Omega _{K}}dp_{0}d\mathbf{p}_{\bot }\left[ -N_{n}^{\text{$\mathrm{cr}$% }}\ln N_{n}^{\text{$\mathrm{cr}$}}+(1-N_{n}^{\text{$\mathrm{cr}$}% })\sum_{l=1}^{\infty }l^{-1}\left( N_{n}^{\text{$\mathrm{cr}$}}\right) ^{l}% \right] . \label{6.12}$$ As we have mentioned before, the considerable amount of particles is created only in the subrange $\tilde{\Omega}_{K}\in \Omega _{K}$, where terms proportional to $|\xi _{1,2}|^{-3}$ are small and can be neglected, allowing to use the leading-order approximation $N_{n}^{\text{\textrm{cr}}}\approx e^{-\pi \lambda }$ in the RHS of Eq. (\[6.12\]). Then we obtain $$\begin{aligned} S(\hat{\rho}_{\pm }) &\approx &k_{B}\frac{J_{(d)}TVeE}{\left( 2\pi \right) ^{d-1}}\int_{\tilde{\Omega}_{K}}d\mathbf{p}_{\bot }\left[ \pi \lambda e^{-\pi \lambda }+(1-e^{-\pi \lambda })\sum_{l=1}^{\infty }l^{-1}e^{-\pi \lambda l}\right] \text{ }\;\mathrm{if}\;d>2; \notag \\ S(\hat{\rho}_{\pm }) &\approx &k_{B}\frac{TVeE}{2\pi }A\left( 2,E_{c}/E\right) \text{ }\;\mathrm{if}\;d=2, \notag \\ A\left( 2,E_{c}/E\right) &=&\left\{ \pi E_{c}/E\exp \left( -\pi E_{c}/E\right) -\left[ 1-\exp \left( -\pi E_{c}/E\right) \right] \ln \left[ 1-\exp \left( -\pi E_{c}/E\right) \right] \right\} . 
\label{6.13}\end{aligned}$$ In dimensions $d>2$ the integration over the transversal components of the momentum can be easily performed. Outside of the subrange $\tilde{\Omega}_{K}$, the integrand is very small, so that we can extend the integration limits of $p_{\bot }$ to infinity. Thus, we finally get $$S(\hat{\rho}_{\pm })\approx k_{B}\frac{J_{(d)}TV(eE)^{d/2}}{(2\pi )^{d-1}}A\left( d,E_{c}/E\right) \text{ }\;\mathrm{if}\;d>2, \label{6.14}$$where the factor $A\left( d,E_{c}/E\right) $ has the form $$\begin{aligned} &&\ A\left( d,E_{c}/E\right) =\left( \pi E_{c}/E+d/2-1\right) \exp \left( -\pi E_{c}/E\right) \notag \\ &&+\sum_{l=1}^{\infty }\left[ l^{-d/2}-l^{-1}(l+1)^{(2-d)/2}\exp \left( -\pi E_{c}/E\right) \right] \exp \left( -\pi lE_{c}/E\right) . \label{6.15}\end{aligned}$$For example, estimates of this factor for a strong field $E_{c}/E\ll 1$ and for the critical field $E_{c}/E=1$ with $d=4,3$ are $A\left( 4,0\right) =\pi ^{2}/6$, $A\left( 4,1\right) \approx 0.22$; $A\left( 3,0\right) \approx 0.93$, $A\left( 3,1\right) \approx 0.20$. In the case of a weak field, $E_{c}/E\gg 1$, the entropy is exponentially small for any $d$, $$A\left( d,E_{c}/E\right) \approx \left( \pi E_{c}/E+d/2\right) \exp \left( -\pi E_{c}/E\right) .$$ One can note that the large work approximation (\[6.14\]) obtained for $S(\hat{\rho}_{\pm })$ in the case of the $x$-electric step under consideration coincides with the same approximation for $S(\hat{\rho}_{\pm })$ in the case of the $t$-electric step with a uniform electric field acting during a finite time interval $T$ (the so-called $T$-constant field) obtained in Ref. [@GavGitSh15]. This observation confirms the fact that the $T$-constant and $L$-constant fields produce equal physical effects in the large work approximation (or as $T\rightarrow \infty $ and $L\rightarrow \infty $), such that it is possible to consider these fields as regularizations of a constant uniform electric field given by two distinct gauge conditions for the electromagnetic potentials. Obviously, the exact expressions for the entropies $S(\hat{\rho}_{\pm })$ differ in the general case. Acknowledgments {#acknowledgments .unnumbered} =============== The work of the authors was supported by a grant from the Russian Science Foundation, Research Project No. 15-12-10009. [99]{} D. M. Gitman, J. Phys. A **10**, 2007 (1977). E. S. Fradkin and D. M. Gitman, Fortschr. Phys. **29**, 381 (1981). E. S. Fradkin, D. M. Gitman and S. M. Shvartsman, *Quantum Electrodynamics with Unstable Vacuum* (Springer-Verlag, Berlin, 1991). S. P. Gavrilov, D. M. Gitman, and J. L. Tomazelli, Nucl. Phys. **B795**, 645 (2008). M. Dvornikov, S.P. Gavrilov, and D.M. Gitman, Phys. Rev. D **89**, 105028 (2014). S. P. Gavrilov, D. M. Gitman, and A.A. Shishmarev, Phys. Rev. A **91**, 052106 (2015). S. P. Gavrilov and D. M. Gitman, Phys. Rev. D **93**, 045002 (2016). S. P. Gavrilov and D. M. Gitman, Phys. Rev. D **93**, 045033 (2016). W. H. Furry, Phys. Rev. **81**, 115 (1951). F. A. Berezin, *The method of second quantization* (Nauka, Moscow, 1965) \[Transl. (Academic Press, New York, 1966)\]. V. A. Kiperman, Teor. Mat. Fiz. **5**, 3 (1970). J. von Neumann, *Mathematische Grundlagen der Quantenmechanik* (Verlag von Julius Springer, Berlin, 1932). S. P. Gavrilov and D. M. Gitman, Phys. Rev. D **53**, 7162 (1996). A.I. Nikishov, Zh. Eksp. Teor. Fiz. **57**, 1210 (1969) \[Transl. Sov. Phys. JETP **30**, 660 (1970)\]. A.I. Nikishov, in *Quantum Electrodynamics of Phenomena in Intense Fields*, Proc. P.N.
Lebedev Phys. Inst. (Nauka, Moscow, 1979), Vol. 111, p. 153. A.I. Nikishov, Nucl. Phys. **B21**, 346 (1970). [^1]: [email protected] [^2]: [email protected]
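As a closing numerical cross-check of the factor $A\left( d,E_{c}/E\right)$ defined in Eq. (\[6.15\]), the short Python sketch below (not part of the original derivation; the truncation order `lmax` is an arbitrary choice of ours) evaluates the sum directly and reproduces the quoted values $A(4,0)=\pi^{2}/6$, $A(4,1)\approx 0.22$, $A(3,0)\approx 0.93$ and $A(3,1)\approx 0.20$.

```python
import numpy as np

def A_factor(d, x, lmax=200):
    """Truncated evaluation of A(d, E_c/E) from Eq. (6.15), with x = E_c/E."""
    l = np.arange(1, lmax + 1, dtype=float)
    first = (np.pi * x + d / 2.0 - 1.0) * np.exp(-np.pi * x)
    series = np.sum((l ** (-d / 2.0)
                     - (l + 1.0) ** ((2.0 - d) / 2.0) / l * np.exp(-np.pi * x))
                    * np.exp(-np.pi * l * x))
    return first + series

print(A_factor(4, 0.0), np.pi ** 2 / 6)   # ~1.6449 = pi^2/6
print(A_factor(4, 1.0))                   # ~0.22
print(A_factor(3, 0.0))                   # ~0.93
print(A_factor(3, 1.0))                   # ~0.20
```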
--- author: - 'T. Hendrix' - 'R. Keppens' - 'P. Camps' bibliography: - 'aa25498-14.bib' title: Modelling ripples in Orion with coupled dust dynamics and radiative transfer --- [In light of the recent detection of direct evidence for the formation of Kelvin-Helmholtz instabilities in the Orion nebula, we expand upon previous modelling efforts by numerically simulating the shear-flow driven gas and dust dynamics in locations where the H$_{II}$ region and the molecular cloud interact. We aim to directly confront the simulation results with the infrared observations.]{} [To numerically model the onset and full nonlinear development of the Kelvin-Helmholtz instability we take the setup proposed to interpret the observations, and adjust it to a full 3D hydrodynamical simulation that includes the dynamics of gas as well as dust. A dust grain distribution with sizes between 5-250 nm is used, exploiting the gas+dust module of the MPI-AMRVAC code, in which the dust species are represented by several pressureless dust fluids. The evolution of the model is followed well into the nonlinear phase. The output of these simulations is then used as input for the SKIRT dust radiative transfer code to obtain infrared images at several stages of the evolution, which can be compared to the observations.]{} [We confirm that a 3D Kelvin-Helmholtz instability is able to develop in the proposed setup, and that the formation of the instability is not inhibited by the addition of dust. Kelvin-Helmholtz billows form at the end of the linear phase, and synthetic observations of the billows show striking similarities to the infrared observations. It is pointed out that the high density dust regions preferentially collect on the flanks of the billows. To get agreement with the observed Kelvin-Helmholtz ripples, the assumed geometry between the background radiation, the billows and the observer is seen to be of critical importance.]{} Introduction ============ Sometimes a little push is all that is needed to make a seemingly stable fluid evolve into a turbulent state. Typically this transition is caused by a fluid instability, and many of these mechanisms have been studied extensively in the past decades (see e.g. @1961hhs..book.....C). The Kelvin-Helmholtz instability (KHI) is a notable example of this as it plays an important role in a wide range of different fluid applications such as oceanic circulation [@Haren], winds on planet surfaces [@1997QJRMS.123.1433C], the flanks of expanding coronal mass ejections [@2011ApJ...729L...8F], magnetic reconnection in the solar corona [@2003SoPh..214..107L], interaction between comet tails and the solar wind [@1980SSRv...25....3E], mixing of solar wind material into Earth’s magnetosphere [@2004Natur.430..755H], astrophysical jets and many others. While the KHI is a hydrodynamical instability, magnetic fields can alter its dynamics and cause stabilisation or further destabilise the setup. As the previous range of examples demonstrates, many of the astrophysical fluids in which the KHI is of importance display magnetic effects. In molecular clouds, the KHI has been linked to the formation of filamentary structures, as well as to turbulence formation. While the source of turbulence, observed in molecular clouds through the detection of non-thermal line-widths around $1\times10^5$ - $2\times10^5$ cm s$^{-1}$, is still debated, it has been linked at least partially to the KHI, which allows energy to be transferred to smaller scale structures.
While the occurrence of the KHI in space is clearly established, direct evidence of ongoing instabilities are harder to obtain. At a distance of 412 pc [@2009ApJ...700..137R], the Orion nebula is the closest H$_{II}$ region. Its association with young massive stars and its apparent brightness make it an intensively investigated region over a large range of frequencies . As such, it is an ideal laboratory for investigation of smaller scale structure development. Recently @2010Natur.466..947B discussed mid-infrared observations of ripple-like structures on the edge of the Orion nebula’s H$_{II}$ region and the surrounding giant molecular clouds. The wave-like nature of this observation (see figure \[fig:berne\]), points to a mechanism with fixed periodicity in time or space. This periodic structure, in combination with the detection of a strong velocity gradient resulting in velocity differences up to $7\times10^5$ - $9\times10^5$ cm s$^{-1}$ leads @2010Natur.466..947B to propose that these ripples are manifestations of the KHI.\ Because of the high research interest in the Orion nebula and the surroundings regions, the physical conditions in the neighbourhood of the observed ripples are fairly well documented, providing an ideal case to numerically model the observed system. In @2012ApJ...761L...4B an effort was undertaken to numerically study the linear growth phase of a KHI with physical values deduced from observations. It was found that the used setup was indeed Kelvin-Helmholtz unstable for setups with magnetic field orientations close to perpendicular to the flow, and parallel to the separation layer between the H$_{II}$ and cloud region.\ In this work, our goal is to expand the numerical modelling of the ripples in Orion in a way in which the observations can be directly compared to the modelling itself. To do so, several ingredients are needed. First, the proposed setup (see sections \[physSetup\] and \[magp\]) is simulated using a 3D numerical hydrodynamical simulation from the start of the instability, through the linear phase and into the nonlinear phase. To perform these simulations we use the [MPI-AMRVAC]{} code [@2012JCoPh.231..718K; @2014ApJS..214....4P], with numerical properties as described in section \[NumMeth\]. In the mid-infrared observation a significant part of the radiation is due to dust emission. Therefore we use the gas+dust module of the [MPI-AMRVAC]{} code to model the dynamics of dust particles, which are drag-coupled to the gas. We use a range of dust sizes and model it self-consistently with the gas dynamics. Finally, to connect the dynamical simulations to the observations we use the [SKIRT]{} dust radiative transfer code [@2011ApJS..196...22B; @Camps201520] to emulate the radiation by the dust particles and the effect of the actual geometry of the observed system, as explained in section \[RadTrans\]. The properties of the outcome of these simulations are described in section \[results\] and the conclusions are discussed in section \[conclusions\]. ![Observation of the ripples in Orion at 8 $\mu$m, taken with the Spitzer Infrared Array Camera. The spatial wavelength $\lambda$, the orientation of the phase velocity $V_{\phi}$, and the linear regime length $L_{lin}$ are identified in the image. 
Credit: figure (1) from @2012ApJ...761L...4B, reproduced by permission of the AAS.[]{data-label="fig:berne"}](./berne.jpg){width="\columnwidth"} Model ===== Physical setup {#physSetup} -------------- The setup used here is similar to that of the 2D setup of @2012ApJ...761L...4B, but here adjusted to a full 3D configuration. The domain of the simulation is a cube with $L=0.33$ pc sides, and is initially divided in three regions along the $y$-axis: the upper part corresponds to the hot, low density H$_{II}$ region (n$_{II} = 3.34 \times 10^{-23}$ g cm$^{-3}$, T$_{II}$ = 10$^4$ K), the lower part represents the cold, high density molecular cloud (n$_c = 1.67 \times 10^{-20}$ g cm$^{-3}$, T$_{c} = 20$ K) and both are separated by a thin middle layer with thickness $D=0.01$ pc. This boundary layer is thus oriented perpendicular to the $y$-axis. Note that the choice of density and temperature result in thermal pressure equilibrium between the upper and lower region as $$\qquad p = \rho \frac{k_b T}{m_H \mu},$$ with $p$ the pressure, $k_b$ the Boltzmann constant, $m_H$ the mass of hydrogen and $\mu$ the average molecular weight, set to $\mu = 1$ here. The energy density of the gas , $e$, can be calculated using the equation of state, and gives $$\qquad e = \frac{p}{\gamma - 1} + \frac{\rho v^2}{2},$$ with $\gamma = 5/3$ the adiabatic constant and $v$ the velocity of the flow.\ To initialise the dust content in the simulation domain, we assume that the dust-to-gas mass density ratio has the canonical value of 0.01 [@1954ApJ...120....1S] in the molecular cloud region, and no dust is present in the hot H$_{II}$ region. We assume that the size distribution of dust particles, $n$, can be approximated as $n(a) \propto a^{-3.5}$ with the size of the particles, $a$, between 5 nm and 250 nm as was determined from excitation in the interstellar medium (ISM) by @1994ApJ...422..164K. We use four dust fluids to represent this power law size distribution with each fluid representing a part of the size distribution, chosen in a way in which the total dust mass in each dust fluid is the same (see ). In this way, the resulting representative size of dust grain in the four dust fluids are 7.9 nm, 44.2 nm, 105 nm, and 189 nm, respectively. The grain density of all dust fluids is set to that of silicate grains, i.e. 3.3 g cm$^{-3}$ [@1984ApJ...285...89D].\ The H$_{II}$ region has an initially uniform velocity of magnitude $v_0 = 10^6$ cm s$^{-1}$ in the direction parallel to our $x$-axis. @2012ApJ...761L...4B propose that this high velocity is due to *champagne flow*, the resulting high velocity flow when the expanding H$_{II}$ breaks trough the molecular cloud. This velocity is similar to the shear velocity derived from observation in @2010Natur.466..947B. In the molecular cloud region the velocity is initially set to zero. In contrast to @2012ApJ...761L...4B, where a hyperbolic tangent profile is used for both velocity and density, we use a linear profile in the middle layer that continuously links up with the constant velocities and densities on both sides of the layer. 
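The four representative grain sizes quoted above can be recovered from the equal-dust-mass binning of the $n(a) \propto a^{-3.5}$ distribution if one assumes (an assumption on our part, not stated explicitly in the text) that the representative size of each bin is the number-weighted mean grain size within that bin. A minimal Python sketch:

```python
import numpy as np

# Grain size distribution n(a) ~ a^-3.5 between a_min and a_max (nm), split
# into N bins that each contain the same dust mass (mass per grain ~ a^3, so
# the mass-weighted size distribution is ~ a^-0.5 and the cumulative mass
# grows as a^0.5).
a_min, a_max, N = 5.0, 250.0, 4
q = np.linspace(0.0, 1.0, N + 1)
edges = (a_min**0.5 + q * (a_max**0.5 - a_min**0.5))**2   # equal-mass bin edges

# Number-weighted mean size per bin for n(a) ~ a^-3.5:
# <a> = (5/3) (a1^-1.5 - a2^-1.5) / (a1^-2.5 - a2^-2.5)
a1, a2 = edges[:-1], edges[1:]
a_rep = (5.0 / 3.0) * (a1**-1.5 - a2**-1.5) / (a1**-2.5 - a2**-2.5)

print("bin edges [nm]:           ", np.round(edges, 1))
print("representative sizes [nm]:", np.round(a_rep, 1))
# -> approximately 7.9, 44.2, 104.9 and 189.0 nm, matching the four dust fluids
```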
The choice of a linear profile is made in analogy with our previous work, as it allows us to better quantify the linear stability properties.\ A perturbation is added by introducing an initial velocity component perpendicular to the boundary layer: $$\begin{aligned} \qquad v_{y,0}(x,y,z) =& 10^{-3} v_0 \exp \left( -\frac{(y-M_y )^2}{2 \sigma_y^2} -\frac{(z-M_z)^2}{2 \sigma_z^2} \right) \sin{(k_x x)} \nonumber \\ \qquad + &10^{-4} v_0 \, \textrm{rect} (\frac{y}{5D}) (1 - 2\textrm{rand} ()) \label{perturb},\end{aligned}$$ with $\sigma_y = 5D$, $\sigma_z = L / 5$ and $M_y$ and $M_z$ being the $y$- and $z$-coordinates of the middle point of the separation layer. The first part on the right side of equation (\[perturb\]) adds a sine perturbation with wavelength $\lambda = 2\pi / k_x$. We adopt $\lambda = 0.11$ pc in accord with the observations in @2010Natur.466..947B. The second part on the right side of equation (\[perturb\]) adds random velocities[^1] between $-10^{-4} v_0$ and $10^{-4} v_0$ in a layer of thickness $5D$ around the middle of the separation layer. The velocity in the $z$-direction is seeded with a similar random term: $$\qquad v_{z,0}(x,y,z) = 10^{-4} v_0 \, \textrm{rect} (\frac{y}{5D}) (1 - 2\textrm{rand} ()).$$ The purpose of the exponential part in equation (\[perturb\]) in the $y$-direction is to preferentially locate the perturbation around the middle layer. The exponential part in the $z$-direction centres the perturbation around the middle of the $z$-axis to confine the instability development region. These random perturbations in the velocity break the symmetry of the setup, and allow in essence all unstable modes to develop spontaneously, although the fixed wavelength $\lambda$ in the $x$-direction gets preference. Magnetic pressure {#magp} ----------------- @2012ApJ...761L...4B take into account a magnetic contribution in their 2D setup as well, assuming a uniform magnetic field with a strength of $B = 200$ $\mu$G in the entire domain based on observations of surrounding regions [@2004ApJ...609..247A; @2005ASPC..343..183B]. Using the values of the physical setup (section \[physSetup\]) this results in a ratio between thermal and magnetic pressure $\beta_{pl} = p_t / p_M = 0.0173$, with $\beta_{pl}$ the plasma beta value, meaning that the magnetic pressure is dominant over the thermal pressure contribution. The dominance of magnetic over thermal pressure is confirmed by observations in the Orion molecular cloud [@2014ApJ...795...13B], both for large and small scale structures. @2012ApJ...761L...4B note that the setup is most unstable when the magnetic field is perpendicular to the flow and parallel to the contact layer. In this configuration, a uniform magnetic field only contributes as an additional magnetic pressure $$\qquad p_M = \frac{B^2}{8\pi}.$$ This means that one can actually substitute the full MHD treatment by a HD treatment with an additional pressure term, in which the total pressure is raised while keeping the density fixed (thus artificially increasing the temperature). When calculating the thermal energy of the gas to quantify the coupling to the dust (see [@2014ApJS..214....4P]), this artificial term is subtracted to obtain the relevant temperature. To demonstrate that this approximation is valid, we compare the evolution of an MHD setup with that of a HD + $p_M$ simulation in section \[2Dcomp\].
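The quoted plasma beta can be checked directly from the numbers given in section \[physSetup\]; the sketch below (CGS units; the constants and helper names are ours) recovers $\beta_{pl} \approx 0.017$ and also confirms the initial thermal pressure equilibrium between the H$_{II}$ region and the molecular cloud.

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_H = 1.6726e-24     # hydrogen mass [g]

def thermal_pressure(rho, T, mu=1.0):
    """p = rho * k_B * T / (m_H * mu), as in the setup description."""
    return rho * k_B * T / (m_H * mu)

p_HII   = thermal_pressure(3.34e-23, 1.0e4)   # H II region [erg cm^-3]
p_cloud = thermal_pressure(1.67e-20, 20.0)    # molecular cloud [erg cm^-3]

B   = 200e-6                  # 200 microgauss, in gauss
p_M = B**2 / (8.0 * np.pi)    # magnetic pressure [erg cm^-3]

print(f"p_HII   = {p_HII:.3e} erg/cm3")
print(f"p_cloud = {p_cloud:.3e} erg/cm3")   # equals p_HII: pressure equilibrium
print(f"p_M     = {p_M:.3e} erg/cm3")
print(f"beta_pl = {p_HII / p_M:.4f}")       # ~0.017: magnetically dominated
```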
Numerical method {#NumMeth} ---------------- We use the [MPI-AMRVAC]{} code [@2012JCoPh.231..718K; @2014ApJS..214....4P] for all the hydrodynamical (HD) and magnetohydrodynamical (MHD) simulations. The dust module of [MPI-AMRVAC]{}, discussed in detail in , allows to add dust to a HD simulation by adding multiple dust fluids. These fluids follow the Euler equations with vanishing pressure [@rjl:dust] and couple to the gas fluid through a drag force term. Each dust fluid has its own physical properties such as grain size and grain material density. Typically we use multiple dust fluids with the same grain material density and different grain sizes to model the size distribution in the ISM.\ For the 3D simulations we use four levels of adaptive mesh refinement (AMR), resulting in an effective resolution of $448\times 1792\times448$ cells. The triggering of extra refinement levels is based on a combination of the gradients in the gas fluid and those in the dust fluid representing the largest grains. Because the actual physical domain is cube shaped, this resolution results in a four time higher resolution perpendicular to the flow (see section \[physSetup\]). This is necessary to resolve all small-scale variations that develop during the linear (and also the nonlinear) phase of the instability. The solution of the coupled gas+dust fluid equations is advanced using a total variation diminishing Lax-Friedrich (TVDLF) scheme with a two-step predictor-corrector time discretisation and a monotonised central (MC) type limiter [@1977JCoPh..23..263V]. To ensure stable time-stepping the timestep is limited by using a CFL number of 0.6 for gas and dust, as well a separate dust acceleration criterion based on the stopping time of dust grains [@2012MNRAS.420.2345L]. Radiative transfer {#RadTrans} ------------------ To be able to directly compare the output from the 3D hydrodynamical simulations with observations, post-processing of the data is performed with the Monte Carlo radiative transfer code SKIRT [@2011ApJS..196...22B; @Camps201520]. SKIRT simulates continuum radiation transfer in dusty astrophysical systems by launching a set of photon packages in a given wavelength range through the dust distribution obtained from our dynamical simulations. These packages are followed for several cycles of multiple anisotropic scattering, absorption and (re-)emission by interstellar dust, including non-local thermal equilibrium dust emission by transiently heated small grains. Emission from stochastically heated grains is used in all the results in this work and typically around 4 dust emission cycles are needed to come to equilibrium.\ To launch the packages into the domain, we use a (stellar) point-source at a given distance outside of the simulated domain as our source of initial photons. Photon packages in a wavelength range between 0.01 $\mu$m and 1000 $\mu$m are incorporated. In SKIRT we use exactly the same distribution of dust species as the one obtained from MPI-AMRVAC, meaning that the mass density distribution of the four dust fluids is used for each representative part of the grain size distribution and that, just like in the HD simulations, we adopt silicate properties for the grains in the radiative transfer. Results ======= 2D analysis {#2Dcomp} ----------- ![Growth of the kinetic energy perpendicular to the bulk flow. The MHD and HD simulation that take into account the magnetic pressure are similar, while the HD simulation without magnetic pressure behaves differently. 
The 3D setup is also shown up to $t=0.01$ and has a growth rate similar to that of the 2D setup.[]{data-label="fig:linGrowth"}](./linGrowth.jpg){width="\columnwidth"} ![Gas density plots of the KHI in 2D and 2.5D after the end of the linear phase. The density units are in g cm$^{-3}$. In all figures the entire domain (0.33 pc $\times$ 0.33 pc) is shown. **Left:** A 2D simulation of the KHI in HD with dust and an artificial magnetic pressure term $p_M$ added to the total pressure at $t=0.007$ (6.84$\times10^4$ years). **Centre:** The same setup, but in 2.5D MHD with a magnetic field perpendicular to the plane, also at $t=0.007$. **Right:** A 2D HD simulation without the effect of a magnetic field added into the total gas pressure, at $t=0.02$ (1.95$\times10^5$ years). Note this figure is taken at a different time as the linear phase end later in this case.[]{data-label="fig:magNomag"}](./endLinear2.jpg){width="\columnwidth"} To prove that an MHD setup with the magnetic field component perpendicular to the flow direction and parallel to the boundary layer can be reasonably approximated by a similar setup in HD but with added pressure, we simulate the setup discussed in sections \[physSetup\] and \[magp\] first in 2D, but in three variations: a HD simulation without a magnetic contribution, an MHD setup with magnetic field, and an HD simulation with the magnetic field contribution added to the pressure. The MHD setup is actually simulated in 2.5D, as it includes the information of the velocity and magnetic field perpendicular to the simulated plane. The simulated plane in 2D corresponds to a slice in the 3D simulation perpendicular to the $x-y$ plane and through the centre of the simulated domain. In figure \[fig:linGrowth\] the buildup of kinetic energy perpendicular to the flow direction is shown for all three 2D setups, and for the 3D run discussed further on. Clearly, for the MHD setup and the HD plus magnetic pressure setup the growth rate in the linear regime (up to $t=0.006$ in code units, or $\sim$ 5.87$\times10^4$ years) is the same. The growth rate is significantly slower when the magnetic pressure is ignored. Also, figure \[fig:magNomag\] shows that the formed structures are of similar size and shape in the two simulations where the magnetic pressure is taken into account. Small differences include the formation of small-scale structures on top of the larger structure. These small-scale perturbations are also present in the HD setup, but develop faster in the MHD simulation. The reason that they are less apparent in the HD simulation is because in the MHD case they seemingly grow faster due to small inhomogeneities (a decrease by $\approx 2\%$) in the magnetic field, leading to numerical differences that accumulate over time. When the magnetic pressure is not taken into account, it can be seen in figure \[fig:magNomag\] that the morphology is very different. Because the total pressure is lower, the Mach number for the flow at the boundary is higher, causing shocks to propagate. These shocks also cause the striped structure in the high density region. We will now further discuss a full 3D gas plus dust setup that has the pressure adjusted to account for the magnetic pressure effects. 3D model -------- In figure \[fig:linGrowth\] it can be seen that the growth rate of the 3D simulation is comparable to that of the 2D simulations in which the effect of the magnetic field is taken into account. 
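The growth rates in figure \[fig:linGrowth\] are based on the kinetic energy perpendicular to the bulk flow. A minimal sketch of such a diagnostic, assuming $E_{\perp}=\sum \tfrac{1}{2}\rho v_y^2\,\Delta V$ on a uniform grid and a log-linear fit over the linear phase (the data below are synthetic stand-ins, not actual simulation output), could look as follows:

```python
import numpy as np

def perp_kinetic_energy(rho, vy, dV):
    """E_perp = sum(0.5 * rho * vy**2) * dV for a snapshot on a uniform grid
    with cell volume dV; vy is the velocity component perpendicular to the
    bulk flow and to the initial separation layer."""
    return 0.5 * np.sum(rho * vy**2) * dV

def linear_growth_rate(times, E_perp):
    """Least-squares slope of ln(E_perp) versus time.  For a single mode
    growing as exp(gamma*t) the energy grows as exp(2*gamma*t), so the
    returned slope estimates 2*gamma."""
    slope, _ = np.polyfit(times, np.log(E_perp), 1)
    return slope

# Synthetic stand-in for a series of snapshots during the linear phase:
times = np.linspace(0.001, 0.006, 6)        # code units
E_perp = 1.0e-8 * np.exp(900.0 * times)     # fake exponential growth
print("estimated 2*gamma =", linear_growth_rate(times, E_perp))
```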
Due to the added computational cost in 3D, this simulation is only followed until $t=0.01$ in code units, or up to about 9.78$\times10^4$ year. ### Dust distribution {#dustDistri} In previous work we found that in a 3D setup with the same density on both sides of the separation layer, the KHI can cause the dust density to increase by almost two orders of magnitude. These strong increases in dust density occur in filament-like locations between the vortices when dust is swirled out of the vortices and compressed into these regions. This process if strengthened further by additional 3D instabilities. Also, it was found that the process of dust density enhancement is stronger for larger dust particle sizes. Figure \[fig:maxDens\] shows that in the setup used here the growth in local dust density is less strong. During the end of the linear phase, i.e. up to time $t=0.006$ in figure \[fig:maxDens\], the maximal density increases gradually, and the rate of increase is proportional to the grain size. In the further nonlinear stage the densities still increase, however the relation between instantaneous local maximal density and grain size gets modified. Similarly to what was seen in , the density enhancements are significantly stronger in 3D than in 2D, where the maximum increase is less than $15\%$ for all dust species in the 2D case with magnetic pressure added. Clearly, 3D effects are paramount when studying dust growth.\ ![Time evolution of the maximal density enhancements in the 3D simulation for all four dust fluids, with *dust 1* representing the smallest grains (7.9 nm) and *dust 4* the largest grains (189 nm).[]{data-label="fig:maxDens"}](./maxDens.jpg){width="0.9\columnwidth"} The dust density enhancements are strongest in three distinct regions, which are indicated in figure \[fig:regions\]. Chronologically dust first accumulates in the convex outer region of the KH wave (the region labeled with 1 in figure \[fig:regions\]). This is due to the acceleration of dust by gas in the concave region when the gas swirls around the low pressure region created by the KHI. Next, the arc-like structure below the surface of the wave, i.e. region number 2 in figure \[fig:regions\], is formed. This region forms when the KHI accelerates the bulk of the gas upward into the low density region, and the dust is dragged with it. The location of the region is caused by a gradient in the drag strength, as the velocity difference between gas and dust is stronger under the region than above, causing the underlying dust to overtake the dust above it. The third dust gathering region is along the boundary between high and low density regions in between two successive waves or KHI rolls. A dust pile-up is seen here in the nonlinear stage when the velocity of the gas around the low pressure vortex is highest. In animated views one can see how the end point of the flow that passes over the crest of the waves moves from location 1 to a spread out region all along the density boundary, i.e. up to location 3 as indicated.\ While dust density increases up to a factor 10 are observed in these three regions for the four dust species, the actual location of these dust-gathering regions does not necessarily fully coincide for all dust species, similar to the findings in  where a clear size-separation was evident. Also, the actual importance of the three regions is distinct for different grain sizes. Therefore, the increase of the total dust density will be less strong and distributed over a larger region. 
Furthermore, the strongest increases can be found in small local clumps, as can be seen in figure \[fig:rhodTot\], visualising the total dust density concentrations. Quantitatively speaking, while 14.76$\%$ of the total volume experiences a total dust density enhancement of more than 5$\%$, in only 0.03$\%$ of the total volume the total dust density more than doubles (regions indicated in orange and red in figure \[fig:rhodTot\]). This is in contrast with the 3D simulations in , where the high density dust is found in long filamentary structures and more than 4.5 $\%$ of the volume exhibits a doubling of the total dust density. The main differences reside in the adopted initial density contrast, as well as the fact that here only the molecular cloud region initially had dust. ![Density of the largest dust species ($a=189$ nm) in a slice from the 3D simulation ($z=0.165$pc) at $t=0.0065$ (6.36$\times10^4$ years). Only a part of the simulated region with an extend of 0.138 pc in the $x$-direction is shown. Three distinct regions of dust density enhancement are indicated with labels 1, 2 and 3 discussed in the text. The velocity field of the largest dust species in the $x-y$ plane is indicted with the use of vectors, the largest velocity are around $6 \times 10^5$ cm s$^{-1}$.[]{data-label="fig:regions"}](./flowkh2.jpg){width="0.8\columnwidth"} ![Volume plot of the total dust density at $t=0.01$ (9.78$\times10^4$ years). Only densities higher than the initial maximum density ($\rho_d = 1.67\times 10^{-22}$ g cm$^{-3}$) are visualised.[]{data-label="fig:rhodTot"}](./rhodTot_frame50_quad.jpg){width="\columnwidth"} Modelling observations ---------------------- In the previous section we have outlined how the model setup from section \[physSetup\] evolves into a nonlinear 3D KHI. Next, we investigate how the simulated structures would look in synthetic observations. As described in section \[RadTrans\], the dust distribution of our 3D simulations is used as input for the SKIRT radiative transfer code. To see to which degree our simulations correspond to the actual observed structures (figure \[fig:berne\]), in addition to the hydrodynamical setup one has to take into account the orientation in relation to the observer, as well as the location of the light source(s). @2010Natur.466..947B indicated that the star $\theta^1$ Orionis C, a massive type O7V star located in the H$_{II}$ Trapezium region at a distance of $\sim$ 3.4 pc from the cloud, illuminates the ripples from behind with respect to the observer. In SKIRT the radiation of this star is simulated by adding a point source of photons at $d=3.4$ pc and inclination $\alpha$ with respect to the initial separation layer in the HD simulation, as illustrated in figure \[fig:geo\]. For the radiation of the star we use a model spectrum from with corresponds to a star with physical properties comparable to those of $\theta^1$ Orionis C[^2]. The location of the observer with respect to the simulated domain must also be specified in SKIRT. As shown in figure \[fig:geo\], the observer is placed at an angle $\beta$ with respect to the initial separation layer in the HD simulation.\ ![Geometry of the stellar object (photon source) and observer location with respect to the structures in Orion, designated by independent angles $\alpha$ and $\beta$, respectively. In this image, the location of the source and observer are shown with respect to the KH features at t=0.084 (8.21$\times10^4$ years). 
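Statistics such as the maximal enhancement and the volume fractions quoted above can be obtained with a few lines of post-processing. The sketch below uses the initial maximum dust density $\rho_d = 1.67\times 10^{-22}$ g cm$^{-3}$ as reference and a random cube as a stand-in for the actual simulation data; only the bookkeeping, not the numbers, is meant to carry over.

```python
import numpy as np

def enhancement_stats(rho_dust, rho_ref, thresholds=(1.05, 2.0)):
    """Maximum enhancement factor and volume fractions of cells whose total
    dust density exceeds rho_ref by the given factors (uniform grid)."""
    ratio = rho_dust / rho_ref
    fractions = {thr: float(np.mean(ratio > thr)) for thr in thresholds}
    return float(ratio.max()), fractions

# Stand-in data: a lognormal cube instead of the real total dust density.
rng = np.random.default_rng(0)
rho_ref = 1.67e-22                                   # g cm^-3
cube = rho_ref * rng.lognormal(mean=0.0, sigma=0.25, size=(64, 64, 64))

max_enh, vol_frac = enhancement_stats(cube, rho_ref)
print(f"max enhancement factor: {max_enh:.2f}")
print("volume fractions above thresholds:", vol_frac)
```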
The black-white image is actually a SKIRT image at 54 $\mu$m, where we see the radiation which is coming from dense and heated dust in the billow structures formed by the KHI. In this image, the observer is located perpendicular to the $x-y$ plane.[]{data-label="fig:geo"}](./geo.jpg){width="\columnwidth"} Because the actual inclination between the observer, the billows and the background radiation source are hard to gauge from the observation, several different values of $\alpha$ and $\beta$ were tried to investigate their role. Table \[table:1\] gives an overview of several SKIRT geometries we will discuss here. An interesting setup to look at first is case D (figure \[fig:BDAC\], top right). With this arbitrary choice for the geometry ($\alpha=60^{\degree}$ and $\beta=90^{\degree}$) the result is rather different from the observations. While some periodicity is observable, no sharp elongated structures are seen. The diffuseness of the radiation in case D can be seen to be inherent to an observer angle of $90^{\degree}$. Figure \[fig:perp\] demonstrates that when going from $t=0.0082$ in E to $t=0.01$ in G, while the onset of the nonlinear phase increases the development of small-scale features (as discussed in section \[dustDistri\]), the emission in the nonlinear phase remains diffuse in both cases.\ In figure \[fig:geo\] we see that the emission at 54 $\mu$m is strongest where the dust is directly radiated by the source, but the colder dust inside the KH billows also radiates at this wavelength. At shorter wavelengths such as 8.25 $\mu$m, the direct light is the more important and only dust close to the edges of the billows radiates. To get features more reminiscent of the observations we can use this knowledge to consider two changes to the geometry of the source and the observer. On the one hand, the angle $\alpha$ can be chosen to maximise the photons from the source reaching the protruding billows and not the rest of the cloud, which increases the amount of observed photons in a more compact location. Nevertheless, the effect of changing $\alpha$ is small at 8.25 $\mu$m, as demonstrated by comparing cases A to C and B to D in figure \[fig:BDAC\]. On the other hand the observers angle $\beta$ can be chosen to be along the billows, maximising the perceived compactness. The change in observer angle has a much stronger impact. Changing $\beta$ from $90^{\degree}$ in case B to $\beta=128^{\degree}$ in case A clearly decreases the thickness of the features, increases the flux in the elongated regions, and enhances the contrast between the bright en dark regions. The choice for “optimal angles" is illustrated in figure \[fig:geo\]. The values we find are $\alpha=51^{\degree}$ and $\beta=128^{\degree}$. These values are used in cases F and H (figure \[fig:opti\]). Using this geometry, a fair approximation of the real observations can be made, at a comparable wavelength. The evolution from case F into H again displays the formation of the small scale structures in the nonlinear phase, on a scale which is comparable to the local bends in the infrared observations. 
  Case   $\alpha$   $\beta$   time
  ------ ---------- --------- --------
  A      40         128       0.0082
  B      40         90        0.0082
  C      60         128       0.0082
  D      60         90        0.0082
  E      51         90        0.0082
  F      51         128       0.0082
  G      51         90        0.01
  H      51         128       0.01

  : Summary of the SKIRT radiative transfer models, with $\alpha$ the angle between the star and the cloud, $\beta$ the angle between the cloud and the observer (see figure \[fig:geo\]), and the time in code units.[]{data-label="table:1"}

![SKIRT simulations of the same dataset with different geometries. From left to right and top to bottom: B, D, A, C. Within each row the observer's angle $\beta$ is the same ($\beta = 90^{\degree}$ in the top row, $\beta = 128^{\degree}$ in the bottom row) and the same scaling is used. Note that the flux quantification is arbitrary here and no effort has been taken to compare these to real values. Within each column the irradiation angle $\alpha$ is constant ($\alpha = 40^{\degree}$ left, $\alpha = 60^{\degree}$ right). All images are observed at 8.25 $\mu$m. []{data-label="fig:BDAC"}](./BDAC_sameScale_tag.jpg){width="1.03\columnwidth"} ![Synthetic observation of the KHI at 8.25 $\mu$m, with fixed observational angle $\beta=90^{\degree}$ and $\alpha=51^{\degree}$ (cases E and G). Two different times are shown, left: $t=0.0084$ and right: $t=0.01$ ($8.21\times10^4$ and $9.78\times10^4$ years, respectively). During this interval the development of small-scale perturbations in the nonlinear phase can be seen. A linear scale is used for the intensity of the images.[]{data-label="fig:perp"}](./perp_hori.jpg){width="1.05\columnwidth"} ![Synthetic observation of the KHI at 8.25 $\mu$m, with observational angle $\beta=128^{\degree}$ and $\alpha=51^{\degree}$ (cases F and H). Two different times are shown, left: $t=0.0084$ and right: $t=0.01$ ($8.21\times10^4$ and $9.78\times10^4$ years, respectively). In comparison to the images at $\beta=90^{\degree}$, the features of the KHI are more pronounced and clearly distinguishable from the background. A linear scale is used for the intensity of the images. []{data-label="fig:opti"}](./opti_hori.jpg){width="1.05\columnwidth"} Conclusions =========== In the previous sections, we have modelled a region of the Orion molecular cloud in which elongated ripple features are observed. To do so, we have built upon previous numerical models, and expanded these to full 3D dusty hydrodynamics coupled to a radiation transfer code designed for simulating dusty astrophysical systems. The synthetic images allow a direct comparison with the observations. In the infrared observations, the ripples are thin, elongated features that have a clear periodicity and are sharp and bright compared to the background radiation. All these features can also be reproduced by our model. The hydrodynamical simulations confirm that the previously proposed setup is indeed KH unstable for the observed spatial wavelength. We find that the dynamical contribution of dust with a size distribution typical for the ISM does not inhibit the formation of the KHI, and the growth rate in 3D is similar to that of the 2D simulation. We see that the presence of a background star is able to light up the features of the KH billows. Also, the synthetic images demonstrate clearly that the geometry is of great importance in distinguishing the KH features from the background.
Observers located in a direction perpendicular to the shearing layer would observe some periodicity, however with shallow features over a continuous background, while observers which look along the formed billows observe them very sharp and bright compared to the background. Nevertheless, even when considering the most optimal geometry, the ripples are still somewhat wider than the sharp ripples of the observations. Additional to geometrical effects, the sharp features may point to strong local density increases in the dust, however in contrast to our previous investigation of dusty KHI only small increases in dust density are seen here, and the highest increases are found in small and compact clumps and not elongated regions. The treatment of additional physics such as self gravity and magnetic fields may lead to these additional density increases as was shown for larger scale structures in @2014ApJ...789...37V. It is unclear if a significant effect would also be expected here, as in section \[2Dcomp\] the magnetic field only causes minor deviations in the 2D setup. For simulations in 3D, the strong magnetic field (plasma $\beta_{pl}=0.0173$) may somewhat alter the outcome of the simulations in the nonlinear phase, when secondary 3D instabilities break the earlier quasi-2D behaviour. @2000ApJ...545..475R demonstrated that even weak magnetic fields can be of importance in the nonlinear regime. While a strong magnetic fields may suppress the growth of hydrodynamical perturbations perpendicular to the fields, @2007JGRA..112.6223M find that in cases with plasma beta as low as $\beta_{pl}=0.1$ secondary 3D instabilities also occur and cause small scale fragmentation along the initial magnetic field, however at a stage far in the nonlinear regime. The resulting influence of the 3D magnetic field on the dynamics of the dust grains, and thus also the observed structures, is further complicated by the unknown charge of the dust grains. While for example @2012ApJ...747...54H have calculated mean grain charging as function of grain sizes for different ISM phases, the charging of grains can be location dependant due to for example interaction with a radiation field, as is the case here. Fully taking into account the magnetic field would thus also require further assumptions to be made with regard to dust distribution as a function of the both the size and the charge. Furthermore, the strength of the magnetic field is one of the less constrained parameters in the model; while the value in the model ($B = 200$ $\mu$G) is representative for surrounding regions, no local measurements of orientation and strength exist to our knowledge. As the magnetic pressure is shown to be of importance in finding the correct value for the growth rate (section \[2Dcomp\]), the outcome would be different if a different magnetic field was assumed. This would especially be the case for different relative orientations of this field and the flow shear.\ Another important factor which may change the outcome of the simulations is the actual width of the shearing layer between the hot medium and the molecular cloud. The width is an important parameter in the evaluation of the stability and growth of the KHI instability. The value used here ($D = 0.01$ pc) is in analogy with the value of @2012ApJ...761L...4B where it is argued that this value represents the width of the photodissociation region (PDR), where molecular gas is dissociated by the far ultraviolet photons of the background star $\theta^1$ Orionis C. 
Nevertheless, as discussed in the supplement of @2010Natur.466..947B, actually a broader ($\sim 0.1$ pc) photo-ablation region forms between the PDR and the hot medium. Due to its thickness this region may inhibit the formation of the KHI with wavelengths in the range of the observed periodicity in the ripples or shorter, as a boundary layer of thickness $D$ inhibits the growth of perturbations with $\lambda < 4.91 D$ . Additionally it should be noted that the effect of heat conduction, which has not been included in this work, can be of importance in the formation of the shearing layer between the hot medium and the molecular cloud. Indeed, demonstrate that heat conduction can reduce the steepness of the velocity gradient between the cloud and a streaming flow, stabilising the surface of the cloud against the development of the KHI.\ While these remarks demonstrate that additional physics may be needed to understand the full range of interactions occurring in the Orion nebula, in this work we tried to model the observations of its KH ripples in full detail. We demonstrated that a full treatment of gas and dust dynamics, including a range of dust sizes, coupled with radiative transfer provides a promising approach to explaining the observations. Even though the physical values in the models are prone to intrinsic observational uncertainties or assumptions, we see that these values are reasonable in reproducing most of the features when the most optimal geometrical model is used. We acknowledge financial support from project GOA/2015-014 (KU Leuven) and by the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy Office (IAP P7/08 CHARM). Part of the simulations used the infrastructure of the VSC - Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government - Department EWI. [^1]: The random function $\textrm{rand}$ generates a random floating point value between 0 and 1, while the $\textrm{rect}$ function (also called “rectangular function”) is one between $-0.5$ and $0.5$ and zero elsewhere. [^2]: Model T46p1\_logg4p05.sed from <http://www.mpe.mpg.de/~martins/SED.html>
--- author: - 'Shijie Li, Youcai Zhang, Xiaohu Yang, Huiyuan Wang, Dylan Tweed, Chengze Liu, Lei Yang, Feng Shi, Yi Lu, Wentao Luo, Jianwen Wei' bibliography: - 'bibtex.bib' title: An empirical model to form and evolve galaxies in dark matter halos --- Introduction {#sect:intro} ============ Galaxies are thought to form and evolve in cold dark matter (CDM) halos; however, our understanding of the galaxy formation mechanisms and of the interaction between baryons and dark matter is still quite poor, especially quantitatively (see @2010gfe..book.....M for a detailed review). Within hydrodynamic cosmological simulations, the evolution of the gas component is described on top of the dark matter, with extensive implementation of cooling, star formation and feedback processes. Such detailed implementation of galaxy formation within a cosmological framework requires vast computational time and resources (@2005Natur.435..629S). The formation of dark matter halos, however, can be easily followed and interpreted: halo merger trees can be derived directly from $N$-body simulations, or constructed through Monte Carlo methods. Within those trees, sub-grid models can be applied on the scale of the DM halos themselves. Such models are referred to as semi-analytic models (hereafter SAMs), and provide the means to test galaxy formation models at a much lower computational cost (@2007MNRAS.377...63C). In SAMs, some simple equations describing the underlying physical ingredients regarding the accretion and cooling of gas, star formation, etc., are connected to the dark matter halo properties, so that the baryons can evolve within the dark matter halo merger trees. The related free parameters in these equations are tuned to statistically match some physical properties of observed galaxies. The basic principles of modern SAMs were first introduced by [@1991ApJ...379...52W]. Subsequently, numerous authors participated in the study of such models and made great progress (e.g. @1993MNRAS.264..201K; @1998MNRAS.295..319M; @1999MNRAS.310.1087S; @2000MNRAS.319..168C; @2004MNRAS.348..333D; @2005ApJ...631...21K; @2006MNRAS.365...11C; @2006MNRAS.370..645B; @2007MNRAS.375.1189M; @2011MNRAS.413..101G). Through the steerable parameters, SAMs have reproduced many statistical properties of large galaxy samples in the local universe such as luminosity functions, galactic stellar mass functions, correlation functions, Tully-Fisher relations, metallicity-stellar mass relations, black hole-bulge mass relations and color-magnitude relations. However, the main shortcoming of SAMs is that there are too many free parameters and degeneracies. Despite the successes of these galaxy formation models, the sub-grid physics is still poorly understood (@2012NewA...17..175B). By tuning the free parameters, the SAM predictions can match some of the observed galaxy properties under consideration, especially in the local universe. However, none of the current SAMs can match the low and high redshift data simultaneously (@2012MNRAS.423.1992S). Traditionally, parameters have been set without providing a clear statistical measure of success for a combination of observed galaxy properties. As a SAM costs much less computation time than a full hydrodynamical galaxy formation simulation, one can explore a wide range of parameter space within an acceptable time. To better constrain the SAM parameters, the Monte Carlo Markov Chain (MCMC) method has been applied to SAMs in recent years.
The first paper that incorporated MCMC into SAM is [@2008MNRAS.384.1414K], which used the star formation rate and metallicity as model constraint. Some other SAM groups also have developed their own models associated with the MCMC method (e.g. @2009MNRAS.396..535H [@2013MNRAS.431.3373H];@2010MNRAS.405.1573B; @2010MNRAS.407.2017B; @2011MNRAS.416.1949L [@2012MNRAS.421.1779L]; @2013MNRAS.428.2001M). The details of MCMC are beyond the aims of this paper, we refer the readers to these relevant literatures (@2007nrca.book.....P; @2008ConPh..49...71T). As pointed out in [@2010MNRAS.405.1573B], our understanding of galaxy formation is far from complete. SAMs should not be thought of as attempts to provide a final theory of galaxy formation, but instead to provide a mean by which new ideas and insights may be tested and by which quantitative and observationally comparable predictions may be extracted in order to test current theories. Because of the large number of free parameters, new ideas and sights relevant with the sub-grid physics may often bring new degeneracies with increased complexity and uncertainties to the model either traditional SAM or MCMC. In general, if we take a step back from SAMs, we find that the largest part of the parameters and uncertainties are related to the sub-grid physics implemented for the gas. Focussing the model on the formation and evolution of the [*stars*]{} within dark matter halos, the vast majority of the uncertainties in SAM related with the gas component will be reduced. Understanding the relation between dark matter halos and galaxies is a vital step to model galaxy formation and evolution in dark matter halos. In recent years, we have seen drastic progress in establishing the connection between galaxies and dark matter halos, such as the halo occupation distribution (HOD) models (e.g. @1998ApJ...494....1J, @2002ApJ...575..587B, @2005ApJ...630....1Z, @2010MNRAS.406..147F, @2011ApJ...738...22W, @2012ApJ...751L..44W, @2012ApJ...744..159L), and the closely related conditional stellar mass (or luminosity) function models(@2003MNRAS.339.1057Y, @2003MNRAS.340..771V, @2006ApJ...647..201C, @2007MNRAS.376..841V, @2009ApJ...695..900Y, @2012ApJ...752...41Y, @2015ApJ...799..130R). The former make use of the clustering of galaxies to constrain the probability of finding $N$ galaxies in a halo of mass $M$. While the latter make use of both clustering and luminosity(stellar mass) functions to constrain the probability of finding galaxies with given luminosity (or stellar mass) in a halo of mass $M$. In a recent study, [@2012ApJ...752...41Y](hereafter Y12) proposed a self-consistent model properly taking into account (1) the evolution of stellar-to-halo mass relation of central galaxies; (2) the accretion and subsequent evolution of satellite galaxies. Based on the host halo and subhalo accretion models provided in [@2009ApJ...707..354Z] and [@2011ApJ...741...13Y], Y12 obtained the conditional stellar mass functions (CSMFs) for both central and satellite galaxies as functions of redshift. Based on the mass assembly histories of central galaxies, the amount of accreted satellite galaxies and the fraction of surviving satellite galaxies constrained in Y12, we obtained the star formation histories (SFH) of central galaxies in halos of different masses (@2013ApJ...770..115Y). Similar SFH models were also proposed based on $N$-body or Monte Carlo merger trees (e.g., @2013MNRAS.428.3121M; @2013ApJ...770...57B; @2014MNRAS.439.1294L). 
These SFH maps give us the opportunity to grow galaxies in $N$-body simulations without the need to model the complicated gas physics. In such models, referred to as empirical models (EMs) of galaxy formation, the growth of galaxies is statistically constrained using observational data. This paper is organized as follows. In section 2, we describe in detail our simulation data and EM model. In section 3, we show our model predictions associated with the stellar masses of galaxies. The model predictions related to the luminosity and HI gas components are presented in section 4. Finally, in section 5, we present our conclusions and discuss the applications of our model and the galaxy catalog thus constructed. Simulation and our empirical model {#sec_model} ================================== The simulation -------------- ![Halo mass function of the simulation. The black curve and cyan circles represent respectively the [@2001MNRAS.323....1S] (SMT2001) analytic prediction and data extracted from the L500 simulation. []{data-label="fig:hmf"}](fig1.eps){width="8.5cm"} Similar to the SAMs, our EM also starts from dark matter halo merger trees. In this study we use dark matter halo merger trees extracted from a high resolution $N$-body simulation. The simulation describes the evolution of the phase-space distribution of $3072^{3}$ dark matter particles in a periodic box of $500 {\>h^{-1}{\rm {Mpc}}}$ on a side. It was carried out at the Center for High Performance Computing, Shanghai Jiao Tong University. This simulation, hereafter referred to as L500, was run with [L-GADGET]{}, a memory-optimized version of [GADGET2]{} (@2005Natur.435..629S). The cosmological parameters adopted by this simulation are consistent with the WMAP9 results as follows: $\Omega_{\rm m} = 0.282$, $\Omega_{\Lambda} = 0.718$, $\Omega_{\rm b} = 0.046$, $n_{\rm s}=0.965$, $h=H_0/(100 {\>{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}}) = 0.697$ and $\sigma_8 = 0.817$ (@2013ApJS..208...19H). The particle masses and softening lengths are, respectively, $3.3747\times10^{8}{\>h^{-1}\rm M_\odot}$ and $3.5 h^{-1}\rm kpc$. The simulation is started at redshift 100 and has 100 outputs from z=19, equally spaced in $\log$ (1 + z). Dark matter halos were first identified with the friends-of-friends (FOF) algorithm, using a linking length of $0.2$ times the mean particle separation and keeping only halos containing at least $20$ particles. The corresponding dark matter halo mass function (MF) of this simulation at redshift $z=0$ is represented by cyan circles in Fig. \[fig:hmf\], while the black curve corresponds to the analytic model prediction by [@2001MNRAS.323....1S] (SMT2001). The halo mass function of this simulation is in good agreement with the analytic model prediction in the relevant mass range. Based on halos at different outputs, halo merger trees were constructed [@1993MNRAS.262..627L]. We first use the SUBFIND algorithm (@2001MNRAS.328..726S) to identify the bound substructures within the FOF halos or FOF groups. In a FOF group, the most massive substructure is defined as the main halo and the other substructures are defined as subhalos. Each particle contained in a given subhalo or main halo is assigned a weight which decreases with the binding energy. For a given (sub)halo, we then find all main halos and subhalos in the subsequent snapshot that contain some of its particles. The descendant of any (sub)halo is chosen as the one with the highest weighted count of common particles. This criterion can be understood as a weighted maximum shared merit function (see @2005Natur.435..629S for more details).
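Schematically, this weighted descendant selection can be sketched as follows; the specific weight $w_i \propto (i+1)^{-2/3}$, with $i$ the binding-energy rank of a particle, is only an illustrative choice on our part and not necessarily the weighting used in the actual tree construction.

```python
import numpy as np

def descendant_index(prog_ids, candidate_id_sets, alpha=2.0 / 3.0):
    """Pick the descendant of a (sub)halo as the candidate (sub)halo in the
    next snapshot with the highest weighted count of shared particles.

    prog_ids          : particle IDs of the progenitor, most bound first
    candidate_id_sets : list of sets of particle IDs, one per candidate
    alpha             : weights w_i = (i + 1)**(-alpha) decrease with the
                        binding-energy rank i (illustrative choice)
    """
    weights = (np.arange(len(prog_ids)) + 1.0) ** (-alpha)
    scores = []
    for ids in candidate_id_sets:
        shared = np.fromiter((pid in ids for pid in prog_ids), dtype=bool)
        scores.append(weights[shared].sum())
    return int(np.argmax(scores)) if scores else -1

# Toy example: the second candidate holds the most bound particles and wins,
# even though the first candidate shares more particles in total.
prog = [11, 12, 13, 14, 15, 16]                   # most bound first
cands = [{13, 14, 15, 16, 99}, {11, 12, 100}]
print("descendant:", descendant_index(prog, cands))   # -> 1
```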
Note that, for some small halos, the tracks of which are temporarily lost in subsequent snapshot, we skip one snapshot in finding their descendants. These descendants are called “non-direct descendant". The empirical model of galaxy formation {#sec_sfr} --------------------------------------- Unlike any SAM where each halo initially gets a lump of hot gas to be eventually turned into a galaxy (@2006RPPh...69.3101B), our EM starts with stars. Here we make use of the SFH map of dark matter halos obtained by [@2013ApJ...770..115Y] to grow galaxies. In our EM of galaxy formation, *central* and *satellite* galaxies are assumed to be located at the center of the main halos and subhalos respectively. Their velocities are assigned using those of the main halos and subhalos. For those satellite galaxies whose subhalos are disrupted, (e. g. orphan galaxies) the host halo is populated according to its NFW profile. Their velocities are assigned according to the halo velocity combined with the velocity dispersion (see @2004MNRAS.350.1153Y for the details of such an assignment). Apart from the obvious issue of positioning mock galaxies, we have to implement the stellar mass evolution. For central and satellite galaxies, stellar mass $M_{\star, c}(t_2)$ at a time $t_2$ is derived by adding to the stellar mass $M_{\star, c}(t_1)$ at a time $t_1$ the contribution from star formation $\Delta M_{\star,c}(t_1)$ and disrupted satellites $\Delta M_{\star, dis}(t_1)$ as follows: $$\label{eq:central} M_{\star}(t_2) = M_{\star}(t_1) + \Delta M_{\star}(t_1,t_2) + \Delta M_{\star, dis}(t_1,t_2)$$ Obviously before implementing these models, the galaxies have to be seeded. For each halo and subhalo, we follow the merger tree back in time to determine the earliest time output (at $t_{\rm min}$) when it was identified as a halo (at least 20 particles). Then a seed galaxy with initial stellar mass $ M_{\star}(t_{\rm min})$ is assigned to this halo at the beginning redshift. Here the stellar mass is assigned according to the central-host halo mass relation obtained by [@2012ApJ...752...41Y], taking into account the cosmology of our simulations. We note that only halos with direct descendants are seeded. ### Star formation of central galaxies We first model the growth of [*central*]{} galaxies that are associated with the host (main) halos. Listed below are the details. - In order to integrate the contribution of star formation between snapshots corresponding to times $t_1$ and $t_2=t_1+\Delta T$, we increase the time resolution by defining smaller timesteps $\Delta t=\Delta T/N$. Here we choose $N=5$, since greater values have very limited impact on the results. We also assume that the SFR is constant during any time step $\Delta t$. - Then we estimate ${\dot M}_{\star}(t)$ the SFR of central galaxy at time $t$ in a halo with mass $M_{\rm h}$. As shown in [@2013ApJ...770..115Y], the distribution of SFR of central galaxies have quite large scatters around the median values and show quite prominent bimodal features. To partly take into account these scatters, for each timestep $\Delta t$, the star formation rate ${\dot M}_{\star}(t)$ is drawn from a lognormal distribution of mean ${\dot M}_{\star, 0}(t)$ and dispersion $\sigma$. 
The SFR of central galaxies is thus set as: $$\label{eq:SFR} \log {\dot M}_{\star}(t) = \log {\dot M}_{\star, 0}(t) + \sigma \cdot N_{\rm gasdev} \, ,$$ where ${\dot M}_{\star, 0}(t)$ is the median SFR predicted by [@2013ApJ...770..115Y] and $N_{\rm gasdev}$ is a Gaussian random number of zero mean and unit variance generated using the Numerical Recipes code (@2007nrca.book.....P). Here we adopt a $\sigma=0.3$ lognormal scatter as suggested in [@2013ApJ...770..115Y]. - The stellar mass formed between the two snapshots, $\Delta M_{\star}(t_1,t_2)$, is then determined as: $$\label{eq:deltam} \Delta M_{\star}(t_1,t_2) = \sum_{t=t_1}^{t=t_2} {\dot M}_{\star}(t) \Delta t \, .$$ ### Star formation of satellite galaxies Having modeled the growth of central galaxies, we now turn to the satellite galaxies. We start by modeling their growth while they are still associated with subhalos. Once its host halo falls into a bigger one and becomes a subhalo, the SFR of the newly formed satellite is expected to decline as a function of time due to stripping and related effects. Here we use the star formation model for satellite galaxies proposed by [@2014MNRAS.439.1294L] to construct their star formation histories. A simple $\tau$ model is adopted in [@2014MNRAS.439.1294L] to describe the decline of the star formation rate: $$\label{eq:sfrsat} {\dot M}_{\star,{\rm sat}}(t) = {\dot M}_{\star}(t_{a}) \exp\left( -\frac{t-t_{a}}{\tau_{\rm sat}} \right) \, ,$$ where $t_{a}$ is the time when the galaxy is accreted into its host to become a satellite and ${\dot M}_{\star}(t_{a})$ is the corresponding SFR. $\tau_{\rm sat}$ is the exponential decay time scale characterizing the decline of the star formation for a galaxy of stellar mass $M_{\star}$. We adopt the following model for the characteristic time $$\label{eq:sfrsat_t} \tau_{\rm sat} = \tau_{\rm sat,0} \exp\left( -\frac{M_\star}{M_{\star,c}} \right) \, ,$$ where $\tau_{\rm sat,0}$ is the decay time scale for a galaxy with a stellar mass of $M_{\star,c}$. The values of $\tau_{\rm sat,0}$ and $M_{\star,c}$ used in our model are the best-fit values of MODEL III in [@2014MNRAS.439.1294L], with $\log (H_0 \tau_{\rm sat,0})=-1.37$ and $\log M_{\star,c}=-1.4$. The growth of the satellite stellar mass between two snapshots thus becomes: $$\label{eq:deltam_sat} \Delta M_{\star,sat}(t_1,t_2) = {\dot M}_{\star}(t_{a}) \exp\left( -\frac{t_2-t_{a}}{\tau_{\rm sat}} \right) \cdot \Delta T$$ ### Merging and stripping of satellite galaxies {#sec_merger} Apart from the [*in situ*]{} star formation, another important process in our model is the merging and stripping of satellite galaxies. The merging process has been studied extensively with hydrodynamical simulations (e.g. @2005ApJ...624..505Z; @2008MNRAS.383...93B; @2008ApJ...675.1095J). Here we assume that satellite galaxies orbiting within a dark matter halo experience dynamical friction and are eventually disrupted, with only a small fraction of their stars finally merging with the central galaxy of the halo. When a satellite can no longer be associated with a subhalo, we use a delayed merger scheme in which the satellite coalesces with the central after the dynamical friction timescale given by the fitting formula of [@2008ApJ...675.1095J]: $$\label{eq:disrupt} T_\mathrm{dyn} = 1.4188\frac{r_{\rm c}M_{\rm h}}{v_{\rm c}M_{\rm sub}} \frac{1}{\ln(1+\frac{M_{\rm h}}{M_{\rm sub}})} \,,$$ where $M_{\rm sub}$ and $M_{\rm h}$ are the [*halo*]{} masses associated with the satellite and central galaxies, respectively, evaluated at the timestep when the satellite galaxy was last found in a subhalo.
This formula is valid for a small satellite of halo mass $M_{\rm sub}$ orbiting at a radius $r_\mathrm{c}$ in a halo of circular velocity $v_\mathrm{c}$. The satellite is considered disrupted a time $\Delta t=T_\mathrm{dyn}$ after it was last found in a subhalo, at which point we transfer a fraction of its stellar mass to the central galaxy. The contribution of disrupted satellites thus follows $$\label{eq:merger} \Delta M_{\star, dis}(t_1,t_2) = f_\mathrm{merger} \sum M_{\star, sat}(t_\mathrm{sat})\, ,$$ where $M_{\star, sat}(t_\mathrm{sat})$ is the stellar mass of the in-falling satellite, determined when it was last found in a subhalo at $t_\mathrm{sat}$, and the sum runs over satellites with $t_\mathrm{sat}+T_\mathrm{dyn} \leq t_2$. $f_\mathrm{merger}$ is the fraction of the satellite stellar mass merged into the central galaxy; here $f_\mathrm{merger}=0.13$ is set to the best-fit value of MODEL III in [@2014MNRAS.439.1294L]. ### Passive evolution of galaxies {#sec_passive} Finally, we take into account the passive evolution of both central and satellite galaxies. Since we track the stellar mass composition of each galaxy as a function of time, the final stellar mass is determined as: $$\label{eq:finalm} M_{\star}(t_0) = M_{\star}(t_{\rm min}) \cdot f_{\rm passive}(t_0-t_{\rm min}) + \sum_{t=t_{\rm min}}^{t=t_0}\Delta {M}_{\star}(t) \cdot f_{\rm passive}(t_0-t) \, ,$$ where $f_{\rm passive}(t)$ is the mass fraction of stars remaining a time $t$ after their formation. We obtained $f_{\rm passive}(t)$ from [@2003MNRAS.344.1000B], courtesy of Stephane Charlot (private communication). Other star formation history models ----------------------------------- Many other star formation history models have been proposed in recent years (e.g. @2009ApJ...696..620C, @2013ApJ...770...57B). Here we also make use of the model constrained by [@2014MNRAS.439.1294L], in order to further test our empirical approach. This model is similar to ours in the sense that it also builds galaxies by predicting the SFR within halos and subhalos. In most of the results sections, the central and satellite galaxy properties predicted by this model are compared with our fiducial EM predictions. [@2014MNRAS.439.1294L] (hereafter Lu14) developed an empirical approach to describe the star formation histories of central and satellite galaxies. They assumed an analytic formula for the SFH of central galaxies with a few free parameters. The galaxies grow in dark matter halos based on halo merger trees generated with the extended Press-Schechter (EPS: @1991ApJ...379..440B [@1991MNRAS.248..332B]) formalism using a Monte Carlo method. Using different observational constraints, they obtained four different empirical models. Here we only use Model III of Lu14 for comparison with our model. In Lu14, the star formation rate of central galaxies is written as follows: $$\label{lu_cen} {\dot M}_\star = {\cal E} \frac{f_B M_{\rm vir}}{\tau_0} \left( 1+z \right)^{\kappa} (X+1)^{\alpha} \left(\frac{X+\mathcal{R}}{X+1}\right)^{\beta} \left(\frac{X}{X+\mathcal{R}} \right)^{\gamma} \,,$$ where ${\cal E}$ is an overall efficiency; $f_B$ is the cosmic baryonic mass fraction; $\tau_0$ is a dynamical timescale of the halos at the present day, set to be $\tau_0\equiv 1/(10 H_0)$; and $\kappa$ is fixed to be ${3/2}$ so that $\tau_0/(1+z)^{3/2}$ is roughly the dynamical timescale at redshift $z$. The quantity $X$ is defined to be $X\equiv M_{\rm vir}/M_{\rm c}$, where $M_{\rm c}$ is a characteristic mass and $\mathcal{R}$ is a positive number that is smaller than $1$.
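Before moving on to the model predictions, the stellar-mass bookkeeping of Eqs. \[eq:SFR\]-\[eq:merger\] for our fiducial EM can be summarized in a short sketch. This is only an illustration of the equations above: `median_sfr` stands for the median SFR read off the SFH maps of [@2013ApJ...770..115Y] and is a hypothetical callable, times are assumed to be in yr and SFRs in ${\rm M}_\odot\,{\rm yr}^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA     = 0.3    # lognormal scatter of the central SFR (Eq. eq:SFR)
N_SUBSTEP = 5      # substeps per snapshot interval
F_MERGER  = 0.13   # fraction of disrupted-satellite mass added to the central

def delta_mstar_central(t1, t2, median_sfr):
    """Stellar mass formed by a central between snapshots t1 and t2
    (Eqs. eq:SFR and eq:deltam); `median_sfr(t)` is a hypothetical callable."""
    dt, dm = (t2 - t1) / N_SUBSTEP, 0.0
    for n in range(N_SUBSTEP):
        t = t1 + (n + 0.5) * dt
        log_sfr = np.log10(median_sfr(t)) + SIGMA * rng.standard_normal()
        dm += 10.0**log_sfr * dt          # SFR assumed constant over each substep
    return dm

def delta_mstar_satellite(t1, t2, sfr_at_accretion, t_accretion, tau_sat):
    """Satellite growth over [t1, t2] with the exponential tau model
    (Eqs. eq:sfrsat and eq:deltam_sat)."""
    return sfr_at_accretion * np.exp(-(t2 - t_accretion) / tau_sat) * (t2 - t1)

def merged_mass_from_satellites(disrupted_mstars):
    """Mass transferred to the central by disrupted satellites (Eq. eq:merger)."""
    return F_MERGER * np.sum(disrupted_mstars)
```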
For the star formation rate of satellite galaxies, the related formula is already provided in Eq. \[eq:sfrsat\]. ![The upper left, lower left and right panels show the galaxy SMFs for central, satellite and all galaxies, respectively. In each panel, the red filled circles with error bars are the galaxy stellar mass function of SDSS DR7 obtained by [@2012ApJ...752...41Y]. The cyan circles with error bars are our fiducial EM results based on L500 simulation. The blue curves are the simular results but based on SFH model of [@2014MNRAS.439.1294L]. The error bars of our EM are calculated using 500 bootstrap re-samplings.[]{data-label="fig:smf"}](fig2.eps){width="50.00000%"} ![image](fig3.eps){width="18.0cm"} The stellar mass properties of galaxies {#sec_result1} ======================================= In order to check the performance of our EM for galaxy formation, we check the stellar mass function (SMF) and the two point correlation function (2PCF) of galaxies, and compare them to observational measurements. The related observational measurements are the SMFs at different redshifts (@2012ApJ...752...41Y; @2008ApJ...675..234P, hereafter PG08; @2005ApJ...619L.131D, hereafter Drory05), the CSMFs at low redshift (@2012ApJ...752...41Y) and the 2PCFs for galaxies in different stellar mass bins. SMFs of galaxies at different redshifts --------------------------------------- The first set of observational measurements are the stellar mass functions of galaxies at redshift $z=0.0$ which are shown in Fig. \[fig:smf\] for all (right panel), central (upper-left panel) and satellite (low-left panel) galaxies, respectively. The red circles with error-bars indicate the observational data obtained from SDSS DR7 by [@2012ApJ...752...41Y]. Cyan circles with error bars are the results of our model applied the halo merger trees of the L500 simulation. Meanwhile, blue curves are obtained using the Lu14 SFH model on the same trees. From the upper-left panel of Fig. \[fig:smf\], it is clear that for central galaxies the results of our model show an excellent agreement with observational data within a large stellar mass range ($\log {M_{\ast}}\sim 8.1-11.0$). However, in high mass range ($\log{M_{\ast}}\ga 11.0$), we somewhat underestimate the stellar mass function. This discrepancy is probably caused by the fact that in our model, we used the median SFH to grow galaxies in dark matter halos. However, in reality scatter of SFHs of high mass central galaxies may be larger and depend on their large scale environment. In addition, in our model we did not take into account the major mergers of galaxies, where only $f_{\rm merger}=0.13$ portion of stripped satellite galaxies can be accreted to the central galaxies. For the SFH models of Lu14, the results are very similar with our fiducial ones. For the satellite galaxies, as shown in the lower-left panel of Fig. \[fig:smf\], our fiducial EM reproduces the overall SMFs quite well. However, a slight deviation (over prediction) is seen at middle mass range ($\log {M_{\ast}}\sim 10.4-10.9$). In these satellite galaxies either the SFH modelled by Eq. \[eq:sfrsat\] is somewhat too strong, or the stripping and disruption of satellite modelled by Eq. \[eq:disrupt\] is not efficient enough. As for Lu14 model, it does not match that well with the SDSS observations, especially in the low mass range ($\log{M_{\ast}}\sim 8.0-9.5$). And in high mass range($\log{M_{\ast}}\sim 11.0-11.5$), it over predicts the mass function. 
Nevertheless, as Lu14 model itself is intended to reproduce the much steeper faint end slope of the luminosity function, especially for satellite galaxies, such differences are expected. The right panel of Fig. \[fig:smf\] shows the SMF of all galaxies which include central galaxies and satellite galaxies. The results of our fiducial EM in general agree with the observational data, with slight discrepancies at the high mass range ($\log {M_{\ast}}\ga 11.0$) mainly contributed by centrals, and at middle mass range ($\log{M_{\ast}}\sim 10.4-10.9$) mainly contributed by satellites. The Lu14 model show a larger discrepancy at low mass range($\log {M_{\ast}}\sim 8.0-9.5$) which is caused by the satellite components. Next, we check the stellar mass functions of galaxies at higher redshifts. Shown in Fig. \[fig:highzsmf\] are SMFs of galaxies at different redshift bins as indicated in each panel. In these higher redshift bins, in order to mimics the typical error in the stellar mass estimation in observations, we add logarithmic scatters to the stellar masses of galaxies as $\sigma_c(z) = {\rm max} [0.173, 0.2 z]$ (see @2012ApJ...752...41Y for more detail). The yellow filled circles with error-bars are results obtained by [@2005ApJ...619L.131D], in which they have combined the data from FORS Deep and from the GOODS/CDFS Fields. The cyan circles with error bars are our EM results based on L500 simulation, while blue curves are the results of Lu14 model based on L500 simulation. As shown in Fig. \[fig:highzsmf\], in both low and high redshift bins $z<1.0$ and $z>2.0$, the SMFs from our model agree quite well with the observational results. However in the redshift range $1.0<z<2.0$, our model over predicts the SMFs. As seen in the lower-left panel of Fig. \[fig:smf\], this discrepancies might be due to some over prediction of satellite galaxy counts. In comparison, we also show results based on Lu14 model, which present even higher SMFs within the redshift range $1.0<z<2.0$. ![image](fig4.eps){width="18.0cm"} ![image](fig5.eps){width="18.0cm"} CSMFs of galaxies at $z=0$ -------------------------- The conditional stellar mass function (CSMF) $\phi({M_{\ast}}|{M_{\rm h}})$, which describes the average number of galaxies as a function of galaxy stellar mass ${M_{\ast}}$ that can be formed within halos of mass ${M_{\rm h}}$, is an important measure that can be used to constrain galaxy formation models. As carried out in [@2010ApJ...712..734L] using the CSMFs of satellite galaxies, classical semi-analytical models at that time typically over predicted the satellite components by a factor of two which indicates that either less (or smaller) satellites can be formed, or more satellite galaxies need to be disrupted. Here we compare our model predictions with observational data in Fig. \[fig:csmf\_cen\] and Fig. \[fig:csmf\_sat\] for central and satellite galaxies separately. Based on the SDSS DR7 galaxy group catalog, [@2012ApJ...752...41Y] obtained the CSMFs of central galaxy and satellite galaxies, which are shown as the red filled circles with error-bars in Fig. \[fig:csmf\_cen\] and Fig. \[fig:csmf\_sat\], respectively. The CSMFs from our model are shown as cyan solid curves. Blue curves are the CSMFs obtained from galaxy catalogs constructed using Lu14 model. As shown in Fig. \[fig:csmf\_cen\], the central galaxy CSMFs of our model and Lu14 model are very similar. 
Both of them agree well with the observations in the halo mass range $12.0 \le \log {M_{\rm h}}< 13.8$ but are slightly underestimated in the range $13.8 \le \log {M_{\rm h}}< 15.0$. As shown in Fig. \[fig:csmf\_sat\] for satellite galaxies, the CSMFs of our model agree well with the observations in general. There are small deviations in the halo mass ranges $12.0\le\log{M_{\rm h}}<12.3$, $12.3 \le \log {M_{\rm h}}< 12.6$ and $12.6 \le \log {M_{\rm h}}< 12.9$. In these ranges, our model overestimates the CSMFs at $9.5 \le \log {M_{\ast}}< 10.5$. Thus the overpredicted satellite galaxies shown in Fig. \[fig:smf\] are mainly in these Milky Way-sized and group-sized halos. In the Lu14 model, by contrast, as seen for the satellite galaxy stellar mass function shown in Fig. \[fig:smf\], the CSMFs in halos of different masses all show an upturn at the low-mass end. ![Projected 2PCFs of galaxies in different stellar mass bins as indicated in each panel. Red filled circles with error bars are the 2PCFs of SDSS DR7 obtained by [@2012ApJ...752...41Y] and cyan curves are our EM results.[]{data-label="fig:2pcf"}](fig6.eps){width="9.0cm"} 2PCFs of galaxies ----------------- The two point correlation function, which measures the excess of galaxy pairs as a function of distance, is a widely used quantity to describe the clustering properties of galaxies. In terms of galaxy formation, it can be used to constrain the HOD of galaxies (@1998ApJ...494....1J) and the CLF of galaxies (@2003MNRAS.339.1057Y). Here we compare the model predictions of the 2PCFs in our galaxy catalogs to observations. Fig. \[fig:2pcf\] shows the projected 2PCFs of galaxies in different stellar mass bins. Our model predictions are shown as the solid curves and the observational data obtained by [@2012ApJ...752...41Y] from SDSS DR7 are shown as the filled circles with error bars. Our model predictions are overall a good match to the observations in the stellar mass range $9.0<\log{M_{\ast}}< 11.0$. However, in the most massive stellar mass bin ($11.0 < \log {M_{\ast}}< 11.5$), our model results are higher than the observations for $r_{\rm p}\la 1h^{-1}\rm Mpc$. The overly strong clustering at $r_{\rm p}<1{\>h^{-1}{\rm {Mpc}}}$ for these high-mass objects is mainly caused by the fact that, owing to the underprediction of central galaxies, the satellite fraction in this mass bin is overpredicted (see Fig. \[fig:smf\]). ![Luminosity functions of galaxies in the $u, g, r, i, z$ bands at $z=0.1$. The solid curve in each panel is the corresponding best-fit Schechter-form LF obtained by [@2003ApJ...592..819B] from SDSS DR1. []{data-label="fig:all_lf"}](fig7.eps){width="9.0cm"} ![Luminosity functions of central, satellite and all galaxies in the $r$ band in the local universe. Here results are shown for observational measurements (red dots) and our fiducial model predictions (cyan dots), respectively. []{data-label="fig:lf_yang"}](fig8.eps){width="9.0cm"} ![image](fig9.eps){width="18.0cm"} ![image](fig10.eps){width="18.0cm"} The luminosity and gas properties of galaxies {#sec_result2} ============================================= Apart from the stellar masses of galaxies, we now turn to their luminosity and gas components. Luminosities of galaxies in different bands ------------------------------------------- As detailed in section \[sec\_sfr\], starting from the halo merger histories derived from the L500 simulation, we model galaxies through an estimate of their stellar mass and SFR as a function of time.
We use this information to predict the photometric properties of our model galaxies with the stellar population synthesis model of [@2003MNRAS.344.1000B], adopting a Salpeter IMF (@1955ApJ...121..161S). Since our model does not include the gas component of galaxies, we cannot directly trace the chemical evolution of the stellar population. To circumvent this problem, we follow the metallicity-stellar mass relation used in Lu14, derived from observations of galaxies. We adopt the mean relation based on the data of [@2005MNRAS.362...41G], which can roughly be described as $$\log_{10} Z = \log_{10} Z_{\odot} + \frac{1}{\pi} \arctan \left[\frac{\log_{10}(M_{\star}/10^{10}M_{\odot})}{0.4}\right] - 0.3 \,.$$ This observational relation extends down to a stellar mass of $10^9M_{\odot}$ and has a scatter of $0.2\,{\rm dex}$ at the massive end and of $0.5\,{\rm dex}$ at the low mass end. Using the stellar population synthesis model, we can obtain galaxy luminosities in different bands. We show in Fig. \[fig:all\_lf\] the luminosity functions of all galaxies in the five SDSS bands ($u, g, r, i, z$) at $z=0.1$. For comparison, we also show in each panel the corresponding best-fit Schechter-form LF obtained by [@2003ApJ...592..819B] from SDSS DR1. The observational measurements and the corresponding fits are roughly limited to absolute magnitudes brighter than ($-16, -16.5, -17, -17.5, -18$) in the ($u, g, r, i, z$) bands, respectively. Within these magnitude limits, our model predictions agree with the observational data fairly well, with very slight underpredictions at the bright ends. Only in the $u$ band do we see a prominent deficit of galaxies at ${\>^{0.1}{\rm M}_u-5\log h}\sim -16.0$. This behavior indicates that the stellar compositions as a function of time derived with our model are on average accurate. In addition to the LFs of the full galaxy population, we can distinguish the contributions from centrals and satellites. Fig. \[fig:lf\_yang\] shows the $r$ band luminosity functions of all (right panel), central (upper-left panel) and satellite (lower-left panel) galaxies. Our fiducial model predictions are shown as the cyan dots with error bars obtained from 500 bootstrap re-samplings. Red points with error bars are those obtained by [@2009ApJ...695..900Y], updated to SDSS DR7. Similar to Fig. \[fig:smf\], our model underestimates the central galaxy luminosity function at the high-luminosity end ($10.5 \la \log L \la 11.0$) and overestimates the satellite galaxy luminosity function in the range $10.0\la \log L \la 10.5$. Analogously to the CSMFs, the conditional luminosity functions (CLFs) describe, as a function of luminosity $L$, the average number of galaxies that reside in dark matter halos of a given mass $M_{\rm h}$. In Fig. \[fig:clf\_cen\] the CLFs obtained from our mock catalogs are compared to the observational measurements obtained by [@2009ApJ...695..900Y] (also updated to SDSS DR7). As one might expect, the performance for the central galaxy CLFs is quite similar to that found for the CSMFs in Fig. \[fig:csmf\_cen\]. The central galaxy CLFs of our model agree well with the observational results in the $12.0 \le \log {M_{\rm h}}<13.5$ halo mass range, while there is still some discrepancy for $13.5\le\log{M_{\rm h}}<15.0$. As for the satellite galaxies shown in Fig. \[fig:clf\_sat\], the situation is somewhat different from that of the CSMFs.
Our model matches well with observations in $12.9 \le \log {M_{\rm h}}< 13.8$, while underestimate the number of satellite galaxies at the low luminosity end in high mass halos $13.8\le\log {M_{\rm h}}< 15.0$. These discrepancies are highly interesting as they differ from the one we found for the CSMFs (Fig \[fig:csmf\_sat\]), as it indicates that the colors of these galaxies are not entirely properly modelled. ![The HI mass functions: green dots are our fiducial model predictions, while cyan dots are the model predictions that taken into account the starburst. The black solid line shows the best fit observational results obtained by [@2005MNRAS.359L..30Z] with dashed lines indicate its $\pm1\sigma$ scatter. Magenta curve is the observational fitting formula obtained by [@2010ApJ...723.1359M].[]{data-label="fig:himf"}](fig11.eps){width="9.0cm"} ![The HI-to-stellar mass ratios as a function of galaxy stellar mass. Here red points are data from GASS (@2013MNRAS.436...34C) survey. Red curves are the median and 68% confidence range of the ratio in the GASS sample. Green and cyan curves represent the median and 68% confidence range of our fiducial and star-burst model predictions, respectively. []{data-label="fig:hi_frac"}](fig12.eps){width="9.0cm"} [HI]{} masses of galaxies ------------------------- Although our EM is limited to model the star components of galaxies, we can estimate the gas components within the galaxies. Here we focus on the cold gas that are associated with the star formation(@1959ApJ...129..243S). The star formation law most widely implemented in SAM was proposed by [@1998ApJ...498..541K] as follows: $$\begin{aligned} \label{eq:sfr} \Sigma_\mathrm{SFR} & = & (2.5 \pm 0.7) \times 10^{-4} \nonumber \\ & & (\frac{\Sigma_\mathrm{gas}}{1 {\>{\rm M_\odot}}\mathrm{pc}^{-2}})^ {1.4 \pm 0.15}{\>{\rm M_\odot}}\mathrm{yr}^{-1} \mathrm{kpc}^{-2} \, ,\end{aligned}$$ where $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{gas}$ are the surface densities star formation and gas, respectively. In this paper, we use the model proposed in [@2010MNRAS.409..515F] to estimate the cold gas within our galaxies. This method consists in following the build-up of stars and gas within a fixed set of 30 radial “rings”. The radius of each ring is given by the geometric series $$\begin{aligned} \label{eq:ri} r_{i} & = & 0.5 \times 1.2^{i}[h^{-1} \rm{kpc}](i=1,2...30) .\end{aligned}$$ According to [@1998MNRAS.295..319M], the cold gas is distributed exponentially with surface density profile $$\begin{aligned} \label{eq:sigma_gas} \Sigma_\mathrm{gas} (r) & = & \Sigma_\mathrm{gas}^\mathrm{0} \exp (-r/r_\mathrm{d})\,,\end{aligned}$$ where $r_\mathrm{d}$ is the scale length of the galaxy, and $\Sigma_\mathrm{gas}^\mathrm{0}$ is given by $\Sigma_\mathrm{gas}^\mathrm{0} = m_\mathrm{gas} / (2 \pi r_\mathrm{d}^\mathrm{2})$. With the above ingredients, we are able to predict the total amount of cold gas associated to each galaxy. However, observationally, we only have a relatively good estimate of HI mass in the local universe. Here we calculate HI masses associated to galaxies by assuming a constant ${\rm H}_{2}/{\rm HI}$ ratio of 0.4 and a hydrogen mass fraction $X=0.74$ (@2011MNRAS.416.1566L; @2004MNRAS.351L..44B; @2010MNRAS.406...43P). Fig. \[fig:himf\] shows the HI mass function of galaxies in the local universe obtained from our mock galaxy catalog (green dots). For comparison, we also show in Fig. 
\[fig:himf\], using the black curve, the fitting formula for the HI mass function obtained by [@2005MNRAS.359L..30Z] from HIPASS: $$\begin{aligned} \label{eq:hi} \Theta(M_\mathrm{HI})dM_\mathrm{HI} = \left(\frac{M_\mathrm{HI}}{M^{\ast}_\mathrm{HI}}\right)^{\alpha} \exp \left(-\frac{M_\mathrm{HI}}{M^{\ast}_\mathrm{HI}}\right) d \left(\frac{M_\mathrm{HI}}{M^{\ast}_\mathrm{HI}}\right) \, ,\end{aligned}$$ where $\alpha=-1.37 \pm 0.03$ and $\log (M^{\ast}_\mathrm{HI}/{\>{\rm M_\odot}}) = 9.80 \pm 0.03\, h^{-2}_{75}$. Black dashed lines indicate the $\pm1\sigma$ scatter. An additional observational HI mass function was obtained by [@2010ApJ...723.1359M] using the $1/V_{\rm max}$ method (magenta curve). Our model shows only fair agreement with these observational data: it underpredicts the HI mass function at $\log {M_{\rm HI}}\la 9.6$ and overpredicts it at $\log {M_{\rm HI}}\ga 10.5$. These discrepancies may be caused by several factors. The first may be, of course, the uncertainties in the SFR-to-cold gas mass ratios. In addition, since the SFRs in low-mass halos have a much larger scatter than the one we adopt here (see Fig. 1 in [@2013ApJ...770..115Y]), adopting a larger scatter may help to remove the HI mass function deficiency at the low-mass end. At the massive end of the HI mass function, the difference may be connected with starburst galaxies (those with high SFR). In reality, however, a starburst is not necessarily associated with the largest cold gas reservoir. Indeed, [@2014ApJ...789L..16L] checked the morphologies of starburst galaxies (with SFRs 5 times higher than the median for the given stellar mass) and found that more than half of them are associated with gas-rich major mergers. To partly take this into account, we adopt the collisional starburst model proposed by [@2001MNRAS.320..504S] and used in many SAMs (@2008MNRAS.391..481S; @2011MNRAS.413..101G). During the starburst, the stellar mass added to the central galaxy is $$\begin{aligned} \label{eq:burst} \delta m_\mathrm{starburst} & = & (m_\mathrm{gas,\;sat}+m_\mathrm{gas,\;cen})\nonumber \\ & & e_\mathrm{burst} \left (\frac{m_\mathrm{sat}}{m_\mathrm{cen}}\right)^\mathrm{\gamma_{burst}} \, ,\end{aligned}$$ where $m_\mathrm{gas,\;cen}$ ($m_\mathrm{gas,\;sat}$) is the cold gas mass of the central (satellite) galaxy, $m_\mathrm{cen}$ ($m_\mathrm{sat}$) is the sum of the stellar and cold gas masses of the central (satellite) galaxy, $e_\mathrm{burst}= 0.55$, and $\gamma_\mathrm{burst}=0.69$. The values of $e_\mathrm{burst}$ and $\gamma_\mathrm{burst}$ are determined from isolated galaxy merger simulations performed by [@2008MNRAS.384..386C]. Within our merger trees, we identify these starburst galaxies and swap their SFRs with the highest ones found in halos of similar mass. The cold gas of these galaxies is updated using Eq. \[eq:burst\]. We show in Fig. \[fig:himf\], using cyan dots, how this starburst implementation successfully corrects the overestimation of the HI mass function at the massive end. Apart from the HI mass functions, we also compare the HI-to-stellar mass ratios of galaxies. Fig. \[fig:hi\_frac\] illustrates the HI-to-stellar mass ratio $\log[M_{\rm HI}/M_{\ast}]$ as a function of galaxy stellar mass. Red points are from the GASS survey (@2013MNRAS.436...34C), the red solid curve represents the median value, and the red dashed curves indicate the $16^{\rm th}$ and $84^{\rm th}$ percentiles of $\log[M_{\rm HI}/M_{\ast}]$.
The green solid and dashed curves represent the median and the $16^{\rm th}$ and $84^{\rm th}$ percentiles of our fiducial model prediction based on the L500 simulation, while the cyan curves are obtained from the starburst variation of the model. Both versions of our model reproduce the average trend of the HI-to-stellar mass ratio as a function of stellar mass quite well, but the scatter of the model predictions is smaller than the observed one at low masses. We think that this may be caused by the relation between star formation rate and cold gas adopted in our model. Summary {#sec_dis} ======= Based on the star formation histories of galaxies in halos of different masses derived by [@2013ApJ...770..115Y], we have developed an empirical model to study galaxy formation and evolution. Compared to traditional SAMs, this model has few free parameters, each of which can be associated with observational data. Applying this model to merger trees derived from $N$-body simulations, we predict several galaxy properties that agree well with the observational data. Our main results can be summarized as follows. 1. At redshift $z=0$, the SMF of all galaxies agrees well with the observations within $8.0<\log{M_{\ast}}<11.3$, but our estimate is slightly low at the high stellar-mass end ($11.3 < \log {M_{\ast}}< 12.0$). 2. Our SMFs generally show fair agreement with the observational data at higher redshifts, up to $z\sim4$. In the redshift range $1.0<z<2.0$, however, the SMFs at the low-mass end are somewhat overestimated. 3. At redshift $z=0$, the CSMFs of central galaxies agree well with the observations in the $12.0\le\log{M_{\rm h}}<13.8$ halo mass range and are somewhat shifted to lower masses in the range $13.8 \le \log{M_{\rm h}}< 15.0$. Meanwhile, the CSMFs of satellite galaxies agree quite well with the observations. 4. The projected 2PCFs in different stellar mass bins calculated from our fiducial galaxy catalog match the observations well. Only in the most massive stellar mass bin is the correlation overpredicted at small scales. 5. From our model we derive LFs in the $^{0.1}u$, $^{0.1}g$, $^{0.1}r$, $^{0.1}i$ and $^{0.1}z$ bands. They prove to be roughly consistent with the SDSS observational results obtained by [@2003ApJ...592..819B]. 6. The central galaxy CLFs of our model agree well with the observational results in the halo mass range $12.0\le\log{M_{\rm h}}< 13.5$, quite similar to the CSMFs. However, the satellite galaxy CLFs are somewhat underestimated at the faint end in halos with mass $12.9 \le \log {M_{\rm h}}< 13.8$. 7. Our predicted HI mass function agrees with the observational data at roughly the $\pm1\sigma$ level for $\log {M_{\rm HI}}\ga 9.6$, and is somewhat underestimated at lower masses. Our model predicts roughly consistent, although not perfect, stellar mass, luminosity and HI mass components of galaxies. Such a method is a potential tool for studying galaxy formation and evolution, as an alternative to SAMs or abundance-matching methods. The galaxy and gas catalogs constructed here can be used to build mock redshift surveys for future deep surveys. This work is supported by the 973 Program (No. 2015CB857002), the National Science Foundation of China (grant Nos. 11203054, 11128306, 11121062, 11233005, 11073017, 11421303), NCET-11-0879, the Strategic Priority Research Program “The Emergence of Cosmological Structures" of the Chinese Academy of Sciences, Grant No. XDB09000000, and the Shanghai Committee of Science and Technology, China (grant No. 12ZR1452800).
SJL thanks Ming Li for his help in dealing with the simulation data, Ting Xiao for her useful discussion concerning HI gas and Jun Yin for her help in stellar population synthesis modeling. A computing facility award on the PI cluster at Shanghai Jiao Tong University is acknowledged. This work is also supported by the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Shanghai Astronomical Observatory.
--- abstract: 'We present an effective model where the inflaton is a relaxion that scans the Higgs mass and sets it at the weak scale. The dynamics consist of a long epoch in which inflation is due to the shallow slope of the potential, followed by a few number of e-folds where slow-roll is maintained thanks to dissipation via non-perturbative gauge-boson production. The same gauge bosons give rise to a strong electric field that triggers the production of electron-positron pairs via the Schwinger mechanism. The subsequent thermalization of these particles provides a novel mechanism of reheating. The relaxation of the Higgs mass occurs after reheating, when the inflaton/relaxion stops on a local minimum of the potential. We argue that this scenario may evade phenomenological and astrophysical bounds while allowing for the cutoff of the effective model to be close to the Planck scale. This framework provides an intriguing connection between inflation and the hierarchy problem.' author: - Walter Tangarife - Kohsaku Tobioka - Lorenzo Ubaldi - Tomer Volansky bibliography: - 'Relax.bib' title: Relaxed Inflation --- =1 Introduction ============ The mass of the Higgs boson, $m_h$, is sixteen orders of magnitude smaller than the Planck mass. This poses a puzzle, which goes under the name of the naturalness problem. In the Standard Model (SM) of particle physics, we expect large quantum corrections that would raise $m_h$ roughly up to the Planck scale. One way to avoid such corrections is to impose additional symmetries to protect $m_h$, and keep it naturally small. Supersymmetry [@Martin:1997ns] is the most studied extension in this direction and, like most other solutions, predicts the presence of new physics at around the TeV scale that can potentially be accessible at the Large Hadron Collider. Another direction in addressing the problem of naturalness has been put forward in Ref. [@Graham:2015cka]. The smallness of $m_h$ could result from the cosmological evolution of another scalar field, the relaxion, that couples to the Higgs, scans its mass and eventually sets it to the observed value. This solution is based on dynamics rather than symmetry [^1], and provides an intriguing connection between naturalness and cosmology. The model is described by an effective Lagrangian valid up to a cutoff scale $\Lambda$, and the success in addressing the small Higgs mass is measured by how high $\Lambda$ is compared to $m_h$, once the constraints from the dynamics are taken into account. In the original proposal, the highest $\Lambda$ is of order $10^8$ GeV, and can be achieved in a scenario where the relaxation dynamics take place during inflation. Various features of this class of models have been explored in Refs. [@Espinosa:2015eda; @Hardy:2015laa; @Jaeckel:2015txa; @Gupta:2015uea; @Batell:2015fma; @Matsedonskyi:2015xta; @Marzola:2015dia; @Choi:2015fiu; @Kaplan:2015fuy; @DiChiara:2015euo; @Ibanez:2015fcv; @Hebecker:2015zss; @Fonseca:2016eoo; @Fowlie:2016jlx; @Evans:2016htp; @Huang:2016dhp; @Kobayashi:2016bue; @Hook:2016mqo; @Higaki:2016cqb; @Choi:2016luu; @Flacke:2016szy; @McAllister:2016vzi; @Choi:2016kke; @Lalak:2016mbv; @You:2017kah; @Evans:2017bjs]. In this letter, we take the idea of Ref. [@Graham:2015cka] a step further by promoting the relaxion to an inflaton. The advantages of doing so are that (i) the model is more minimal, as it does not have to rely on an unspecified inflaton sector, and (ii) it evades numerous constraints, allowing the cutoff to lie close to the Planck scale. 
In the rest of the paper we describe the model, the dynamics of inflation, a novel reheating mechanism, and the relaxation of the electroweak (EW) scale, which happens after reheating. The interested reader can find more details in a longer companion paper [@longpaper]. The model ========= We consider the effective Lagrangian $$\begin{aligned} &\mathcal{L} = -\frac{1}{2} \partial_\mu \phi \partial^\mu \phi -\frac{1}{4} F_{\mu\nu}F^{\mu\nu} - c_\gamma \frac{\phi}{4f} F_{\mu\nu} \tilde F^{\mu\nu} \nonumber \\ &\quad\quad - (g_h m \phi - \Lambda^2) {\mathcal{H}}^\dagger {\mathcal{H}}- \lambda ({\mathcal{H}}^\dagger {\mathcal{H}})^2 \nonumber \\ &\quad\quad - V_{\rm roll} (\phi) - V_{\rm wig}(\phi) - V_0 \, , \label{eq:Lagrangian} \\ & V_{\rm roll} (\phi) = m \Lambda^2 \phi \, , \label{Vrolldef} \quad V_{\rm wig}(\phi) = \Lambda_{\rm wig}^4 \cos \frac{\phi}{f} \, , \end{aligned}$$ defined in a Friedmann-Robertson-Walker (FRW) metric, $ds^2 = - dt^2 + a^2(t) d\vec x^2$. Here, $\phi$ is the relaxion/inflaton, ${\mathcal{H}}$ the Higgs doublet, $F_{\mu\nu}$ the field strength of an Abelian gauge field, and $\tilde F_{\mu\nu}$ its dual. $f$ is the scale of spontaneous breaking of a global $U(1)$, of which $\phi$ is the Goldstone boson. $g_h$ is a dimensionless coupling of order one, while $c_{\gamma}$ is model-dependent and can span a large range of values. $\Lambda$ is the bare Higgs mass and the cutoff of the effective theory. The relaxion potential has three terms: $V(\phi) =V_{\rm roll} (\phi) + V_{\rm wig} (\phi) +V_0$. The first is responsible for the rolling, and is linear in $\phi$ (we neglect higher powers, which would come with correspondingly higher powers of the small mass parameter $m$). The second is responsible for the periodic potential (“wiggles"), which grows proportionally to the Higgs vacuum expectation value (VEV), $v$, as $\Lambda_{\rm wig}^4 \sim (yv)^n M^{4-n}$. Here, $y$ is a Yukawa coupling and $M$ is a mass scale smaller than $4\pi v$. Note that for $n$ odd the wiggles are present only when ${\mathcal{H}}$ has a nonzero VEV, while for $n$ even they are present also in the unbroken EW phase [@Espinosa:2015eda; @Gupta:2015uea]. In what follows, we concentrate, for simplicity, on the QCD-like case, $n=1$. The third term, $V_0$, is a constant that we choose such that $V(\phi)=0$ at the local minimum where we obtain the correct EW scale, $|\langle {\mathcal{H}} \rangle| \equiv v$, which is reached at $\phi_{\rm EW} \equiv (\Lambda^2 - m_W^2)/(g_h m)$. One finds $$\label{V0def} V_0 = - m \Lambda^2 \phi_{\rm EW} - \Lambda_{\rm wig}^4 \cos \frac{\phi_{\rm EW}}{f} \, .$$ Choosing this $V_0$ corresponds to tuning the cosmological constant. This is crucial, as it determines the dynamics of the field and ensures the exit of inflation before the relaxion settles into the EW vacuum. An important ingredient is that the mass parameter $m$, which controls the slope of the rolling potential, is tiny. This is technically natural, since in the limit $m \to 0$ the Lagrangian recovers the discrete shift symmetry $\phi \to \phi + 2\pi f$. The scales in the model have the following hierarchical structure: $m \ll \Lambda_{\rm wig} < 4\pi m_W \ll \Lambda$. The evolution proceeds in two stages. In the first, the relaxion sits at large field values, $\phi > \phi_0 \equiv \Lambda^2/(g_h m)$, where ${\mathcal{H}}$ has no VEV, and consequently there is no periodic potential. With our conventions, $\phi$ moves from right to left. In the first stage, the EOM is $$\label{EOMreg1} 3 H \dot\phi + V'(\phi) = 0 \,$$ to a very good approximation, and the relaxion rolls slowly due to the shallow linear slope. The speed, $|\dot\phi| = V'(\phi)/(3H) = m\Lambda^2/(3H)$, slowly increases as $H$ decreases going down the potential, but stays small enough so that $\langle \vec E \cdot \vec B \rangle$ is negligible at this stage [see Eq.~\eqref{edotb} below].
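As a rough numerical illustration of this first stage, the sketch below estimates the number of slow-roll e-folds from Eq. \eqref{EOMreg1}, assuming the standard Friedmann relation $H^2 \simeq V/(3 M_{\rm Pl}^2)$ (not spelled out above) and using placeholder parameter values in reduced Planck units; none of the numbers are those adopted in the paper.

```python
# Stage-1 e-fold estimate in reduced Planck units (M_Pl = 1). Parameter values
# and the Friedmann relation H^2 = V/(3 M_Pl^2) are illustrative assumptions.
M_PL   = 1.0
m, Lam = 1e-40, 1e-6          # placeholder slope parameter and cutoff
g_h    = 1.0
phi_i  = Lam**2 / (g_h * m)   # start near phi_0, where the Higgs mass^2 crosses zero

# In the unbroken phase (n = 1) only the linear term is present: V = m Lam^2 phi.
# Slow roll, Eq. (EOMreg1): 3 H phidot = -V'  =>  dphi/dN = -M_Pl^2 V'/V = -M_Pl^2/phi
# Integrating gives N = (phi_i^2 - phi_f^2) / (2 M_Pl^2).
phi_f    = 0.5 * phi_i        # arbitrary later field value, for illustration only
N_stage1 = (phi_i**2 - phi_f**2) / (2.0 * M_PL**2)
print(f"phi_i = {phi_i:.1e} M_Pl, e-folds in stage 1 ~ {N_stage1:.1e}")
```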
This regime involves trans-Planckian field excursions, lasts for a very large number of e-folds, $N > 10^{30}$, and continues into the broken EW phase, $\phi < \phi_0$. The gauge-boson production is controlled by the parameter $\xi \equiv c_\gamma \dot\phi/(2 H f)$: for $|\xi| \gtrsim 1$ one helicity of the gauge field is exponentially amplified, and we can use it to compute $$\begin{aligned} \langle \vec E\cdot \vec B \rangle & \simeq 2.4\times 10^{-4} \frac{H^4}{|\xi|^4} e^{2\pi |\xi|} , \label{edotb} \\ \langle \vec E^2 \rangle & \simeq 10^{-4} \frac{H^4}{|\xi|^3} e^{2\pi |\xi|} \, , \quad \langle \vec B^2 \rangle \simeq 10^{-4} \frac{H^4}{|\xi|^5} e^{2\pi |\xi|} . \label{rhog} \end{aligned}$$ Once $|\dot\phi|$, and hence $|\xi|$, grow large enough, we smoothly switch from Eq.~\eqref{EOMreg1} to the EOM $$\label{EOMreg2} V'(\phi) = \frac{c_\gamma}{f} \langle \vec E \cdot \vec B \rangle \, ,$$ where the dissipation is due to gauge-boson production. The solution now is $$\begin{aligned} |\dot\phi| = 2|\xi| \frac{H f}{c_\gamma} \simeq \frac{H f}{\pi c_\gamma} \ln \left[\frac{|\xi|^4}{2.4 \times10^{-4}H^4} \frac{f V'(\phi)}{c_\gamma} \right] \, . \end{aligned}$$ In this regime, $|\xi| \sim 20$ is roughly constant (it varies only logarithmically), and $|\dot\phi|$ decreases with the decreasing $H$. The energy density of the gauge bosons, $\rho_\gamma = \frac{1}{2}\langle \vec E^2 + \vec B^2 \rangle$, is roughly constant, and using Eq.~\eqref{EOMreg2} we have the relation $\rho_\gamma \sim \frac{|\xi|}{c_\gamma}\, f V'(\phi)$. One can show that the slow-roll conditions are now satisfied as long as [@longpaper] $$\label{fupper} \frac{f}{c_\gamma} < \frac{{M_{\rm Pl}}}{|\xi|} \, .$$ When the potential $V(\phi)$ attains a value smaller than $\rho_\gamma$, the energy density is no longer dominated by the inflaton and we exit inflation. The following evolution is still described by Eq.~\eqref{EOMreg2}: the relaxion keeps slowing down and its kinetic energy remains smaller than $\Lambda_{\rm wig}^4$. This implies that as the periodic wiggly potential becomes sufficiently large to balance the linear slope, the field stops. Specifically, this condition reads $$m \Lambda^2 \sim \frac{\Lambda_{\rm wig}^4}{f} \, .$$ This must happen when $\phi = \phi_{\rm EW}$. By taking $m$ very small, we can achieve a very large $\Lambda$, the only bound being $\Lambda \lesssim \left(\dots\, {M_{\rm Pl}}\, m_e^4 \right)^{1/5}$. Here $\alpha = e^2/(4\pi)$. At the beginning of the Schwinger production, the energy density of $e^+ e^-$ is of order $m_e^4$, while that of the dark electric field is $(\kappa e)^{-2}$ larger, with $\kappa$ the kinetic mixing between the dark and the visible photon. As $|\vec E_D|$ keeps growing to its maximum value, it shares its energy with the $e^+ e^-$ pairs by accelerating them classically. At the end of the process we have $\rho_{e^+ e^-} \sim \rho_{\gamma_D}$. This is the energy density available for reheating the visible sector. We can thus achieve a reheat temperature $T_{\rm RH} \sim \left( \frac{|\xi|}{c_{\gamma_D}} \right)^{1/4} \Lambda_{\rm wig}$, safely above BBN. Due to the lack of thermal suppression, the EOM of the relaxion is still described by Eq.~\eqref{EOMreg2} after reheating. Therefore, the continued friction provided by unsuppressed dark photon production crucially slows down the motion of $\phi$ and allows it to settle at the EW vacuum. Given the small values of $\kappa e$ under consideration, the dark photons never reach equilibrium with the visible sector and remain cold (they have very low momentum) throughout the thermal history of the universe. In this way, cosmological bounds on relativistic species are evaded. What we have is a cold dark electric field, whose energy density, $\rho_{\gamma_D}$, redshifts like radiation and remains comparable to that of the visible sector until the time of matter-radiation equality.
After that point the universe enters the matter-dominated era, and $\rho_{\gamma_D}$ eventually becomes a negligible component of the energy density budget. There is one more constraint we need to impose on the model. If the gauge-boson production regime lasts too long, we overproduce curvature perturbations, non-Gaussianities and primordial black holes [@Anber:2009ua; @Barnaby:2011vw; @Linde:2012bt; @Garcia-Bellido:2016dkw]. To comply with the corresponding CMB bounds we require that we enter this regime only in the last five e-folds of inflation. This sets a lower bound on $f/c_{\gamma_D}$, which together with the condition of Eq.~\eqref{fupper} restricts it to the window $0.2 \lesssim |\xi| f/(c_{\gamma_D} {M_{\rm Pl}}) < 1$. The above fixes $f$ to be of order $f \simeq c_\gamma M_{\rm Pl}/|\xi|$. For values of $c_{\gamma_D}$ of order one or larger, $f$ can be close to the Planck scale. This, in turn, allows for a large cutoff $\Lambda$. Summary ======= We have presented a model where the relaxion, coupled to the Higgs and to a dark photon, drives inflation and relaxes the EW scale after reheating. Inflation proceeds in two stages. In the first, which lasts very long, the relaxion slowly rolls down a shallow slope. In the second, which takes place only in the last five e-folds, the slow-roll is maintained thanks to dark photon production, which provides dissipation. The dark photons, kinetically mixed with the SM photons, form a very large dark electric field which produces SM $e^+ e^-$ pairs via the Schwinger effect. The $e^+ e^-$ thermalize the visible sector to a temperature above BBN. After the reheating process, the relaxion keeps rolling and slowing down, due to the continued dark photon dissipation, until it stops on the periodic potential and relaxes the EW scale. The mechanism realizes a low-scale model of inflation (with $H \sim \Lambda^2_{\rm wig}/{M_{\rm Pl}}< m^2_W/{M_{\rm Pl}}$ in the final observable e-folds) that at the same time fully addresses the hierarchy problem of the Standard Model. Additional details are presented in a companion paper [@longpaper]. The associated CMB signatures deserve further detailed studies, as does the novel reheating mechanism. Both will be presented in a future publication. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Tim Cohen, Erik Kuflik, Josh Ruderman, and Yotam Soreq for collaboration at the embryonic stages of this work. We benefited from a multitude of discussions with P. Agrawal, B. Batell, C. Csaki, P. Draper, S. Enomoto, W. Fischler, R. Flauger, P. Fox, R. Harnik, A. Hook, K. Howe, S. Ipek, J. Kearney, H. Kim, G. Marques-Tavares, L. McAllister, M. McCullough, S. Nussinov, S. Paban, E. Pajer, M. Peskin, G. Perez, D. Redigolo, A. Romano, R. Sato, L. Sorbo, and M. Takimoto. This work is supported in part by the I-CORE Program of the Planning Budgeting Committee and the Israel Science Foundation (grant No. 1937/12), by the European Research Council (ERC) under the EU Horizon 2020 Programme (ERC-CoG-2015 - Proposal n. 682676 LDMThExp) and by the German-Israeli Foundation (grant No. I-1283-303.7/2014). The work of LU was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293, and was partially supported by a grant from the Simons Foundation. [^1]: This is true modulo the fact that it relies on an argument of technical naturalness, on which we elaborate further in the next section.
--- abstract: 'The phonon dispersion, density of states, Grüneisen parameters, and the lattice thermal conductivity of single- and multi-layered boron nitride were calculated using first-principles methods. For the bulk [*h*]{}-BN we also report the two-phonon density of states. We also present simple analytical solutions to the acoustic vibrational mode-dependent lattice thermal conductivity. Moreover, computations based on the elaborate Callaway-Klemens and the real space super cell methods are presented to calculate the sample length and temperature dependent lattice thermal conductivity of single- and multi-layered hexagonal boron nitride which shows good agreement with experimental data.' author: - 'Ransell D’Souza' - Sugata Mukherjee title: 'Length dependent lattice thermal conductivity of single & multi layered hexagonal boron nitride: A first-principles study using the Callaway-Klemens & real space super cell methods' --- Introduction ============ Single and multilayered boron nitride are $sp^2$ bonded boron and nitrogen atoms arranged in a hexagonal honeycomb lattice arranged in ABAB stacking in multilayered and bulk materials. In spite of the fact that they are isomorphic to the multilayered graphene and graphite with similar lattice constants, unit cell masses and Van der Waals type bonding between the layers, their phonon properties are quite different. Consequently their physical properties such as the lattice thermal conductivity derived from the phonon dispersion should shed light on the fundamental physics of phonon transport of such two-dimensional (2D) nanomaterials. These nanomaterials in the form of semiconductor multilayers and other superstructures are promising candidates of materials with enhanced thermoelectrical properties and have been a topic of intensive research in recent years[@duana16]. In contrast to a large amount of theoretical and experimental work carried out on electron transport, only few studies on phonon transport have been reported. For example, using density functional theory (DFT) with quantum transport device simulation based on non-equilibrium Green’s function (NEGF), Fiori $et\ al$ [@fiori11] have proposed and investigated 2-D graphene transistors based on lateral heterobarriers. $Ab\ initio$ atomistic simulations on vertical heterobarrier graphene transistors have been analysed [@sciambi11; @mehr12]. Britnell $et\ al$ [@britnell12] have modelled graphene heterostructures devices with atomically thin boron nitride as a vertical transport barrier. Performance of any thermoelectric material is characterised by a dimensionless parameter termed as figure of merit, denoted by $ZT$, which is inversely proportional to total lattice thermal conductivity, including contributions due to electrons and phonons. However, for the materials investigated here, electron contribution is negligible compared to that of phonons owing to a considerable electronic band gap between their conduction and valence bands. Experiments to study the effects of grain-boundaries on the thermal transport properties of graphene have been carried out by a few groups [@xu14; @ma17]. These experiments show that a smaller sample length decreases the thermal conductivity, a necessity for a good thermoelectric material. Graphene has a higher thermal conductivity compared to graphite due to the long mean free path (MFP) of the phonons in the 2D lattices. The MFP can thus be reduced by creating defects in the sample. 
Recent studies by Malekpour [*el al.*]{} [@malekpour16] has shown that vacancies reduces the lattice thermal conductivity in graphene. Lattice thermal conductivity of a material is highly correlated to the thickness of the sample (or number of layers). For example, graphene has a much larger thermal conductivity than bilayer graphene and graphite. [@RDSM17; @hongyang2014]. Recently reported lattice thermal conductivity for In$_2$Se$_3$ exhibits [@zhou16] a strong dependence on the thickness or number of layers, with a value of 4 W/mK for a thickness of 5nm which increases to 60 W/mK for the sample with thickness 35nm. These results suggest that in order to manipulate the lattice thermal conductivity $\kappa_L$, a proper understanding of its dependence on the grain size, temperature and thickness dependence is essential. However, not many experiments on grain size, temperature and thickness have been carried out so far in single and multilayered boron nitride. We believe our present work will motivate experiments in the direction of tuning $\kappa_L$ in such 2D materials. Heat flow in single and multilayered boron nitride (SLBN and MLBN) is of great significance not only for fundamental understanding of such materials in terms of lattice thermal conductivity or thermoelectrics but also for technological applications. Single and MLBN are extremely atomically stable materials and can be easily supported between two leads. Besides, these materials exhibit a comparatively lower $\kappa_L$ in bulk than in single and multi-layered graphene. This makes SLBN and MLBN a good testing ground to study the length and temperature dependence of thermal conductivity. Manipulating the lattice thermal conductivity by varying its temperature and dimensions (through grain size engineering) will shed light on the fundamental understanding of thermoelectricity in such 2D materials and help in designing new novel materials for technological applications. Hexagonal boron nitride (h-BN) is relatively inert as compared to graphene due to its strong, in-plane, ionic bonding of its planar lattice structure and hence is a favourable substrate dielectric to improve graphene based devices [@dean10]. Although h-BN has appealing thermal properties, most studies, both experimentally and theoretically, are confined to single and multi-layered graphene [@chen2011; @chen2012; @balandin08; @ghosh08; @cai10; @jauregui10; @hongyang2014; @nika2009; @RDSM17]. Some experiments on lattice thermal conductivity ($\kappa_L$) have been reported by Jo $et\ al$ [@jo13] for multi-layered boron nitride (MLBN). Also, theoretical studies on thermal conductivity ($\kappa_L$) [@lindsay11] and conductance [@RDSM16] on such materials have been carried out using Tersoff empirical interatomic potential. However, first principle theoretical studies of $\kappa_L$ such as using the Boltzmann transport equations (BTE) for phonons from density functional perturbation theory (DFPT) on SLBN and MLBN are apparently not available. In this paper, we investigate numerically the sample length and temperature dependence of the thermal conductivity ($\kappa_L$) of single and multilayer h-BN by solving the phonon BTE beyond the relaxation time approximation (RTA) using the force constant derived from a real space super cell method, and also by solving the phonon BTE in the RTA using the Callaway-Klemens approach. A long standing puzzle has been to answer which acoustic phonon mode dominates the total lattice thermal conductivity for such 2D materials [@nika11]. 
There have been arguments over whether the contribution of the out-of-plane (ZA) vibrational mode to $\kappa_L$ is the most dominant or the least dominant in comparison with the other acoustic modes. Owing to the selection rules restricting the phase space for phonon-phonon scattering in ideal graphene [@lindsay2010; @lindsay10; @seol10] and boron nitride [@lindsay11], the ZA mode seems to be the most dominant. In sharp contrast, references [@nika2009; @nika09] suggest that, since in the long-wavelength limit ($q \rightarrow 0$) the phonon dispersion of the ZA mode is nearly flat, making the phonon velocities small, and since the corresponding Grüneisen parameters are large, the ZA contribution to $\kappa_L$ should be the smallest among the acoustic modes. Here, using the Callaway-Klemens approach, we examine this discrepancy by means of analytical solutions of the phonon BTE for each of the acoustic modes, using a closed form for the three-phonon scattering rate derived by Roufosse [*et al.*]{} [@roufosse73], as well as an exact numerical solution of the phonon BTE beyond the relaxation time approximation (RTA), in which the phonon lifetimes are expressed in terms of a set of coupled equations and solved iteratively. We also examined the sample-length ($L$) dependence of $\kappa_L$ and found it to be very sensitive to $L$, which may justify the application of multilayered h-BN in thermoelectric devices through manipulation of $\kappa_L$. In the next section we describe the theoretical framework and the first-principles DFT-based methods used to calculate $\kappa_L$. This is followed, in the subsequent sections, by the results obtained using the real-space supercell and Callaway-Klemens methods, and by a summary. Theoretical framework and Method of calculation =============================================== Electronic and phonon bandstructure calculations ------------------------------------------------ First-principles DFT and DFPT calculations were carried out on a hexagonal supercell for the monolayer, bilayer and bulk boron nitride, whereas an orthorhombic supercell was used for the five-layer h-BN sample, using the plane-wave pseudopotential method as implemented in the QUANTUM ESPRESSO code [@giannozzi09]. We used 2 atoms in the unit cell for SLBN, 20 atoms for five-layer BN, and 4 atoms for both bilayer and bulk boron nitride. To prevent spurious interactions between periodic images of the layers, a vacuum spacing of 20 Å was introduced along the direction perpendicular to the layers ($z$ axis), mimicking an infinite BN sheet in the $xy$ plane. For MLBN and bulk [*h*]{}-BN, the van der Waals interaction as prescribed by Grimme [@grimme1] was used between the layers. For the electronic structure calculations, Monkhorst-Pack grids of $16 \times 16 \times 1$ and $16 \times 16 \times 4$ were chosen for SLBN and MLBN, respectively, for the $k$-point sampling. Self-consistent calculations with a 40 Ry kinetic-energy cutoff and a 160 Ry charge-density cutoff were used to solve the Kohn-Sham equations to an accuracy of 10$^{-9}$ Ry in the total energy. We used ultrasoft pseudopotentials to describe the atomic cores, with the exchange-correlation functional treated in the local density approximation [@rrkj90]. The electronic structure and total energy calculations were used to obtain the ground-state geometry before pursuing the phonon calculations.
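For concreteness, a minimal sketch of such a monolayer setup (honeycomb cell plus 20 Å of vacuum along $z$) is given below using the ASE `Atoms` object. The in-plane lattice constant is a nominal literature value and is an assumption here, not the relaxed value obtained in this work.

```python
import numpy as np
from ase import Atoms

# Monolayer h-BN cell with 20 A of vacuum along z, analogous to the SLBN setup above.
a, vacuum = 2.51, 20.0            # a ~ 2.51 A is a nominal value (assumption)
cell = [[a, 0.0, 0.0],
        [-a / 2.0, a * np.sqrt(3.0) / 2.0, 0.0],
        [0.0, 0.0, vacuum]]
slbn = Atoms('BN',
             scaled_positions=[[1.0 / 3.0, 2.0 / 3.0, 0.5],   # B
                               [2.0 / 3.0, 1.0 / 3.0, 0.5]],  # N
             cell=cell, pbc=True)
print(slbn.get_positions())       # Cartesian positions of the two-atom basis
```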
For the phonon band-structure calculations, the $q$-grids used were $6 \times 6 \times 1$ for SLBN, $6 \times 6 \times 2$ for BLBN and bulk h-BN, and $4 \times 4 \times 2$ for 5-layer BN, respectively. Density functional perturbation theory (DFPT) [@dfpt87], as implemented in the plane-wave method [@giannozzi09], was used to calculate the phonon dispersion and the phonon density of states (DOS) along the high-symmetry $q$-point directions. Calculation of the lattice thermal conductivity ----------------------------------------------- The calculation of the lattice thermal conductivity $\kappa_L$ involves the evaluation of the second-order (harmonic) interatomic force constants (IFCs) as well as the third-order (anharmonic) IFCs. We first used a real-space supercell method, which evaluates the third-order IFCs on a real-space grid using DFT [@ShengBTE], whereas the second-order IFCs are obtained from DFPT [@giannozzi09; @dfpt87]. Secondly, using the Callaway-Klemens method [@callaway59; @klemens58], the relaxation times were obtained from the Grüneisen parameters. Finally, the length, thickness and temperature dependence of $\kappa_L$ were studied. ### Real space super cell approach In this method the third-order anharmonic IFCs are calculated from a set of displaced supercell configurations, whose number depends on the size of the system, its symmetry group and the number of nearest-neighbour interactions included. A $4 \times 4 \times 2$ supercell including up to third-nearest-neighbour interactions was used to calculate the anharmonic IFCs for all the structures, generating 128 configurations for single-layer and bulk BN, 156 for bilayer BN (BLBN), and 828 for five-layered BN (5LBN). The third-order anharmonic IFCs are constructed from a set of third-order derivatives of the energy, calculated from these configurations using the plane-wave method [@giannozzi09]. The phonon lifetimes, which are limited by phonon-phonon, isotopic-impurity and boundary scattering [@lindsay11], are calculated from the phonon BTE. The three-phonon scattering rates are incorporated in this method as implemented in the ShengBTE code [@ShengBTE]. Elaborate details on the workflow of the three-phonon scattering rates can be found in reference [@ShengBTE], while Lindsay [@lindsay11] specifically discusses this for bulk h-BN. The thermal conductivity matrix $\kappa_L^{\alpha \beta}$ is given as $$\begin{aligned} \label{kl} \kappa_L^{\alpha \beta}=\frac{1}{k_BT^2\Omega N}\sum_{s}f_0(f_0+1)(\hbar \omega_s)^2v_{s}^{\alpha} \tau_{s}^0 (v_s^{\beta}+\Delta_s^{\beta}).\end{aligned}$$ $\kappa_L^{\alpha \beta}$ is then diagonalized to obtain the scalar lattice thermal conductivity $\kappa_L$ in a preferred direction in the $xy$ plane. In Eq. \[kl\], $\Omega$ is the volume of the unit cell and $N$ denotes the number of $q$-points in the Brillouin-zone sampling. $f_0 = {1 / (e^{\hbar \omega_s/k_B T} - 1)}$ is the Bose-Einstein distribution function, $\tau_s^0$ is the relaxation time for the mode $s$ with phonon frequency $\omega_s$, $v_s$ is the phonon group velocity, and $\Delta_s$ measures how much the associated heat current deviates from the relaxation time approximation.
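As a concrete illustration of Eq. \[kl\] in its RTA limit ($\Delta_s = 0$), the sketch below evaluates the conductivity tensor from mode-resolved frequencies, group velocities and lifetimes. The input arrays are hypothetical and would in practice come from the DFPT/ShengBTE workflow described here; this is not the ShengBTE implementation itself.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J / K

def kappa_rta(omega, vel, tau, T, volume, nq):
    """Minimal evaluation of Eq. (kl) with Delta_s = 0 (RTA).

    omega  : (n_modes,) phonon angular frequencies in rad/s
    vel    : (n_modes, 3) group velocities in m/s
    tau    : (n_modes,) relaxation times in s
    volume : unit-cell volume in m^3; nq : number of q-points sampled
    """
    x  = HBAR * omega / (KB * T)
    f0 = 1.0 / np.expm1(x)                        # Bose-Einstein occupation
    w  = f0 * (f0 + 1.0) * (HBAR * omega)**2      # mode weight in Eq. (kl)
    kappa = np.einsum('s,sa,sb,s->ab', w, vel, vel, tau)
    return kappa / (KB * T**2 * volume * nq)      # 3x3 conductivity tensor, W/(m K)
```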
Mathematically, $\Delta_s$ and $\tau_{\lambda}^0$ are expressed as [@ShengBTE], $$\begin{aligned} \Delta_{\lambda} &=& \frac{1}{N}\sum_{i=+,-}\sum_{\lambda^{'} \lambda^{''}} \Gamma^{i}_{\lambda \lambda^{'} \lambda^{''} } (\xi_{\lambda \lambda^{''}}F_{\lambda^{''}}-\xi_{\lambda \lambda^{'}}F_{\lambda^{'}}) \nonumber \\ &+& \frac{1}{N}\sum_{\lambda^{'}}\Gamma_{\lambda \lambda^{'}}\xi_{\lambda \lambda^{'}}F_{\lambda^{'}} \\ \frac{1}{\tau_{\lambda}^0} &=& \frac{1}{N}(\sum_{\lambda^{'} \lambda^{''}}^{+}\Gamma_{\lambda \lambda^{'} \lambda^{''}}^{+} + \frac{1}{2}\sum_{\lambda^{'} \lambda^{''}}^{-}\Gamma_{\lambda \lambda^{'} \lambda^{''}}^{-} + \sum_{\lambda^{'}}\Gamma_{\lambda \lambda^{'}})\end{aligned}$$ Here $\lambda$($\lambda^{'}$,$\lambda^{''}$) represents the phonon branch index $s$($s^{'}$,$s^{''}$) and wave vector $q$($q^{'}$,$q^{''}$), while $\xi_{\lambda \lambda^{'}}$ and $F_{\lambda}$ are short-hand for $\frac{\omega_{\lambda^{'}}}{\omega_{\lambda}}$ and $\tau_{s}^0 (v_s^{\beta}+\Delta_s^{\beta})$, respectively. The three-phonon scattering rates denoted by $\Gamma^{i}_{\lambda \lambda^{'} \lambda^{''} }$($i = +,-$) and the scattering probabilities due to isotopic disorder denoted by $\Gamma_{\lambda \lambda^{'}}$ have the following expressions, $$\begin{aligned} \hspace{-2em} \Gamma^{\pm}_{\lambda \lambda^{'} \lambda^{''}} &=& \frac{\hbar \pi}{4 \omega_\lambda \omega_{\lambda^{'}}\omega_{\lambda^{''}}} \Big[\substack{f_0(\omega_{\lambda^{'}})-f_0(\omega_{\lambda^{''}}) \label{ae} \\ f_0(\omega_{\lambda^{'}})+f_0(\omega_{\lambda^{''}})+1}\Big] \nonumber \\ &\times& \big|V_{\lambda \lambda^{'} \lambda^{''}}\big|^2\delta(\omega_\lambda \pm \omega_{\lambda^{'}} - \omega_{\lambda^{''}}) \\ \Gamma_{\lambda \lambda^{'}} &=& \frac{\pi \omega^2}{2}\sum_{i}f_s(i)\bigg[1-\frac{M_s(i)}{\overline{M}(i)}\bigg]^2 \nonumber \\ &\times& \big|e^{*}_{\lambda}\cdot e_{\lambda^{'}}\big|^2 \delta (\omega_{\lambda} - \omega_{\lambda^{'}}),\end{aligned}$$ where $V_{\lambda \lambda^{'} \lambda^{''}}$ is the scattering matrix element, expressed in terms of the anharmonic IFCs ($\Phi$), eigenvectors ($e$) and masses ($M$) of the atoms as $$\begin{aligned} V_{\lambda \lambda^{'} \lambda^{''}} = \sum_{i,j,k}\sum_{\alpha \beta \gamma} \frac{\Phi_{ijk}^{\alpha \beta \gamma}e_{\lambda}^{\alpha}e_{\lambda^{'}}^{\beta}e_{\lambda^{''}}^{\gamma}}{\sqrt{M_i M_j M_k}}.\end{aligned}$$ In the above expression, $i,j,k$ run over the atomic indices and $\alpha, \beta, \gamma$ are the Cartesian coordinates. $\overline{M} = \sum_s f_s(i) M_s(i)$ is the average of the masses ($M_s(i)$) of the isotopes $s$ of atom $i$ having a relative frequency $f_s$. $\Gamma^{+(-)}$ represents the absorption (emission) processes. A phonon created in the absorption process carries the combined energy of the two incident phonons, [*i.e.*]{} $\omega_{\lambda} + \omega_{\lambda^{'}} = \omega_{\lambda^{''}}$. Similarly, the emission process corresponds to the energy of an incident phonon being split between two phonons, $\omega_{\lambda} = \omega_{\lambda^{'}} + \omega_{\lambda^{''}}$. Therefore in eq. \[ae\] it is easy to see that the Dirac delta function, $\delta(\omega_\lambda \pm \omega_{\lambda^{'}} - \omega_{\lambda^{''}})$, imposes the conservation of energy in the absorption and emission processes. It should be noted that the relaxation times are calculated in the ShengBTE code using an iterative approach by solving the phonon BTE, starting with the zeroth-order approximation, $\Delta_{\lambda} = 0$, also known as the RTA solution. 
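The iterative scheme just described can be sketched as a simple fixed-point loop. The snippet below is schematic only: `delta_update` stands in for the (expensive) three-phonon and isotope scattering sums, and the block reuses the `kappa_tensor` helper sketched above; none of the names correspond to the actual ShengBTE routines.

```python
import numpy as np

def solve_bte_iteratively(omega, vel, tau0, delta_update, T, Omega, N,
                          tol=1e-5, max_iter=200):
    """Fixed-point iteration for F_lambda = tau0 * (v + Delta), starting from
    the RTA solution Delta = 0, in the spirit of the scheme described above.

    delta_update(F) -> Delta is a placeholder encapsulating the Gamma sums.
    """
    Delta = np.zeros_like(vel)
    kappa_old = None
    for _ in range(max_iter):
        F = tau0[:, None] * (vel + Delta)          # current generalized MFP
        kappa = kappa_tensor(omega, vel, tau0, Delta, T, Omega, N)  # Eq. (kl)
        if kappa_old is not None and np.max(np.abs(kappa - kappa_old)) < tol:
            break                                  # converged to within tol
        kappa_old = kappa
        Delta = delta_update(F)                    # recompute Delta from new F
    return kappa
```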
These iterations continue until two successive values of $\kappa_L$ differ by less than $10^{-5}$ Wm$^{-1}$K$^{-1}$. The interatomic third-order force constants (IFCs) are calculated using a real space supercell approach. The length dependent thermal conductivity is then calculated by taking into account only phonons with a mean free path (MFP) below a certain threshold value. This is done by calculating the cumulative lattice thermal conductivity with respect to the allowed MFP. Furthermore, advanced experimental techniques have recently been proposed to measure the cumulative $\kappa_L$ as a function of the phonon mean free path [@minnich11; @regner13; @johnson13]. In order to compare our calculations to the lengths corresponding to the experimental measurements, we fit the cumulative thermal conductivity in the form [@ShengBTE], $$\begin{aligned} \label{cum-k} \kappa_L(L) = \frac{\kappa_{L_{max}}}{1+\frac{L_0}{L}},\end{aligned}$$ where $L_0$ is a fitting parameter. $\kappa_L$ corresponding to a given length is calculated over a temperature range using Eq. \[cum-k\], and the thermodynamic limit of the thermal conductivity ($\kappa_{L_{max}}$) is the value of $\kappa_L$ as $L \rightarrow \infty$. ### Callaway-Klemens approach (Analytical and numerical solutions) In the Callaway-Klemens [@callaway59; @klemens58] approach, which has been modified by Nika [*et al.*]{} [@nika2009], the expression for the thermal conductivity along the $x$ and $y$ directions for two-dimensional layered materials, according to the relaxation time approximation (RTA) to the BTE and the isotropic approximation to the phonon dispersion, is given by, $$\begin{aligned} \label{k} \kappa &=& {1\over 4\pi k_B T^2 N \delta} \nonumber \\ &\times& {\sum\limits_{s} \int\limits_{q_{min}}^{q_{max}}[\hbar \omega_s(q)]^2 v_s^2(q) \tau_{U,s}(q)\frac{e^{\frac{\hbar \omega_s(q)}{k_B T}}}{[e^{\frac{\hbar \omega_s(q)}{k_B T}}-1]^2} q dq},\end{aligned}$$ where $k_B$ is the Boltzmann constant, $\hbar$ is the reduced Planck constant, $T$ is the absolute temperature, $N$ is the number of layers, $\delta$ is the distance between two consecutive layers, and $\omega_s (q)$ and $v_s(q)$ are the phonon frequency and velocity corresponding to the branch $s$ at phonon wave vector $q$. The wave vectors corresponding to the Debye frequency and the low cut-off frequency are denoted by $q_{max}$ and $q_{min}$, respectively. The method used to calculate the low cut-off frequency will be discussed shortly. $\tau_{U,s}$ is the three-phonon Umklapp scattering relaxation time corresponding to branch $s$ at the wave vector $q$, expressed as, $$\begin{aligned} \label{tau} \tau_{U,s} = \frac{Mv_s^2(q)\omega_{D,s}}{\gamma_s^2(q) k_B T \omega_s(q)^2}.\end{aligned}$$ Here, $M$ is the total mass of the atoms in the unit cell and $\gamma_s(q)$ is the mode and wave vector dependent Grüneisen parameter. The validity of this form of the relaxation time for Umklapp scattering in eq. \[tau\] for 2D and 3D materials was originally proposed by Klemens [*et al.*]{} [@klemens94], where the phonons were treated by a two-dimensional Debye model. This sets up a model for the thermal conductivity in terms of a 2D phonon gas. On the basis of the phonon frequency dependence of the specific heat and mean free path, the form of $\tau_{U,s}$ in eq. \[tau\] is valid for both 2D and 3D. Moreover, the calculations by Shen [*et al.*]{} [@shen14] use the same form to describe the relaxation time of the Umklapp process for graphene and their results, when $\tau_{U,s}$ is multiplied by a factor of 3, are consistent with the paper of Lindsay 
[*et al.*]{} [@lindsay11], which solves the phonon BTE beyond the RTA. Since eq. \[tau\] cannot determine whether the U-processes are forbidden or not, the factor of 3 is added due to the symmetries of graphene, which is explained in detail later. The Grüneisen parameter ($\gamma_s(q)$) and the Debye frequency ($\omega_{D,s}$) corresponding to the branch $s$ are calculated by solving, $$\begin{aligned} \label{wD} \frac{A}{2\pi}\int\limits_0^{\omega_{D,s}}q\Big|\frac{dq}{d\omega}\Big|d\omega = 1,\end{aligned}$$ where $A$ is the area of the unit cell. The acoustic branches for the in-plane modes of SLBN, BLBN, 5LBN and Bulk-[*h*]{}BN are linear, whereas the out-of-plane acoustic mode has a quadratic behavior, and hence for a simplified analytical solution we express the phonon frequencies as $$\begin{aligned} \omega_s(q) &=& v_s q \Rightarrow [s = {\rm LA, TA}] \label{wLATA} \\ &=& \alpha q^2 \Rightarrow [s = {\rm ZA}] \label{wZA}\end{aligned}$$ Substituting these values in Eq. \[wD\], we find that the Debye frequency is given by $$\begin{aligned} \omega_{D,s} &=& 2v_s\sqrt{\frac{\pi}{A}} \Rightarrow [s = {\rm LA, TA}] \\ &=& \frac{4\pi \alpha}{A} \Rightarrow [s = {\rm ZA}]\end{aligned}$$ The mode dependent anharmonic (Grüneisen) parameters were calculated by applying a biaxial strain of $\pm$ 0.5% to each of the structures. Fig. \[gp\] shows that the Grüneisen parameters for the in-plane modes have only a slight deviation from their average value along the $\Gamma$ to K direction. Therefore, assuming a constant value for $\gamma_s$ ($s$=LA,TA), Nika [*et al.*]{} [@nika2009] have derived the following analytical solution for $\kappa$ associated with a particular mode $s$: $$\begin{aligned} \label{LATA} \kappa_s=\frac{M\omega_{D,s} v_s^2}{4\pi T (N\delta) \gamma_s^2}\ [{\rm ln}(e^x-1)+\frac{x}{1-e^x}-x]\Bigg|^\frac{\hbar \omega_{D,s}}{k_B T}_{\frac{\hbar \omega_{min,s}}{k_B T}}\end{aligned}$$ Since there is no ZO$'$ branch in SLBN, the low-bound cut-off frequency cannot be introduced in analogy to that of bulk graphite. One can however avoid the logarithmic divergence by restricting the phonon mean free path to the boundaries of the sheets [@nika09]. This is accomplished by selecting the mode dependent low cut-off frequency ($\omega_{s,min}$) from the condition that the mean free path cannot be greater than the physical length $L$ of the sheet, [*i.e.*]{} $$\begin{aligned} \label{wmin} \omega_{s,min}=\frac{v_s}{\gamma_s}\sqrt{\frac{Mv_s\omega_{D,s}}{k_B T L}}\end{aligned}$$ In the spirit of the in-plane thermal conductivity study, we extend our calculations to find an analytical form for the flexural phonon modes, since the contributions from these branches are vital to the total thermal conductivity. Unlike the case of the in-plane modes, the Grüneisen parameters for the acoustic out-of-plane ZA modes have a strong $q$-dependence. From Fig. \[gp\] it can be seen that the expression $$\begin{aligned} \label{gZA} \gamma_{ZA}=\frac{\beta}{q^2},\end{aligned}$$ is a very good fit to the actual wave vector dependent Grüneisen parameters. Substituting eq. \[gZA\] and eq. \[wZA\] into eq. 
\[k\] and making the transformation $x=\frac{\hbar \omega}{k_B T}$, the analytical form for $\kappa_{ZA}$ is given by $$\begin{aligned} \label{ZA} \kappa_{ZA} &=& \frac{2M\omega_D k_B^3T^2}{\pi N \delta \beta^2 \hbar^3 \alpha} \int\limits_0^{\frac{\hbar \omega_D}{k_B T}}x^4 \frac{e^x}{[e^x-1]^2}dx \nonumber \\ &=& \frac{2M\omega_D k_B^3T^2}{\pi N \delta \beta^2 \hbar^3 \alpha}\; G\Big(\frac{\hbar \omega_D}{k_BT}\Big), \end{aligned}$$ where the function $G(z)$ is expressed as $$\begin{aligned} G(z) &=& \frac{-4\pi^4}{15} + \frac{e^z z^4}{1-e^z} + 4z^3{\rm ln}(1-e^z) \nonumber \\ &+& 12z[z {\rm Li}_2(e^z)- 2{\rm Li}_3(e^z)] + 24{\rm Li}_4(e^z).\end{aligned}$$ Here, the polylogarithm function is defined as ${\rm Li}_n(z)=\sum\limits_{i=1}^{\infty} \frac{z^i}{i^n}$. Results and discussions ======================= Phonon dispersion and density of states --------------------------------------- Accurate calculations of the harmonic second-order IFCs are necessary for a precise description and understanding of the thermal conductivity. Deviations from the expected behavior of the acoustic modes due to numerical artifacts can lead to incorrect results, especially for 2D materials [@jesus16]. The full structural relaxation of SLBN, BLBN, 5LBN and Bulk-[*h*]{}BN yields a lattice constant ($a_0$) of 2.49 Å. The interlayer spacing ($c$) for MLBN is found to be 3.33 Å. The experimentally measured $a_0$ is 2.50 Å [@kern99] and the ratio of the interlayer spacing to the lattice constant ($\frac{c}{a_0}$) is 1.332 [@kern99], which is in excellent agreement with our calculated value of 1.337. The calculated phonon dispersion and phonon density of states are shown in Fig. \[phdos\] for (a) SLBN, (b) BLBN, (c) 5LBN and (d) Bulk-[*h*]{}BN along the high-symmetry $q$-points in the irreducible hexagonal and orthogonal Brillouin zone (BZ), together with some available experimental data for Bulk-[*h*]{}BN [@serrano07]. As usually seen for acoustic modes, the in-plane longitudinal (LA) and transverse (TA) modes show a linear $q$ dependence in the long-wavelength limit while the out-of-plane (ZA) mode shows a quadratic ($q^2$) dependence. This quadratic dependence, which is a typical feature of layered crystals, is due to the rotational symmetries of the out-of-plane phonon modes. For SLBN, there are six modes for each wave vector, three acoustic (LA,TA,ZA) and three optical (LO,TO,ZO). At the $\Gamma$ point the optical LO and TO modes are degenerate. For BLBN, if the two SLBN layers are far apart, effects due to their interlayer coupling can be neglected and the phonon dispersion will be exactly as seen in SLBN. However, when these two SLBN layers come closer, the interlayer coupling removes the two-fold degeneracy, giving rise to in-plane and out-of-plane phase modes. The LA and TA modes are not perturbed much, implying that the main effect of the interlayer interactions is on the ZA modes. This is because the transverse motions of the atoms in the two layers associated with these modes interact strongly with each other. The same reasoning explains why 5LBN has one zero and four raised frequencies at the $\Gamma$ point. In Bulk-[*h*]{}BN, there are four atoms per unit cell and the two atoms in each layer are now inequivalent, therefore doubling each of the acoustic and optical modes. The acoustic modes at the zone boundaries fold back to the zone centre as two rigid layer modes [@tan12], [*viz*]{}, an optically Raman inactive and a Raman active mode. 
The Raman active LA$_2$ and TA$_2$ modes are doubly degenerate at the $\Gamma$ point, having the finite values listed in Table \[pd\]. The layered breathing modes for MLBN are denoted by ZO$^{'}$ for BLBN and Bulk-[*h*]{}BN and ZO$^{'}_{i}$ ($i=1,2,3,4$) for 5LBN. ![\[phdos\] The calculated phonon dispersion (left) and phonon density of states (right) of (a) SLBN, (b) BLBN, (c) 5LBN and (d) Bulk-[*h*]{}BN along with experimental data (orange circles) [@serrano07]. The phonon dispersions were calculated along the high-symmetry points of the 2D Brillouin zone ($q_z = 0$) corresponding to the hexagonal cell for SLBN, BLBN, and Bulk-[*h*]{}BN and the orthorhombic cell for 5LBN. We also plot in (d) the two-phonon DOS for Bulk-[*h*]{}BN as a red dashed line. The cyan, magenta and green curves in (a,b,c,d) are the best linear and quadratic fits to the phonon dispersion referring to the LA, TA and ZA modes, respectively.](layered-phonon-dos-hbn.pdf) The symmetries of the SLBN, BLBN, 5LBN and Bulk-[*h*]{}BN structures at $\Gamma$ can be described using the character table shown in table \[ct\]. Using a standard group theoretical technique (see Appendix), it can be shown for Bulk-[*h*]{}BN and BLBN that the 12 phonon modes are decomposed into the following irreducible representations: 2(A$_{\rm 2u}$ + B$_{\rm 1g}$ + E$_{\rm 2g}$ + E$_{\rm 1u}$) and 2(A$_{\rm 2u}$ + E$_{\rm g}$ + A$_{\rm 1g}$ + E$_{\rm u}$), respectively. Similarly for SLBN, the irreducible representation of the six phonon modes is A$_{\rm 2u}$ + B$_{\rm 1g}$ + E$_{\rm 2g}$ + E$_{\rm 1u}$, and 5LBN has an irreducible representation given by 4(A$_1^{'}$ + E$^{''}$) + 6(A$_2^{''}$ + E$^{'}$). Transitions corresponding to the basis $x,y,z$ ($xy,yz,z^2$, etc.) are Infrared (Raman) active. Those that are neither Infrared nor Raman active are the silent modes. Due to the momentum conservation requirement ($q=0$), the first-order Raman scattering process is limited to phonons at the center of the Brillouin zone. We therefore compare our calculated frequencies at the $\Gamma$ point corresponding to A$_{\rm 2u}$, E$_{\rm 1u}$, A$_{\rm 2}^{''}$, E$^{'}$, and E$_{\rm u}$ to the infrared experimental data and E$_{\rm 2g}$, E$^{''}$, A$_{\rm 1g}$, E$_{\rm g}$, and A$_{\rm 1}^{''}$ to the Raman experimental data, as shown in table \[pd\].

[lcccc]{}
Mode & Expt. (Prev. calculated$^{\rm a}$) $\omega$ (cm$^{-1}$) & Bulk-[*h*]{}BN (Sym.) & BLBN (Sym.) & SLBN (Sym.)\
LA$_2$/TA$_2$ & 51.62$^{\rm b}$ (52.43) & 58.55 (E$_{{\rm 2g}}$) & 25.73 (E$_{{\rm g}}$) & -\
ZO$'$ & Silent (120.98) & 85.01 (B$_{{\rm 1g}}$) & 66.54 (A$_{{\rm 1g}}$) & -\
ZO & 783.16$^{\rm c}$ (746.87) & 784.05 (A$_{{\rm 2u}}$) & 803.01 (A$_{{\rm 2u}}$) & 819.37 (A$_{{\rm 2u}}$)\
ZO$_2$ & Silent (809.78) & 823.17 (B$_{{\rm 1g}}$) & 818.25 (A$_{{\rm 1g}}$) & -\
LO & 1366.30$^{\rm b}$, 1370.33$^{\rm c}$, 1363.88$^{\rm d}$ (1379.20) & 1363.80 (E$_{{\rm 2g}}$) & 1364.45 (E$_{{\rm g}}$) & 1363.88 (E$_{{\rm 2g}}$)\
TO & 1367.10$^{\rm c}$ (1378.4) & 1366.95 (E$_{{\rm 1u}}$) & 1365.66 (E$_{{\rm u}}$) & 1363.88 (E$_{{\rm 1u}}$)\
 & LA/TA (cm$^{-1}$) (P.G. Symmetry) & LO (cm$^{-1}$) (P.G. Symmetry) & TO (cm$^{-1}$) (P.G. Symmetry) & ZO (cm$^{-1}$) (P.G. Symmetry)\
 & 14.60 (E$^{''}$) & 1409.46 (E$^{'}$) & 1405.23 (E$^{'}$) & 817.59 (A$^{'}$)\
 & 31.10 (E$^{'}$) & 1408.91 (E$^{''}$) & 1404.94 (E$^{''}$) & 814.58 (A$^{''}$)\
5LBN & 38.95 (E$^{''}$) & 1408.71 (E$^{'}$) & 1404.81 (E$^{'}$) & 812.58 (A$^{'}$)\
 & 47.43 (E$^{'}$) & 1408.36 (E$^{''}$) & 1404.49 (E$^{''}$) & 810.27 (A$^{''}$)\
 & & 1405.57 (E$^{''}$) & 1404.40 (E$^{'}$) & 803.27 (A$^{'}$)\
$^{\rm a}$ From [*ab initio*]{} dispersion calculations, Ref. [@serrano07].\
$^{\rm b}$ Experimental Raman data, Ref. [@nemanich81].\
$^{\rm c}$ Experimental Raman and Infrared data, Ref. [@geick66].\
$^{\rm d}$ Experimental Raman data, Ref. [@reich05].

Raman spectroscopy is the most adaptable tool that offers a direct probe of multi-layered samples [@tan12]. Table \[pd\] shows the transitions corresponding to the Infrared (E$^{'}$ and A$^{''}$) and Raman (E$^{''}$ and A$^{'}$) active modes in the case of 5LBN. Further experiments on layered boron nitride would be required to verify the correctness of these calculations. However, LDA with the vdW interaction has been shown to accurately describe the phonon dispersions of layered graphene when the geometry ([*i.e.*]{} the interlayer distance) is represented correctly, even though the local or semi-local exchange correlation functionals may not represent the interactions correctly [@tan12]. Another experimental technique to analyse the modes of a system is second-order Raman spectroscopy, in which peaks are seen over the entire frequency range. Most of these peaks are in agreement with the phonon density of states when the frequency is scaled by a factor of $2\,$ [@serrano07; @kern99]. We have hence plotted, to the right of our phonon dispersion, the frequency-scaled DOS. However, as pointed out by Serrano [*et al.*]{}, peaks which are absent in the DOS can be seen in the second-order spectroscopy because the DOS does not take overtones, [*i.e.*]{} summations of modes having the same frequencies, into account. The two-phonon density of states (DOS$_{2ph}$) is also essential for the understanding of phonon anharmonic decay [@cusco16]. Experiments on the second-order Raman spectrum of h-BN have been performed by Reich [*et al.*]{} [@reich05]. We show in Fig. \[phdos\](d) the two-phonon DOS [@esfarjani11], $$\begin{aligned} \label{tpdos} \hspace{-2em}{\rm DOS}_{2ph}(\omega) = \sum_{i,j} \delta(\omega - \omega_i - \omega_j) + \delta(\omega - \omega_i + \omega_j), \end{aligned}$$ for Bulk-[*h*]{}BN using our calculated harmonic interactions. The peaks seen experimentally [@reich05] at 1639.4 cm$^{-1}$, 1809.907 cm$^{-1}$ and 2289.8068 cm$^{-1}$ are absent in the DOS. However, these large spectral features are observed at 1680.4 cm$^{-1}$, 1821.2 cm$^{-1}$ and 2306.7 cm$^{-1}$ in the two-phonon DOS (DOS$_{2ph}$). Thermal conductivity calculated using real space supercell approach ------------------------------------------------------------------- ![\[kfig\] Calculated thermal conductivity of single and multilayer BN shown as a function of (a) temperature and (b) length, using the real space approach. In (a) the curves refer to the thermodynamic limit ($L\rightarrow \infty$). In (b) the sample length is in logarithmic scale. The square and triangle data points refer to experimental measurements for BLBN [@wang2016] and 5LBN [@jo13], respectively.](layered-kappa-L.pdf) In Fig. \[kfig\] (a) and (b) we show the variation of the thermal conductivity as a function of temperature ($T$) and sample length, respectively. 
The sample length is measured along the direction of the heat flow. The theoretical computation was carried out using the interatomic force constants obtained from the real space approach and an iterative method for calculating the relaxation times, as implemented in the ShengBTE code [@ShengBTE]. To have a broad understanding of the thermal conductivity, we study different types of possible unit cells, [*i.e.*]{}, the MLBN considered here have even, odd and infinite numbers of layers, since each unit cell has a different character table. Calculations were done using an orthogonal cell for 5LBN and hexagonal cells for SLBN, BLBN and bulk-[*h*]{}BN. The study was carried out over a wide range of sample lengths between 0.01 $\mu$m and 1000 $\mu$m with a 0.1 $\mu$m grid. The temperature of each sample was varied between 10 K and 1000 K with a grid of 10 K. On plotting the thermodynamic limit ($L \rightarrow \infty$) for each of the systems, we find that $\kappa_L$ is practically independent of length for lengths greater than 100 $\mu$m. Our recent results for $\kappa_L$ in the thermodynamic limit ($L\rightarrow \infty$) for monolayer and bilayer graphene [@RDSM17] are in excellent agreement with the recent experimental work of Li [*et al.*]{} [@hongyang2014], whereas the thermodynamic limit for MLBN is much larger than some recent experimental measurements [@jo13; @wang2016]. The sample lengths used by Li [*et al.*]{} were of the order of millimetres for the measurement of single and bilayer graphene, while Jo [*et al.*]{} and Wang [*et al.*]{} have used sample lengths of 5 $\mu$m and 2 $\mu$m for 5LBN and BLBN, respectively. As mentioned earlier, $\kappa_L$ does not vary much for lengths larger than 100 $\mu$m but is extremely sensitive when the lengths are between 1 and 100 $\mu$m. Not surprisingly, therefore, our thermodynamic limit of $\kappa_L$ is in good agreement for graphene but not for MLBN. In order to compare our calculations to the experiments, we calculate the cumulative lattice thermal conductivity at lengths corresponding to the sample lengths used in the experiments. The cumulative $\kappa_L$ was calculated in the temperature range 10-1000 K. Fig. \[kfig\] (b) shows the cumulative thermal conductivity at room temperature (RT). ![\[kLexpt\] Calculated thermal conductivity of single and multilayer BN shown as a function of temperature at a constant length, using the real space approach. The square, circle and triangle data points refer to experimental measurements for BLBN [@wang2016], Bulk [*h*]{}-BN [@sichel76] and 5LBN [@jo13], respectively.](kappa-L-expt.pdf) The curves in Fig. \[kLexpt\] are the calculated values of $\kappa_L$ at constant lengths, which are compared with the experimental observations [@wang2016; @sichel76; @jo13]. For the lengths used in the experiments, the magnitudes of $\kappa_L$ for Bulk-[*h*]{}BN and bilayer BN lie in between those of SLBN and 5LBN, with SLBN (5LBN) being the highest (lowest). The maximum of $\kappa_L$ of $\sim$ 500 Wm$^{-1}$K$^{-1}$ for Bulk-[*h*]{}BN is found in the temperature range 250-300 K and tends to saturate to a value of $\sim$ 450 Wm$^{-1}$K$^{-1}$. Experimentally [@sichel76] the maximum is found between 150-200 K and tends to saturate to a value of $\sim$ 400 Wm$^{-1}$K$^{-1}$. Lindsay [*et al.*]{} [@lindsay11] vary the sample length and find an excellent fit with the experimental data for $L=1.4$ $\mu$m. It must be noted that the sample length is not mentioned in the experimental reference [@sichel76] for Bulk-[*h*]{}BN. 
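As a minimal sketch of the fitting step in Eq. \[cum-k\], the snippet below uses SciPy's `curve_fit` with made-up cumulative-conductivity data; the numbers are placeholders for illustration only, not our calculated values.

```python
import numpy as np
from scipy.optimize import curve_fit

def kappa_of_L(L, kappa_max, L0):
    # Eq. (cum-k): saturating length dependence of the cumulative conductivity
    return kappa_max / (1.0 + L0 / L)

# Hypothetical cumulative-kappa data (sample length in micron, kappa in W/m/K)
L_data = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 100.0])
k_data = np.array([180.0, 260.0, 340.0, 430.0, 480.0, 530.0, 545.0])

popt, _ = curve_fit(kappa_of_L, L_data, k_data, p0=(600.0, 2.0))
kappa_max, L0 = popt
print(f"kappa_Lmax = {kappa_max:.0f} W/m/K, L0 = {L0:.2f} um")
print("kappa at L = 2 um:", kappa_of_L(2.0, *popt))
```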
As the length of the sample increases, the maximum of $\kappa_L(T)$ shifts towards the left, [*i.e.*]{} the maximum is found at a lower temperature. Therefore for BLBN and 5LBN, where the lengths used in the experiments are larger than 1.4 $\mu$m, the maximum would be at lower temperatures, in total disagreement with the experiments [@wang2016; @jo13]. Our calculations for BLBN and 5LBN are in excellent agreement with the experiments for the same lengths. Even though our calculated values diverge from the experimental measurements of Sichel [*et al.*]{} [@sichel76] at higher temperatures, we believe that the behavior of $\kappa_L$ as calculated by us for bulk-[*h*]{}BN is correct. However, further experiments should throw more light on these discrepancies. It is our conjecture that $\kappa_L$ of Bulk-[*h*]{}BN should be similar to that of BLBN since the phonon dispersions in the two cases are very similar. ![\[kLmode\] Contribution to the thermal conductivity of single and multilayer BN from the acoustic modes; (a) ZA, (b) TA and (c) LA, shown as a function of temperature in the thermodynamic limit ($L\rightarrow \infty$), and as a function of sample length at $T=300$K (d,e,f), using the real space approach.](mode-dependent-k-vs-L-T.pdf) In Fig. \[kLmode\] we show the acoustic mode dependent contributions to the total thermal conductivity for SLBN and MLBN obtained by solving the phonon BTE beyond the RTA. The out-of-plane mode is clearly seen to contribute the most to the lattice thermal conductivity for all the mentioned structures. For SLBN the contributions from the ZA, TA and LA modes to $\kappa_L$ at room temperature are $\sim$ 86.1 %, 7.4 % and 6.5 %, respectively. A similar trend is observed in graphene [@lindsay2010]. Qualitatively, one can understand why the ZA mode contributes the most to $\kappa_L$ by calculating the number of modes per frequency for each of the acoustic modes. The number of modes per frequency is proportional to the 2D density of phonon modes, $D_s(\omega) \propto \frac{q}{2\pi}\frac{dq}{d\omega}$, and hence the ratio of $D_{ZA}(\omega)$ and $D_{TA(LA)}(\omega)$ gives a measure of the relative contribution of the respective phonon modes. Assuming a quadratic fit to the ZA dispersion, $\omega_{ZA}=\alpha q_{ZA}^2$, and a linear fit to the in-plane TA and LA phonon dispersions, $\omega_{TA(LA)}= v_{TA(LA)}q_{TA(LA)}$, the ratio of the densities of phonon modes is $\frac{D_{ZA}}{D_{TA(LA)}} = \frac{v_{LA(TA)}^2}{2\alpha\omega_{LA(TA)}}$. Here $\alpha$ and $v_{LA(TA)}$ are fitting parameters to the phonon dispersions shown in Fig. \[phdos\] and their values are given in table \[para\]. Substituting these values, it is evident that $\frac{D_{ZA}}{D_{TA(LA)}} \gg 1$ in the long wavelength limit, suggesting that the major contributions to the lattice thermal conductivity are due to the out-of-plane modes. Comparing the ZA contributions to the thermal conductivity of MLBN at room temperature with that of SLBN, we observe that $\kappa^{SLBN}_{ZA} = 1.28\kappa^{BLBN}_{ZA} = 2.17\kappa^{5LBN}_{ZA}$, suggesting that the significant decrease of $\kappa_L$ from SLBN to MLBN is because of the additional raised frequencies of the ZA layered breathing modes. Kong [*et al.*]{} [@kong2009] reported that the lattice thermal conductivities of single layer graphene and bilayer graphene are similar, $\kappa_L^{graphene} \approx \kappa_L^{bilayer}$, while Lindsay [*et al.*]{} [@lindsay2011] reported $\kappa_L^{graphene} \approx 1.37\kappa_L^{bilayer}$. 
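The long-wavelength estimate of the density-of-modes ratio quoted above can be checked numerically with the SLBN fitting parameters from Table \[para\]; the frequencies below are arbitrary sample points chosen only for illustration.

```python
import numpy as np

# Long-wavelength density-of-modes ratio: D_ZA / D_TA = v_TA^2 / (2 * alpha * omega)
v_TA = 11599.8      # m/s, TA fit velocity for SLBN (Table [para])
alpha = 3.99e-7     # m^2/s, quadratic ZA fit coefficient for SLBN (Table [para])

for f_THz in (0.5, 1.0, 2.0):
    omega = 2.0 * np.pi * f_THz * 1e12      # angular frequency in rad/s
    ratio = v_TA**2 / (2.0 * alpha * omega)
    print(f"f = {f_THz} THz  ->  D_ZA/D_TA ~ {ratio:.0f}")
# The ratio is much larger than 1 at long wavelengths, consistent with the text.
```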
The difference in their methodologies is that the latter takes the graphene symmetry into account, which is discussed in detail by Seol [*et al.*]{} [@seol10] and Lindsay [*et al.*]{} [@lindsay2011]. Besides the contribution due to the layer breathing out-of-plane modes, a decrease in $\kappa_L$ is also due to the violation of the selection rule [@seol10; @lindsay2011], which is incorporated in the formalism of the super-cell real space approach. In Fig. \[kLmode\] (d,e,f), we show the mode dependent $\kappa_L$ at room temperature as a function of sample length. At any given length, the maximum difference in the $\kappa_L$ contributed by the LA and TA modes across all the mentioned structures is $\sim$ 47 and 65 Wm$^{-1}$K$^{-1}$, respectively, while that from the ZA mode is $\sim$ 750 Wm$^{-1}$K$^{-1}$, an order of magnitude larger, implying that the contribution from the in-plane thermal conductivity is almost independent of the number of layers. This characteristic has been seen using a Tersoff potential in the case of single and multilayered graphene and boron nitride [@lindsay2011; @lindsay12]. The rapid decrease in $\kappa_L$ with increasing number of layers, which is mainly due to the ZA mode, suggests that the interlayer interactions are short ranged, [*i.e.*]{}, the BN layers only interact with neighbouring BN layers [@lindsay2011]. In all of the structures, the contribution to $\kappa_L$ from the ZA mode has a stronger $L$ dependence compared to the TA and LA modes, [*i.e.*]{}, the contributions from the in-plane modes saturate to their thermodynamic limit at a lower $L$ value than the contributions from the out-of-plane modes. This is due to the larger intrinsic scattering times, which allow the ZA phonons to travel ballistically, and the relatively smaller scattering times, which reflect the diffusive transport of the TA and LA phonons [@lindsay2011]. Calculations based on the mode dependent contributions to $\kappa_L$ as a function of mean free path, together with recent advanced experimental techniques [@minnich11; @regner13; @johnson13], should motivate further studies in these directions. A smaller $L$ dependence of the in-plane phonon contributions, compared to the out-of-plane contribution, has also been found for graphene recently using the Tersoff potential [@lindsay14], and the calculated cumulative mode dependent thermal conductivity is in good agreement with our calculations for SLBN. Grüneisen parameter ------------------- Besides providing important information on the phonon relaxation time, the Grüneisen parameter ($\gamma$) also provides information on the degree of phonon scattering and the anharmonic interactions between lattice waves. Therefore, an accurate calculation of the lattice thermal conductivity ($\kappa_L$) requires a precise calculation of $\gamma$, since anharmonic lattice displacements play a vital role in calculations of $\kappa_L$. Fig. \[gp\] displays the mode dependent $\gamma$ for SLBN, BLBN, 5LBN and Bulk-[*h*]{}BN along the high-symmetry $q$ points. 
The anharmonic lattice displacements are carried out by dilating the unit cell through a biaxial strain of $\pm$ 0.5 %, and the mode Grüneisen parameter is expressed as, $$\begin{aligned} \label{gamma} \begin{split} \gamma_s(q) & = \frac{-a_0}{2\,\omega_s(q)}\frac{\delta \omega_s(q)}{\delta a} \\ & \approx \frac{-a_0}{2\,\omega_s(q)} \Big[\frac{\omega_+ - \omega_-}{da}\Big] \end{split}\end{aligned}$$ where $\omega_s(q)$, $\omega_+$ and $\omega_-$ are the wave vector dependent phonon frequency of mode $s$ and the phonon frequencies under positive and negative biaxial strain, respectively, while $a_0$ and $d a$ are the relaxed lattice constant and the difference in lattice constants under positive and negative biaxial strain. This method has been used previously for graphite [@marzari05], single and bi-layer graphene [@RDSM17] and MoS$_2$ [@cai10]. ![\[gp\] Grüneisen parameters of each mode for (a) SLBN, (b) BLBN, (c) 5LBN and (d) Bulk-[*h*]{}BN. The colour representation of each mode and fit is shown on the right. The magenta curves are the best fit to the ZA mode along the direction in the BZ chosen to calculate the lattice thermal conductivity.](layered-grueneisen.pdf) We find that the acoustic modes correspond to the lowest Grüneisen parameters, which is in line with the experimentally measured $\gamma$ [@sanjurjo83]. As in the case of graphene, the out-of-plane acoustic transverse mode has the largest negative $\gamma$ parameters. Positive (negative) Grüneisen parameters indicate a decrease (increase) in the phonon frequencies as the lattice constant increases. Near the long-wavelength limit, $\gamma_{ZO^{'}}$ for 5LBN is positive but becomes negative as we move along the $\Gamma$ to Y direction in the BZ. $\gamma_{ZO^{'}}$, associated with the layer breathing mode, suggests that due to the additional layers the atomic vibrations along the perpendicular direction lose their coherence between the layers, and hence the phonon frequencies decrease when the system is under a biaxial strain. As described in table \[ct\], E$_{\rm 2g}$, E$^{''}$, A$_{\rm 1g}$, E$_{\rm g}$, and A$_{\rm 1}^{''}$ are Raman active and hence, in principle, their Grüneisen parameters can be measured experimentally using Raman spectroscopy. There exist experimental data for bulk [*h*]{}-BN but, to the best of our knowledge, there are no experimental data for single or MLBN. We therefore compare our results to those for bulk-[*h*]{}BN. The lowest Grüneisen parameters along the $\Gamma$-K-M directions for the TO and LO modes were found to be 1.72 and 1.59, respectively. Our calculations for these modes are only $\sim$ 1.1% and $\sim$ 1.3% larger than the experimental values of Sanjurjo [*et al.*]{} [@sanjurjo83], who obtained the Grüneisen parameters by measuring the pressure dependence of the Raman lines. The slight deviation from the experimentally measured values could be because the measured values are for zinc-blende BN and not hexagonal BN. Analytical solutions to the Callaway-Klemens Approach ------------------------------------------------------- In order to compare with the results obtained from the real space super cell approach (ShengBTE), we now study the mode, temperature and length dependence of $\kappa_L$ for single and MLBN calculated using the Callaway-Klemens approach as described earlier. We first obtain analytical solutions for each acoustic mode of the phonon BTE by making some reasonable approximations, to understand the basic behavior of the temperature and length dependence of $\kappa_L$. 
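As a minimal sketch (not the production code used for the figures), the closed form of Eq. \[LATA\] and the integral form in the first line of Eq. \[ZA\] can be evaluated as follows. All inputs are assumed to be in SI units, `Ndelta` stands for the product $N\delta$, and the parameter values would come from Table \[para\] below after unit conversion.

```python
import numpy as np
from scipy.integrate import quad

kB, hbar = 1.380649e-23, 1.054571817e-34  # SI units

def kappa_inplane(M, omega_D, v, gamma, T, Ndelta, omega_min):
    """Closed form of Eq. (LATA) for the LA/TA branches (constant Grueneisen)."""
    F = lambda x: np.log(np.expm1(x)) + x / (1.0 - np.exp(x)) - x
    x_hi = hbar * omega_D / (kB * T)
    x_lo = hbar * omega_min / (kB * T)
    pref = M * omega_D * v**2 / (4.0 * np.pi * T * Ndelta * gamma**2)
    return pref * (F(x_hi) - F(x_lo))

def kappa_za(M, omega_D, alpha, beta, T, Ndelta):
    """Integral form of Eq. (ZA) for the flexural branch, with gamma_ZA = beta/q^2."""
    integrand = lambda x: x**4 * np.exp(x) / np.expm1(x)**2
    I, _ = quad(integrand, 0.0, hbar * omega_D / (kB * T))
    pref = 2.0 * M * omega_D * kB**3 * T**2 / (np.pi * Ndelta * beta**2 * hbar**3 * alpha)
    return pref * I
```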
In order to compare with the experimental results, we resort to exact numerical computation. We have carried out all the length dependent calculations for MLBN at a constant temperature (RT). The corresponding length dependent curves for MLBN are plotted in Fig. \[kaLATAZA\] (e). The parameters used in our study are shown in Table \[para\].

[c|c|c|c|c|c|c]{}
System & $v_{LA}$ (m/s) & $v_{TA}$ (m/s) & $\gamma_{LA}$ & $\gamma_{TA}$ & $\alpha$ $\times$ 10$^{-7}$ (m$^2$/s) & $\beta$ $\times$ 10$^{-20}$ (1/m$^2$)\
SLBN & 17020.1 & 11599.8 & 1.546 & 0.452 & 3.99 & -6.827\
BLBN & 16379.4 & 11474.9 & 1.585 & 0.5673 & 3.75 & -6.086\
5LBN & 21095 & 11420.6 & 1.48 & 0.424 & 4.2 & -6.348\
Bulk-[*h*]{}BN & 16379.4 & 11474.9 & 1.57 & 0.59 & 3.72 & -7.18\
: \[para\] Relevant parameters used in the calculations for the analytical solutions of the lattice thermal conductivity.

Equations \[wLATA\] and \[wZA\] are plotted in Fig. \[phdos\] and Equation \[gZA\] is plotted in Fig. \[gp\] to compare the analytical fits to the actual phonon dispersions and Grüneisen parameters. ![\[kaLATAZA\] Acoustic mode and temperature dependence of the lattice thermal conductivity for (a) SLBN (b) BLBN (c) 5LBN and (d) Bulk-[*h*]{}BN at a constant length. The theoretical calculations are carried out by using Eq. \[LATA\] for the LA and TA modes while Eq. \[ZA\] was used for the ZA mode. The parameters used in our calculations are shown in Table \[para\]. The colour representation for each mode is shown on the right. The black dots are the experimental measurements [@sichel76; @jo13; @wang2016]. Length dependence is worked out by varying $L$ in Eq. \[wmin\].](kappa-analytical-L.pdf) The individual contributions of each of the acoustic modes LA, TA, ZA and their sum, [*i.e.*]{} $\kappa_L$, for single and multilayered [*h*]{}-BN are shown in Fig. \[kaLATAZA\] (a,b,c,d). The $\kappa_L$ values for BLBN and Bulk-[*h*]{}BN are quite similar but are lower for 5LBN. This is in good agreement with experiments [@sichel76; @jo13; @wang2016]. In all cases it is observed that, amongst the acoustic modes, the TA contribution is the largest, the ZA contribution is the least, whereas the LA contribution lies in between. It has been quite controversial as to which acoustic mode contributes the most to the total lattice thermal conductivity. For example, while some reports [@kuang16; @lindsay11; @lindsay2011; @seol10; @lindsay2010] show the contribution from the ZA mode to be the most dominant, there are many other reports [@shen14; @alofi13; @kong2009; @aksamija11; @nika2009; @nika11; @nika12; @wei14] that show exactly the opposite. Our analytical results concur with the latter, [*i.e.*]{} the contribution from the ZA mode is the least. The thermal conductivity for two-dimensional layered materials given by Eq. \[k\] is derived assuming that both the phonon energy dispersions and the phonon scattering rates are weakly dependent on the direction in the Brillouin zone [@nika2009]. 
The calculation of $\kappa_L$ should therefore be independent of the direction chosen, resulting in an isotropic in-plane scalar $\kappa_L$. We move along the $\Gamma$ to K direction for the systems in which a hexagonal unit cell is used and along $\Gamma$ to Y in the case of an orthorhombic unit cell. SLBN has the highest calculated $\kappa_L$, 5LBN has the least, while the $\kappa_L$ of BLBN and Bulk-[*h*]{}BN lie in between. From Fig. \[kCK\] it can be easily seen that for temperatures below 100 K, the contribution to the total $\kappa_L$ is mainly due to the flexural ZA modes. As in the case of graphene, SLBN can have a total of 12 processes involving the flexural phonons. However, Seol [*et al.*]{} [@seol10] obtained a selection rule for the three-phonon scattering. This rule states that only an even number of ZA phonons is allowed to be involved in each process. Shen [*et al.*]{} [@shen14] have listed the four allowed flexural processes. Hence, the scattering rate of the Umklapp phonon-phonon process for the acoustic flexural branch is multiplied by a factor of $\frac{4}{12}$ and the relaxation time for the ZA mode becomes 3 times that of Eq. \[tau\]. Therefore, besides the larger velocity and the smaller averaged Grüneisen parameters compared to the other systems, the major contribution to the increased $\kappa_L$ is due to the symmetry of the ZA mode. The phonon dispersions and Grüneisen parameters for BLBN and Bulk-[*h*]{}BN are very similar, which explains why their calculated $\kappa_L$ have the same magnitude. In the case of 5LBN, there are five additional low frequency modes (also termed layer-breathing modes), which arise due to the interlayer motion. Due to this change in the phonon dispersion, more phase-space states become available for phonon scattering, which therefore decreases $\kappa_L$ [@balandin11]. Numerical solutions to the Callaway-Klemens Approach ------------------------------------------------------ Numerical calculations are carried out using the exact form of the phonon dispersions and Grüneisen parameters as displayed in Fig. \[phdos\] and Fig. \[gp\], rather than the analytical forms of the acoustic modes and averaged Grüneisen parameters. We numerically solve Eq. \[k\] for each of the modes at a constant sample length over a range of temperatures, as well as at a constant temperature for lengths varying between 0.1 and 10 $\mu$m. These results are compared with the experimental data [@sichel76; @jo13; @wang2016] and shown in Fig. \[kCK\]. The numerically calculated values of $\kappa_L$ are in better agreement with the experimental data than the analytical form. We find that the contribution from the ZA modes dominates at lower temperatures but rapidly decreases as the temperature increases, making the flexural modes contribute the least at relatively higher temperatures. This is in line with previous theoretical calculations [@aksamija11]. ![\[kCK\] Acoustic mode and temperature dependence of the lattice thermal conductivity for (a) SLBN (b) BLBN (c) 5LBN and (d) Bulk-[*h*]{}BN at a constant length. The theoretical calculations are carried out by solving Eq. \[k\] numerically for each of the modes. The colour representation for each mode is shown on the right. The black dots are the experimental measurements [@sichel76; @jo13; @wang2016]. Length dependence is worked out by varying $L$ in Eq. 
\[wmin\].](kappa-numerical-L.pdf) Summary ======= Phonon dispersions using an LDA pseudopotential with vdW interactions, the density of states (DOS), the Grüneisen parameters and the lattice thermal conductivity have been calculated by the Callaway-Klemens and real space super cell approaches for SLBN, BLBN, 5LBN and Bulk-[*h*]{}BN. Additionally, in the case of Bulk-[*h*]{}BN, we calculate the two-phonon DOS. Irreducible representations using the character table at the $\Gamma$ point in the BZ for each of the systems have been derived in order to compare the symmetry modes with those obtained from Raman and infrared spectroscopy experiments. Results from EELS, Raman, second-order Raman and infrared spectroscopy investigations are found to be in excellent agreement with the theoretical calculations based on the phonon dispersion, DOS and two-phonon DOS, which rely on the harmonic second-order interatomic force constants. Further, we have calculated the sample length and temperature dependence of the lattice thermal conductivity by the real space super cell approach with the help of the second-order IFCs calculated by DFPT. The lattice thermal conductivity in the thermodynamic limit for each system has a maximum between 110 and 150 K. For sample sizes in the range 1-5 $\mu$m, $\kappa_L$ does not have a maximum. However, with increasing temperature it tends to saturate at a value which is an order of magnitude smaller than the thermodynamic limit. Our mode dependent calculations using the real space method suggest that the majority of the contribution to the thermal conductivity is due to the ZA phonons for all of the structures. The substantial decrease in $\kappa_L$ from single to MLBN is partly because of the additional layer breathing modes, but mainly due to the fact that the interlayer interactions break the SLBN selection rule, thereby suppressing the ZA phonon contributions to $\kappa_L$ in MLBN. The contributions to $\kappa_L$ from the in-plane modes are not sensitive to the number of layers and have a lower $L$ dependence compared to the out-of-plane modes. This reduction in $\kappa_L$ from SLBN to MLBN, which is mainly due to the ZA phonons, indicates that the interlayer interactions are short ranged. The $L$ dependence of the TA and LA contributions to $\kappa_L$ saturates to the thermodynamic limit faster than that of the contribution from the ZA phonons, implying that the ZA phonons travel ballistically along the sample while the TA and LA phonons travel diffusively. The Grüneisen ($\gamma$) parameters were obtained from first-principles calculations by applying positive and negative biaxial strains. For the in-plane acoustic modes, we find that $\gamma$ does not vary much from its mean value, but the out-of-plane modes have a strong $q$-dependence. Our calculated $\gamma$ values for Bulk-[*h*]{}BN at the $\Gamma$ point are $\sim$ 1% larger than those obtained from experiments which measure the pressure dependence of the Raman lines. The $\gamma$ parameters for 5LBN suggest that, due to the layer breathing modes, the atoms along the perpendicular direction lose their coherence between each layer, which decreases the phonon frequencies under a biaxial strain. In comparison to the real space super cell approach, the lattice thermal conductivity has also been calculated, both analytically and numerically, using the Callaway-Klemens formalism. 
To obtain the analytical solutions of the phonon BTE, we make a linear fit to the LA and TA modes and a quadratic fit to the ZA mode, and we use averaged values of the Grüneisen parameters for the in-plane acoustic modes and an inverse-square wave-vector dependence of $\gamma$ for the out-of-plane mode. The theoretical results for the sample length and temperature dependence of $\kappa_L$ are in good agreement with experimental observations. The phonon BTE is then solved analytically and numerically for SLBN, BLBN, 5LBN and Bulk-[*h*]{}BN to calculate $\kappa_L$ at a constant length over a wide range of temperatures and [*vice versa*]{}, again in good agreement with the available experimental results. Both theoretical approaches, [*i.e.*]{} the real space super cell and Callaway-Klemens methods, give the same magnitude for $\kappa_L$, but the temperature dependence obtained by the two methods is different. The lattice thermal conductivity of these materials is practically length independent for sample lengths greater than 100 $\mu$m, where it tends to its thermodynamic limit. The calculated values of $\kappa_L$ for BLBN and 5LBN agree very well with experiments when calculated by the real space approach rather than by the Callaway-Klemens method. This may be because the experimental behavior of $\kappa_L$ for both BLBN and 5LBN tends to saturate at higher temperatures instead of having a maximum. However, the Callaway-Klemens method agrees better with the available experimental data for Bulk-[*h*]{}BN. Further experiments could resolve this discrepancy. Mode dependent numerical calculations using the Callaway-Klemens formalism suggest that the ZA modes dominate only at very low temperatures and contribute the least as the temperature is increased. This is in stark conflict with our calculations based on the real space super cell approach. Since the velocities and Grüneisen parameters are extremely similar for single and bilayer boron nitride, one would expect $\kappa_L$ for both systems to be similar. However, in the case of graphene, there is a significant reduction in $\kappa_L$ from monolayer to bilayer, which is seen both experimentally [@hongyang2014] and theoretically [@lindsay2011; @RDSM17]. The larger $\kappa_L$ of SLBN in comparison to BLBN obtained using the Callaway-Klemens method was due to the symmetry put in by hand and not a consequence of the theory. This implies that the closed form of the relaxation time used in the Callaway-Klemens method is a poor approximation with little predictive value, and one must solve the BTE beyond the RTA. Our calculations suggest that for an enhanced figure of merit, $ZT$, in such materials, the sample length must be in the $\mu$m range or smaller and the layers should be stacked on top of each other. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Dr. Jesús Carrete of the Technical University, Vienna, for his insightful correspondence on the ShengBTE code for the calculation of the mode dependent contributions to the total thermal conductivity. We also thank D.L. Nika of Moldova State University and A.A. Balandin of the University of California, Riverside, for their helpful correspondence based on their recent publications. All calculations were performed on the High Performance Cluster platform at the S.N. Bose National Centre for Basic Sciences. RD acknowledges support through a Senior Research Fellowship of the S.N. Bose National Centre for Basic Sciences. 
Derivation of Irreducible representations ========================================= We define the reducible representation ($\Gamma_{\rm red}$) by placing three vectors on each atom in the unit cell, which obey the following rules when operated on by a symmetry transformation.

- If a vector is not moved (reversed) by an operation, it contributes 1 (-1) to $\chi$.
- If a vector is moved to a new location by an operation, it contributes 0 to $\chi$.

where $\chi$ is the character in the reducible representation. Our reducible representations ($\Gamma_{\rm red}$) are shown in the last row(s) of each point-group block in table \[ct\]. Using the reduction formula, $a_i = \frac{1}{g} \sum \chi_R \chi_{IR}$, where $a_i$ is the number of times an irreducible representation contributes to the reducible representation, $g$ is the total number of symmetry operations of the point group, and $\chi_R$ ($\chi_{IR}$) is the corresponding character in the reducible (irreducible) representation, we derive the irreducible representations.

  D$_{{\rm 6h}}$                        E    2C$_{6}$   2C$_3$   C$_2$   3C$_2^{'}$   3C$_2^{''}$   i    2S$_6$   2S$_3$   $\sigma_{\rm h}$   3$\sigma_{\rm v}$   3$\sigma_{\rm d}$   Basis
  ------------------------------------- ---- ---------- -------- ------- ------------ ------------- ---- -------- -------- ------------------ ------------------- ------------------- ----------------
  A$_{\rm 2u}$                          1    1          1        1       -1           -1            -1   -1       -1       -1                 1                   1                   $z$
  B$_{\rm 1g}$                          1    -1         1        -1      1            -1            1    1        -1       -1                 -1                  1                   $yz(3x^2-y^2)$
  B$_{\rm 2g}$                          1    -1         1        -1      -1           1             1    1        -1       -1                 1                   -1                  $xz(x^2-3y^2)$
  E$_{\rm 2g}$                          2    -1         -1       2       0            0             2    -1       -1       2                  0                   0                   {$x^2-y^2,xy$}
  E$_{\rm 1u}$                          2    1          -1       -2      0            0             -2   1        -1       2                  0                   0                   {$x,y$}
  $\Gamma^{bulk-{\it h}BN}_{\rm red}$   12   0          0        0       0            -4            0    -8       0        4                  4                   0
  $\Gamma^{SLBN}_{\rm red}$             6    0          0        0       -2           0             0    -4       0        2                  0                   2

  D$_{\rm 3d}$                E    2C$_3$   3C$_2^{'}$   i    2S$_6$   3$\sigma_{\rm d}$   Basis
  --------------------------- ---- -------- ------------ ---- -------- ------------------- ------------
  A$_{\rm 2u}$                1    1        -1           -1   -1       1                   $z$
  A$_{\rm 1g}$                1    1        1            1    1        1                   $z^2$
  E$_{\rm g}$                 2    -1       0            2    -1       0                   {$xz, yz$}
  E$_{\rm u}$                 2    -1       0            -2   1        0                   {$x, y$}
  $\Gamma^{BLBN}_{\rm red}$   12   0        4            0    0        0

  D$_{\rm 3h}$                E    2C$_3$   3C$_2^{'}$   $\sigma_{\rm h}$   2S$_3$   3$\sigma_{\rm v}$   Basis
  --------------------------- ---- -------- ------------ ------------------ -------- ------------------- ------------
  A$_1^{'}$                   1    1        1            1                  1        1                   $z^2$
  A$_2^{''}$                  1    1        -1           1                  1        -1                  $z$
  E$^{'}$                     2    -1       0            2                  -1       0                   {$x, y$}
  E$^{''}$                    2    -1       0            -2                 1        0                   {$xz, yz$}
  $\Gamma^{5LBN}_{\rm red}$   30   0        10           2                  -4       -2
--- author: - 'Nicholas Bender,$^{1}$ Mengyuan Sun,$^{2}$ Hasan Y[i]{}lmaz,$^{1}$ Joerg Bewersdorf,$^{2}$ and Hui Cao$^{1}$' title: 'Supplementary information: Circumventing the optical diffraction limit with customized speckles\' --- Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA Yale School of Medicine, Yale University, New Haven, Connecticut 06520, USA Experimental setup for photoconversion ====================================== ![\[Setup\] [**Experimental setup.**]{} The experimental setup used for nonlinear speckle-illumination microscopy is shown. We use a phase-only SLM to generate a customized speckle pattern to illuminate a fluorescent sample for photoconversion.](SupExpSetup.pdf){width="\linewidth"} Figure \[Setup\] is a schematic of our experimental setup for photoconverting a fluorescent sample with customized speckle patterns and imaging the unconverted fluorescence. A CW laser operating at a wavelength of $\lambda = 405$ nm is used to photoconvert the mEos3.2 protein sample. The laser beam is expanded and linearly polarized before it is incident on a phase-only SLM (Meadowlark Optics). The pixels on the SLM can modulate the phase of the incident field between $0$ and $2 \pi$ in increments of $2 \pi / 90$. Because a small portion of the light reflected from the SLM is unmodulated, we write a binary phase diffraction grating on the SLM and use the light diffracted into the first order for photoconversion. In order to avoid cross-talk between neighboring SLM pixels, $32 \times 32$ pixels are grouped to form one macropixel, and the binary diffraction grating is written within each macropixel with a period of 8 pixels. We use a square array of $32 \times 32$ macropixels in the central part of the phase-modulating region of the SLM to shape the photoconverting laser light. The light modulated by our SLM is Fourier transformed by a lens with a focal length of $f_{1}=500$ mm and cropped in the Fourier plane with a slit to keep only the first-order diffraction. The complex field on the Fourier plane of the SLM is imaged onto the surface of the sample by a second lens, with a focal length of $f_{2}=500$ mm. Using a $\lambda/4$ plate, we convert the linearly polarized photoconverting beam into a circularly polarized beam before it is incident upon the sample. In this setup, the full width at half-maximum of a diffraction-limited focal spot is 17 $\mu$m. After the photoconversion process, a second laser, operating at a wavelength of $\lambda = 488$ nm, uniformly illuminates the sample and excites the non-photoconverted mEos3.2 proteins. To collect the fluorescence, we use a $10\times$ objective of ${\rm NA}=0.25$ and a tube lens with a focal length of $f_{3}=150$ mm. The 2D fluorescence image is recorded by a CCD camera (Allied Vision Manta G-235B). The spatial resolution of the detection system is estimated to be $1.1$ $\mu$m. The remaining excitation laser light, which is not absorbed by the sample, is subsequently removed by two Chroma ET 535/70 bandpass filters. One filter is placed after the objective lens and reflects the excitation beam off the optical axis of our system. The second one is placed directly in front of the camera. Live sample demonstration ========================= In the main text, we use delta speckle patterns to photoconvert a homogeneous film of purified protein to demonstrate the isometric and isotropic spatial resolution enhancement they allow. 
Here we show that our technique is compatible with living samples, which are typically inhomogeneous. Live yeast cell culture and preparation. ---------------------------------------- The *S. Pombe* strain *Leu1::Leu1+ pAct1 mEos3.2 nmt1Term ade6-M216 his3-$\Delta$1 leu1-32 ura4-$\Delta$18* is generated from the NruI digested plasmid pJK148-pAct1-mEos3.2-nmt1Term through homologous recombination. Cytosolic mEos3.2 is expressed from the act1 promoter in the endogenous Leu1 locus. Cells are grown in exponential phase at 25 °C in YE5S-rich liquid medium in 50 mL flasks in the dark before switching to EMM5S-synthetic medium for 12-18 hours, to reduce the cellular autofluorescence background. Live cells are concentrated 10- to 20-fold by centrifugation at 3,000 rpm for 30 s and resuspended in EMM5S for imaging. 10 $\mu$L of concentrated cells are mounted on a thin layer consisting of 35 $\mu$L of 25 $\%$ gelatin (Sigma-Aldrich; G-2500) in EMM5S. ![\[LiveSample\] [**Photoconversion of live yeast cells with a customized speckle pattern.**]{} In [**a**]{}, we present an optical image of the fluorescent light emitted from a sample of live yeast cells before they are photoconverted by the delta speckle pattern shown in [**b**]{}. The white circles in [**b**]{} indicate the vortices which overlap with the yeast cells. An image of the fluorescent light emitted by the live-cell sample after photoconversion is shown in [**c**]{}. The red circles in [**c**]{} correspond to the white circles in [**b**]{}.](FigureLiveSample.pdf){width="\textwidth"} Photoconverting live yeast cells with customized speckles. ----------------------------------------------------------- We illuminate the collection of nonuniformly distributed yeast cells shown in Fig. \[LiveSample\][**a**]{} with the photoconverting speckle pattern presented in [**b**]{}. The optical vortices of the delta speckle pattern do not photoconvert the yeast cells in their vicinity. Consequently, after photoconversion, multiple isolated groups of cells emit fluorescence when illuminated by 488 nm light. In the fluorescence image taken after photoconversion, the fluorescent regions (marked by red circles in [**c**]{}) have a one-to-one correspondence with the optical vortices in the delta speckle pattern that overlap with the yeast cells (marked by white circles in [**b**]{}). Because the mEos3.2 protein is cytosolic, it is impossible to obtain fluorescence from a sub-cellular region, even if we shrink the dark region surrounding each vortex core using high-NA optics. Nevertheless, we are able to select individual groupings of live cells located near the vortices. The joint PDF of the complex speckle fields =========================================== A speckle pattern is ‘fully developed’ if the joint PDF of its complex field is azimuthally invariant. This means that the phase distribution of the speckle pattern is uniform between $0$ and $2 \pi$, and that the amplitude values of the field are independent of the phase values: the amplitude and phase distributions of a fully-developed speckle pattern are statistically independent. In Fig. \[FieldPDF\], we plot the complex field PDF calculated from 1,000 Rayleigh speckle patterns and 1,000 delta speckle patterns. While the field PDF obeyed by Rayleigh speckles is a circular Gaussian function, the delta speckles adhere to a circular non-Gaussian function. Because the field PDF is circular for delta speckles, they are fully developed like Rayleigh speckles. 
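A minimal numerical sketch of this circularity test is given below, assuming `E` is a complex-valued array of speckle field values (for example, from a simulated pattern); the function names and thresholds are our own illustrative choices.

```python
import numpy as np

def field_joint_pdf(E, bins=101, radius=3.0):
    """2D histogram of the complex field values (Re E, Im E), normalized as a PDF.
    E is assumed to be normalized so that <|E|^2> = 1 after the first line."""
    E = E.ravel() / np.sqrt(np.mean(np.abs(E)**2))
    edges = np.linspace(-radius, radius, bins + 1)
    pdf, _, _ = np.histogram2d(E.real, E.imag, bins=[edges, edges], density=True)
    return pdf, edges

def is_fully_developed(E, tol=0.05):
    """Crude circularity checks: uniform phase and phase-amplitude independence."""
    phase, amp = np.angle(E.ravel()), np.abs(E.ravel())
    uniform_phase = abs(np.mean(np.exp(1j * phase))) < tol      # vanishing circular mean
    decorrelated = abs(np.corrcoef(amp, np.cos(phase))[0, 1]) < tol
    return uniform_phase and decorrelated
```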
![\[FieldPDF\] [**Joint PDF of the complex speckle fields.**]{} In [**a**]{}, we show the circular non-Gaussian joint PDF associated with delta speckles next to the circular Gaussian joint PDF of Rayleigh speckles in [**b**]{}. In both cases, the speckles are fully developed.](SupFigurePDF.pdf){width="4in"} Speckle decorrelation upon axial propagation ============================================ The rapid axial decorrelation of a fully-developed speckle pattern enables parallel 3D nonlinear patterned-illumination microscopy. The axial intensity correlation function is defined as $$C_{I}(\Delta z) \equiv \frac{\langle \delta I({\bf r}, z_0) \delta I({\bf r} , z_0 + \Delta z)\rangle} {\sqrt{\langle [\delta I({\bf r}, z_0)]^2 \rangle}\sqrt{\langle [\delta I({\bf r}, z_0 + \Delta z)]^2\rangle}} \\$$ where $z_0$ is the axial position of the focal plane, $\langle ... \rangle$ denotes averaging over transverse position $\bf r$, and $\delta I({\bf r}, z) = I({\bf r}, z)-\langle I({\bf r}, z)\rangle$ is the intensity fluctuation around the mean. The FWHM of $C_I(\Delta z)$ defines the axial decorrelation length. For a Rayleigh speckle pattern, the axial decorrelation length $R_{l}$ is proportional to the Rayleigh range: which is determined by the axial diffraction limit $2 \lambda/ {\rm NA}^{2}$. We numerically simulate the axial propagation of delta speckles and compare it with that of Rayleigh speckles. In Fig. \[Prop\], an example delta speckle pattern at $\Delta z=0$ and $\Delta z=R_{l}/2$ is shown in [**a**]{} and [**b**]{}, while an example Rayleigh speckle pattern at $\Delta z=0$ and $\Delta z=R_{l}/2$ is presented in [**c**]{} and [**d**]{}. A qualitative comparison between [**a**]{}, [**b**]{} and [**c**]{}, [**d**]{} illustrates that the delta speckles’ transverse intensity-profile axially evolves much faster than the Rayleigh speckles’ profile. ![\[Prop\] [**Axial decorrelation of speckle patterns.**]{} An example delta speckle pattern at $\Delta z=0$ and $\Delta z=R_{l}/2$ is shown in [**a**]{} and [**b**]{}, while an example Rayleigh speckle pattern at $\Delta z=0$ and $\Delta z=R_{l}/2$ is shown in [**c**]{} and [**d**]{}. In [**e**]{}, the axial intensity correlation function $C_I(\Delta z)$ for delta speckles (purple line) and Rayleigh speckles (green dashed line) is shown. We ensemble average over the propagation of 100 speckle patterns to create the curves in [**e**]{}.](Propagation.pdf){width="4in"} We plot the axial correlation function of the delta speckles (purple solid line) and the Rayleigh speckles in [**e**]{}. The axial decorrelation length of the delta speckles (FWHM of axial intensity correlation function) is $R_{l}/3$, which is three times shorter than that of the Rayleigh speckles. As mentioned in the main text, the vortices in a delta speckle pattern deviate the most from the mean intensity value, thus they dictate the intensity correlation function. The accelerated axial decorrelation is partially attributed to the rapid transverse motion of optical vortices upon axial propagation. This characteristic of the delta speckles can enhance the axial resolution in 3D imaging. In Fig. \[Spot\] we show the effect the transverse motion of optical vortices in delta speckles -upon axial propagation over one $R_{l}$- has on the fluorescent spots in a photoconverted sample. An example delta speckle pattern, $I(x,y,\Delta z = 0)$, is shown in [**a**]{} along with two vertical cross-sections, $I(x=x_{0},y,\Delta z )$ in [**b**]{} and $I(x,y=y_{0},\Delta z )$ in [**c**]{}. 
The axial cross-section distributions show the speckle’s intensity, along a line in the $x$ or $y$ direction, as a function of axial propagation from $\Delta z=0$ to $\Delta z=R_{l}$. Using Eq. 2 from the main text, we simulate the corresponding unconverted protein density, $\rho({\bf r}, t)$, in a sample after it is photoconverted with the speckle pattern shown in [**a**]{}-[**c**]{}. In [**d**]{}-[**f**]{} we show the unconverted protein densities after the sample is photoconverted for $t=10/q$. Not only do the fluorescent spots located at ${\bf r} = \alpha, \beta, \gamma$ move transversely and leave the axial cross-section in [**e**]{} & [**f**]{} before propagating to $\Delta z=R_{l}$, but other fluorescent spots also enter and exit the axial cross-sections, demonstrating that delta speckles can be used to obtain 3D super-resolution images. ![\[Spot\] [**Three-dimensional photoconversion with delta speckles.**]{} The calculated spatial distribution of a delta speckle pattern’s intensity on a transverse plane at $z= 0$, $I(x,y,\Delta z = 0)$, is shown in [**a**]{} along with the intensity distribution over two axial cross-sections, $I(x=x_{0},y,\Delta z )$ in [**b**]{} and $I(x,y=y_{0},\Delta z )$ in [**c**]{}. A photoconversion simulation using the speckles in [**a**]{}-[**c**]{} gives the corresponding distribution of the unconverted protein density in a uniform sample, $\rho({\bf r}, t)$, in [**d**]{}-[**f**]{}.](SimSlicePC.pdf){width="\textwidth"}
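As a closing note for this supplement, the axial intensity correlation function $C_I(\Delta z)$ defined in the previous section can be evaluated from any stack of transverse intensity patterns $I(x, y, \Delta z)$. The sketch below is a minimal implementation of $C_I(\Delta z)$ and of its FWHM; the random stack at the end is only a placeholder for numerically propagated speckle intensities, and the FWHM routine assumes a monotonically decaying curve, which is a simplification.

```python
import numpy as np

def axial_correlation(stack):
    """C_I(dz) between the z0 = 0 plane and each plane of a stack I[z, y, x],
    averaging over the transverse coordinates, as in the definition above."""
    dI = stack - stack.mean(axis=(1, 2), keepdims=True)   # intensity fluctuations
    ref = dI[0]
    num = (ref * dI).mean(axis=(1, 2))
    den = np.sqrt((ref ** 2).mean()) * np.sqrt((dI ** 2).mean(axis=(1, 2)))
    return num / den

def fwhm(z, c):
    """Full width at half maximum of a correlation curve with c(0) = 1,
    assuming c decays monotonically over the sampled range."""
    below = np.where(c < 0.5)[0]
    return 2 * z[below[0]] if below.size else np.inf

# Placeholder stack; in practice this would hold the propagated speckle patterns.
rng = np.random.default_rng(1)
stack = rng.random((50, 64, 64))
z = np.linspace(0.0, 1.0, 50)   # axial positions, e.g. in units of the Rayleigh range
print("axial decorrelation length:", fwhm(z, axial_correlation(stack)))
```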
--- abstract: 'Consider $m \in \mathbb{N}$ and $\beta \in (1, m + 1]$. Assume that $a\in \mathbb{R}$ can be represented in base $\beta$ using a development in series $a = \sum^{\infty}_{n = 1}x(n)\beta^{-n}$ where the sequence $x = (x(n))_{n \in \mathbb{N}}$ takes values in the alphabet $\mathcal{A}_m := \{0, \ldots, m\}$. The above expression is called the $\beta$-expansion of $a$ and it is not necessarily unique. We are interested in sequences $x = (x(n))_{n \in \mathbb{N}} \in \mathcal{A}_m^\mathbb{N}$ which are associated to all possible values $a$ which have a unique expansion. We denote the set of such $x$ (with some more technical restrictions) by $X_{m,\beta} \subset\mathcal{A}_m^\mathbb{N}$. The space $X_{m, \beta}$ is called the symmetric $\beta$-shift associated to the pair $(m, \beta)$. It is invariant under the shift map but in general it is not a subshift of finite type. Given a Hölder continuous potential $A:X_{m, \beta} \to\mathbb{R}$, we consider the Ruelle operator $\mathcal{L}_A$ and we show the existence of a positive eigenfunction $\psi_A$ and an eigenmeasure $\rho_A$ for some values of $m$ and $\beta$. We also consider a variational principle of pressure. Moreover, we prove that the family of entropies $(h(\mu_{tA}))_{t>0}$ converges, when $t \to\infty$, to the maximal value among the set of all possible values of entropy of all $A$-maximizing probabilities.' author: - 'Artur O. Lopes[^1] and Victor Vargas[^2]' title: 'The Ruelle operator for symmetric $\beta$-shifts' --- [ $\beta$-expansions, equilibrium states, Gibbs states, Ruelle operator, symmetric $\beta$-shifts.]{} [ 11A63, 28Dxx, 37A35, 37D35.]{} Introduction ============ Statistical Mechanics and Thermodynamic Formalism are branches of mathematics which are interested in the study of properties of systems of particles defined on lattices, whose interactions are defined from potentials taking real values. Usually it is assumed that the potential is at least continuous. One of the main topics of interest is the study of Gibbs states and equilibrium states. There are several papers concerning the existence and uniqueness of equilibrium probabilities in the case where the potential is Hölder continuous. In most of these works the important ergodic properties are derived from properties of the eigenprobabilities and eigenfunctions of the Ruelle operator associated to $A$. The seminal work using this approach was presented by Ruelle in [@MR0234697] as an instrument for the study of thermodynamic properties of systems defined on one-dimensional lattices. From this paper several important results were derived. Bowen, Sinai, Parry and Pollicott made important contributions for compact finite type subshifts (see [@MR1085356] for a nice presentation of the theory) and they also obtained interesting results in number theory. In more general cases, such as the case of symbolic spaces with a countable number of spins, Mauldin and Urbański in [@MR1853808] and Sarig in [@MR1738951], made some very important contributions in the noncompact countable setting (see also [@BG], [@BF] and [@MR3864383]). On the other hand, in [@MR3377291], [@MR3538412] and [@Lea] advances are presented regarding Thermodynamic Formalism and problems of selection at zero temperature in the classical $XY$-model (the case where the set of spins is not countable) in a compact setting.
Consider $m \in \mathbb{N}$ and $\beta \in (1, m + 1]$. We are interested in representing a real number $a$ in base $\beta$ using a development in series of the form $a = \sum^{\infty}_{n = 1}x(n)\beta^{-n}$, where the sequence $x = (x(n))_{n \in \mathbb{N}}$ takes values in the alphabet $\mathcal{A}_m := \{0, \ldots, m\}$. The above expression is called the [**$\beta$-expansion of $a$**]{}, which is not necessarily unique (see [@MR1153488]). From now on, we denote by $\mathcal{U}_m$ the set of real numbers $\beta$ belonging to the interval $(1, m + 1]$ for which the number $1$ has a unique $\beta$-expansion - this set is known as the [**set of univoque bases**]{}. We are going to use the notation $\mathcal{U}_{m, \beta}$ for the set of numbers in the interval $\left[0,\frac{m}{\beta - 1}\right]$ that have a [**unique $\beta$-expansion**]{}, with $\beta \in (1, m + 1]$. In general terms, we are interested here, for a fixed value of $\beta\in \mathcal{U}_m$, in the strings $x = (x(n))_{n \in \mathbb{N}} \in \mathcal{A}_m^\mathbb{N}$ which are obtained from values $a$ which have a unique $\beta$-expansion (we will be more precise in a moment). We will consider here the action of the shift on the set of such strings $x$ and the corresponding ergodic properties of shift invariant probabilities. The dynamical systems which we consider here are widely known in the literature as the symmetric $\beta$-shifts. They were introduced by Sidorov in [@MR1851269] as a generalization of the classical $\beta$-shifts (see [@MR2180243] and [@MR0466492]) for the case $\beta \in (1, 2)$. The $\beta$-expansion of real numbers is one of the main topics of interest in number theory (classical contributions were presented by Erdös, Parry and Rényi in [@MR1078082], [@MR0142719], and [@MR0097374]). In [@MR3223814] the topological properties of symmetric $\beta$-shifts when $\beta \in (1, 2)$ are studied, characterizing these subshifts through the behavior of their $\beta$-expansions. In a more recent work [@MR3896110], Alcaraz et al. extended these results to a more general context, where $\beta \in (1, m + 1]$ for an arbitrary natural number $m$. Our aim here is to understand the Thermodynamic Formalism for this model. A special analysis (see section \[preliminaries-section\]) will be required in order to show that the Ruelle operator is well defined (we have to take care of the preimages of a given point $x$ on the shift space in order to get a local homeomorphism). More precisely, we are interested in the study of properties of the Ruelle operator defined in the context of symmetric $\beta$-shifts: existence and uniqueness of Gibbs states associated to a Hölder continuous potential. First we will present the definition of the shift space (it is not a shift of finite type) which will be the main focus of our paper. Given $m$, the [**generalized golden ratio**]{} is defined as $$\mathcal{G}(m) = \begin{cases} k + 1 &, m = 2k \\ \frac{k + 1 + \sqrt{k^2 + 6k + 5}}{2} &, m = 2k + 1 \end{cases}$$ This number satisfies $\mathcal{U}_{m, \beta} \neq \emptyset$ for any $\beta \in (\mathcal{G}(m), m + 1]$ and $\mathcal{U}_{m, \beta} = \emptyset$ for each $\beta \in (1, \mathcal{G}(m))$ (see for instance [@MR3653101]). We will define the symmetric $\beta$-shifts from the sets $\mathcal{U}_m$ and $\mathcal{U}_{m, \beta}$ (with some extra conditions) for values $\beta \in (\mathcal{G}(m),m+1]$.
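Before fixing notation, it may help to see how a digit string in $\mathcal{A}_m$ is produced for a given pair $(a, \beta)$. The sketch below is an illustrative digit-by-digit construction in the spirit of the greedy algorithm discussed later, not code from the references; the function names and the truncation to finitely many digits are our own choices, and exact ties (as for $\beta$ equal to the golden ratio and $a = 1$) are sensitive to floating-point rounding.

```python
import math

def greedy_digits(a, beta, m, n_digits):
    """Digit-by-digit (greedy-type) expansion of a in base beta over {0,...,m}:
    at each step take the largest digit d with d <= beta * r, capped at m."""
    assert 1 < beta <= m + 1 and 0 <= a <= m / (beta - 1)
    digits, r = [], a
    for _ in range(n_digits):
        d = min(m, math.floor(beta * r))
        digits.append(d)
        r = beta * r - d          # remainder stays in [0, m/(beta-1)]
    return digits

def evaluate(digits, beta):
    """Partial sum sum_n x(n) * beta**(-n) of a digit string."""
    return sum(d * beta ** -(n + 1) for n, d in enumerate(digits))

beta, m = (1 + 5 ** 0.5) / 2, 1
x = greedy_digits(1.0, beta, m, 20)
print(x, abs(1.0 - evaluate(x, beta)))
```

Evaluating the truncated expansion back through `evaluate` recovers $a$ up to an error of order $\beta^{-n}$, which is the elementary fact behind the map $\pi_{m, \beta}$ defined above.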
This setting was introduced in [@MR3570134] in a work mainly interested in the study of topological properties of univoque sets. Set $X_m := (\mathcal{A}_m)^{\mathbb{N}}$ equipped with the usual lexicographic order $\prec$, which is defined as $x \prec y$, if and only if, there exists $n \in \mathbb{N}$, such that, $x(j) = y(j)$, for all $j < n$ and $x(n) < y(n)$. Set $\sigma: X_m \to X_m$ the shift map defined by $\sigma((x(n))_{n \in \mathbb{N}}) = (x(n+1))_{n \in \mathbb{N}}$. Henceforth, we will denote by $\overline{x} = (\overline{x(n)})_{n \in \mathbb{N}}$ its corresponding reflection, that is, $\overline{x(n)} = m - x(n)$, for each $n \in \mathbb{N}$. Besides that, for any finite word $\omega = \omega(1) \ldots \omega(l)$ we will define its reflection as $\overline{\omega} = (m - \omega(1)) \ldots (m - \omega(l))$. We will also use the following notation: $\omega^+ = \omega(1) \ldots (\omega(l) + 1)$, $\omega^- = \omega(1) \ldots (\omega(l) - 1)$ and, finally, $\omega^{\infty}$, for the periodic sequence $(x(n))_{n \in \mathbb{N}}$ satisfying $x(kl + i) = \omega(i)$, for each $k \in \mathbb{N} \cup \{0\}$ and any $i \in \{1, \ldots, l\}$. Note that the definition of $\omega^+$ only makes sense in the case that $\omega(l) \neq m$, and the definition of $\omega^-$ only makes sense in the case that $\omega(l) \neq 0$. We will say that a sequence $x$ is infinite if it does not have a tail of the form $0^{\infty}$; otherwise, we will say that the sequence $x$ is finite. We call the [**greedy (resp. lazy) $\beta$-expansion**]{} of a number $a \in \left[0, \frac{m}{\beta - 1}\right]$ the largest (resp. smallest) sequence, with respect to the lexicographic order, in the set of all possible $\beta$-expansions of $a$; it can be either a finite or an infinite sequence. Observe that the lazy and greedy sequences agree, if and only if, the real number $a$ has a unique $\beta$-expansion. We call the [**quasi-greedy (resp. quasi-lazy) $\beta$-expansion**]{} of a number $a \in \left[0, \frac{m}{\beta - 1}\right]$ the largest (resp. smallest) infinite sequence, with respect to the lexicographic order, in the set of all possible $\beta$-expansions of $a$. Note that when the greedy (resp. lazy) $\beta$-expansion of a number $a$ is infinite, it agrees with the quasi-greedy (resp. quasi-lazy) $\beta$-expansion of the number $a$. A typical example of the above definitions is the following. Taking $\beta = \frac{1 + \sqrt{5}}{2}$, we have that the lazy $\beta$-expansion of $1$ is $x = 01^{\infty}$ and the greedy $\beta$-expansion of $1$ is $x = 110^{\infty}$. Furthermore, in this example the quasi-lazy $\beta$-expansion of $1$ coincides with the lazy $\beta$-expansion of $1$ because it is infinite, and the quasi-greedy $\beta$-expansion of $1$ is $x = (10)^{\infty}$. Let $x^{m, \beta}$ be the quasi-greedy $\beta$-expansion of $1$. From the greedy algorithm, it is easy to verify that $\overline{x^{m, \beta}}$ is the quasi-lazy $\beta$-expansion of $\frac{m}{\beta - 1} - 1$ when $\beta \in (\frac{m}{2} + 1, m + 1]$. Fixing $\beta \in (\mathcal{G}(m), m + 1]$, we define the set $\mathcal{W}_{m, \beta} = \mathcal{U}_{m, \beta} \cap \left(\frac{m}{\beta - 1} - 1, 1\right)$, and $\pi_{m, \beta} : X_m \to \left[0, \frac{m}{\beta - 1}\right]$ as the map assigning the real number $\sum^{\infty}_{n = 1}x(n)\beta^{-n}$ to each sequence $(x(n))_{n \in \mathbb{N}} \in X_m$.
It is also easy to verify that $$\pi_{m, \beta}^{-1}(\mathcal{W}_{m, \beta}) = \{x \in X_m : \overline{x^{m, \beta}} \prec \sigma^k x \prec x^{m, \beta}, \forall k \in \mathbb{N} \cup \{0\}\} \,.$$ From the above, we can define the [**symmetric $\beta$-shift**]{} associated to the pair $(m, \beta)$ as the $\sigma$-invariant set $$X_{m, \beta} = \{x \in X_m : \overline{x^{m, \beta}} \preceq \sigma^k x \preceq x^{m, \beta}, \forall k \in \mathbb{N} \cup \{0\}\} \,. \label{symmetric-beta-shift}$$ A more detailed analysis of these dynamical systems can be found in [@MR3223814; @MR3570134]. Throughout this paper we are going to use the following metric on $X_{m, \beta}$ $$d(x, y) = 2^{-\min\{n \in\mathbb{N} : x(n) \neq y(n)\} + 1} \,.$$ It is easy to check that for any $m \in \mathbb{N}$ and each $\beta \in (\mathcal{G}(m), m + 1]$ the metric space $(X_{m, \beta}, d)$ is a bounded metric space. Moreover, in [@MR3570134] it was proved that $X_{m, \beta}$ is a compact $\sigma$-invariant subset of $X_m$. From now on, we will denote by $\sigma = \sigma|_{X_{m, \beta}}$. Observe that $(X_{m, \beta}, \sigma)$ is a compact subshift. In [@MR3896110] the set of irreducible sequences was defined, that is, the set of sequences $(x(n))_{n \in \mathbb{N}} \in X_{m, \beta}$ satisfying $$x(1) \ldots x(j) (\overline{x(1) \ldots x(j)}^+)^{\infty} \prec (x(n))_{n \in \mathbb{N}}, \forall j \in \mathbb{N} \,. \label{irreducible-sequence}$$ Besides that, we define $\beta_T$ as the unique number belonging to the interval $(1, m + 1]$ such that the quasi-greedy $\beta_T$-expansion of $1$ is given by $$x^{m, \beta_T} := \begin{cases} (k + 1)k^{\infty}, & m = 2k \\ (k + 1)((k + 1)k)^{\infty}, & m = 2k + 1 \end{cases} \nonumber \label{transitive-base}$$ One of the main results in [@MR3896110] claims that for any $\beta \in (\mathcal{G}(m), m + 1] \cap \overline{\mathcal{U}_m}$, the symmetric $\beta$-shift $X_{m, \beta}$ is a topologically transitive subshift, if and only if, the quasi-greedy $\beta$-expansion of 1 is an irreducible sequence, or $\beta = \beta_T$. The so-called [**transitivity condition**]{} will be necessary to guarantee the existence of a strictly positive eigenfunction and an eigenmeasure associated to the Ruelle operator. Henceforth, we are going to assume that either $\beta \in (\mathcal{G}(m), m + 1] \cap \overline{\mathcal{U}_m}$, with $x^{m, \beta}$ an irreducible sequence, or, $\beta = \beta_T$. We denote by $\mathcal{M}_{\sigma}(X_{m, \beta})$ the set of [**$\sigma$-invariant probabilities**]{} for the shift acting on $ X_{m, \beta}$, that is, the set of probability measures satisfying $\mu(E) = \mu(\sigma^{-1}(E))$ for any Borelian set $E \subset X_{m, \beta}$. Our first result provides conditions to guarantee the existence of Gibbs states for Hölder potentials defined on the symmetric $\beta$-shift $X_{m, \beta}$ associated to certain values of $m \in \mathbb{N}$ and $\beta \in (\mathcal{G}(m), m + 1]$. The statement of the main result is the following: Consider $X_{m, \beta}$ a symmetric $\beta$-shift satisfying the transitivity condition. Let $A : X_{m, \beta} \to \mathbb{R}$ be a Hölder continuous potential. There exists a class of possible values of $m$ and $\beta$, such that, there exists $\lambda_A > 0$ and $\psi_A : X_{m, \beta} \to \mathbb{R}$, a strictly positive Hölder continuous function, such that, $\mathcal{L}_A(\psi_A) = \lambda_A\psi_A$. The eigenvalue $\lambda_A$ is simple and is the maximal possible eigenvalue.
Moreover, there exists a unique Radon probability measure $\rho_A$, defined on the Borelian sets of $X_{m, \beta}$, such that, $\mathcal{L}^*_A(\rho_A) = \lambda_A\rho_A$. The invariant probability measure $\mu_A = \psi_A d\rho_A$ is the unique fixed point of $\mathcal{L}^*_{\overline{A}}$, where $\overline{A}$ is the normalization of $A$. Furthermore, for any Hölder continuous potential $\psi : X_{m, \beta} \to \mathbb{R}$, it is satisfied the following uniform limit $$\lim_{n \in \mathbb{N}}\lambda^{-n}_A\mathcal{L}^n_A(\psi) = \psi_A \int_{X_{m, \beta}} \psi d\rho_A \,.$$ We will present explicit conditions for values of $m$ and $\beta$ such that all the claims are valid. For instance, for $m > 2$ and $\beta \in [\frac{m}{2} + 2, m + 1]$. \[theorem1\] Theorem \[theorem1\] assures that Ruelle’s Perron-Frobenius Theorem can be proved for some parameters $m$ and $\beta$ of the symmetric $\beta$-shift, which implies that given a Hölder continuous function $A$, the value $\lambda_A$, the eigenfunction $\psi_A$ and te eigen-probability $\rho_A$ are well defined. The determination of these parameters is the main result of this paper. The proofs of the other results we presented here are in some way analogous to the proof of other known results (but there are some technical differences). Remember that $\mathcal{M}_{\sigma}(X_{m, \beta})$ denotes the set of invariant probabilities for the shift acting on $ X_{m, \beta}.$ We define a suitable definition of entropy on section \[pre\] and we show a variational principle of pressure. In Proposition \[variational-principle\] we will show that the probability $\psi_A\, \rho_A$ maximizes pressure. Hereafter, we will use the notation $m(A) = \sup_{\mu \in \mathcal{M}_{\sigma}(X_{m, \beta})} \left\{\int_{X_{m, \beta}} A d\mu \right\}$, and we will denote by $\mathcal{M}_{\max}(A)$ the set of [**$A$-maximizing probability measures**]{}, that is, the set of $\sigma$-invariant probability measures that attain $m(A)$. It is easy to check that $\mathcal{M}_{\max}(A)$ is a compact non-empty set. Theorem \[theorem1\] and the variational principle described in the Proposition \[variational-principle\] will ensure the existence of a family of equilibrium states $(\mu_{t\,A})_{t>0}$ depending of the real parameter $t$ (and a suitable expression for the family of entropies $(h(\mu_{t\,A}))_{t>0}$). Moreover, it follows from the compactness of set of shift-invariant probabilities on $X_{m, \beta}$ that the family $(\mu_{t\,A})_{t>0}$ has accumulation points, when $t \to\infty$. These accumulation points are sometimes called ground states. The parameter $t$ is usually identified with the inverse of the temperature for the system of particles on the lattice under the action of the potential $A$. The limit of $\mu_{t\,A}$, when $t\to \infty$, is called the [**limit at zero temperature**]{}. It is known that the ground state probabilities are maximizing probabilities for the potential $A$ (see [@BLL], [@MR3377291] or [@Lele]). One interesting question is what can be said about the entropy of ground states in terms of the entropies of equilibrium states, when temperature goes to zero, that is, the values $(h(\mu_{t\,A}))_{t>0}$, when $t \to \infty$ . One of the first works in this direction was due to Contreras et al. in [@MR1855838]. They show some properties of the limit of this family of entropies at zero temperature for potentials of class $\mathcal{C}^{1 + \alpha}$ defined on $S^1$. 
In the non-compact case, Morris proved in [@MR2295238] the existence of the zero temperature limit (see also [@MR2151222]). All these results were extended recently in [@MR3864383] by Freire and Vargas beyond the finitely primitive case. Although these type of problems have been widely studied in finite type subshifts in both, compact the non-compact setting, they have not been studied in a non-Markovian setting yet. Our second result guarantees the existence of the zero temperature limit for entropies in the symmetric $\beta$-shifts model. The statement of the result is as follows: Consider $m > 2$ and $\beta \in [\frac{m}{2} + 2, m + 1]$, such that, $X_{m, \beta}$ satisfies the transitivity condition. Let $A : X_{m, \beta} \to \mathbb{R}$ be a Hölder continuous potential. Then, the family $(h(\mu_{tA}))_{t > 0}$ is continuous at infinity, and $$\lim_{t \to \infty} h(\mu_{tA}) = \max_{\mu \in \mathcal{M}_{max}(A)} h(\mu) \,.$$ \[theorem3\] The paper is organized as follows. In section \[preliminaries-section\] we present some preliminaries and we introduce the Ruelle operator on symmetric $\beta$-shifts. We prove that it is well defined and preserves the set of Hölder continuous functions. At the end of the section we introduce some more notation that will be used through the paper. In the Appendix \[RPF-theorem-section\] appears the proof of the Theorem \[theorem1\] (which is similar to the one in [@MR1085356]). In section \[pre\] we present a suitable definition of entropy. We also consider a variational principle of pressure. We use the results above to show that the Gibbs state found in the Ruelle-Perron-Frobenius Theorem is an equilibrium state. Finally, in section \[zero-temperature-limit-section\] we present the proof of Theorem \[theorem3\]. The Ruelle operator - existence of eigenfunctions and eigenprobabilities {#preliminaries-section} ======================================================================== In this section we present the definition of the Ruelle operator and we show that it is well defined for certain values of the parameters $m$ and $\beta$. By the characterization of symmetric $\beta$-shifts that appears in (\[symmetric-beta-shift\]) we get that for any pair of numbers $m \in \mathbb{N}$ and $\beta \in (1, m + 1]$, it is true that $X_{m, \beta} = X_{\mathcal{F}_{m, \beta}}$, with $\mathcal{F}_{m, \beta}$ the collection of forbidden words of the shift $X_{m, \beta}$ (See [@MR1369092]), given by $$\mathcal{F}_{m, \beta} = \bigcup_{n \in \mathbb{N}} \bigl(\mathcal{F}_{m, \beta}(n) \cup \overline{\mathcal{F}_{m, \beta}}(n)\bigr) \,,$$ with $\mathcal{F}_{m, \beta}(n) = \{\omega = \omega(1) \ldots \omega(n) : x^{m, \beta}(1) \ldots x^{m, \beta}(n) \prec \omega(1) \ldots \omega(n)\}$, and $\overline{\mathcal{F}_{m, \beta}}(n) = \{\omega = \omega(1) \ldots \omega(n) : \omega(1) \ldots \omega(n) \prec \overline{x^{m, \beta}(1) \ldots x^{m, \beta}(n)}\}$. From the characterization of symmetric $\beta$-shifts in (\[symmetric-beta-shift\]), we can define the cylinder associated to the word $\omega = \omega(1) \ldots \omega(l)$, as the set $$[\omega] = \{x \in X_{m, \beta} : x(1) = \omega(1), \ldots, x(l) = \omega(l)\} \,.$$ Observe that $[\omega] \neq \emptyset$, if and only if, $\overline{x^{m, \beta}(i)} \preceq \omega(i + k) \preceq x^{m, \beta}(i)$, for all $i \in \{1, \ldots, l\}$ and each $1 \leq k \leq l - i$. 
Moreover, the topology generated by cylinders coincides with the product topology on the set $X_{m, \beta}$ and $\mathcal{P} = \{[0], \ldots, [m]\}$ is a generating partition of the Borel sigma algebra. It is easy to check that $X_{m, \beta}$ is a completely disconnected set. Indeed, if $U$ is a non-empty connected open set satisfying $U \subset X_{m, \beta}$ and $U \neq \{x\}$, we can choose $x \in U$ and $\epsilon > 0$, such that, $B(x, \epsilon) \subset U$. Therefore, for each $y \in B(x, \epsilon)$, it is satisfied $\overline{x^{m, \beta}} \preceq \sigma^k y \preceq x^{m, \beta}$, for all $k \in \mathbb{N} \cup \{0\}$, in other words, $\sigma^k(B(x, \epsilon)) \subset X_{m, \beta}$, for all $k \in \mathbb{N} \cup \{0\}$, which is a contradiction. This is so, because for each $y_0 \in B(x, \epsilon)) \setminus \{x\}$, there exists $k_0 \in \mathbb{N}$, such that, $d(\sigma^{k_0}(x), \sigma^{k_0}(y_0)) = 2^{k_0}\epsilon > d(\overline{x^{m, \beta}}, x^{m, \beta})$. We will be interested only in the case of symmetric $\beta$-shifts $X_{m, \beta}$, with $m \in \mathbb{N}$ and values of $\beta \in (\mathcal{G}(m), m + 1] \cap \overline{\mathcal{U}_m}$. Hereafter, we will denote by $\mathcal{H}_{\alpha}(X_{m, \beta})$ the set of [**Hölder continuous functions**]{} from $X_{m, \beta}$ into $\mathbb{R}$ with coefficient $\alpha$, i.e. the set of functions $\psi : X_{m, \beta} \to \mathbb{R}$ satisfying for some $K \geq 0$ and all $x, y \in X_{m, \beta}$ the following inequality $$|\psi(x) - \psi(y)| \leq Kd(x, y)^{\alpha} \,. \label{Holder}$$ Besides that, for any $\psi \in \mathcal{H}_{\alpha}(X_{m, \beta})$ we will use the notation $\mathrm{Hol}_{\psi}$ for the Hölder constant of $\psi$, which is defined by $\mathrm{Hol}_{\psi} = \sup_{x \neq y} \frac{|\psi(x) - \psi(y)|}{d(x, y)^{\alpha}}$. Thus, given $\psi \in \mathcal{H}_{\alpha}(X_{m, \beta})$, its norm is defined as $\|\psi\|_{\alpha} = \|\psi\|_{\infty} + \mathrm{Hol}_{\psi}$. It is simple to check that $(\mathcal{H}_{\alpha}(X_{m, \beta}), \|\cdot\|)$ is a Banach space. We will denote by $\mathcal{C}(X_{m, \beta})$ the set of continuous functions from $X_{m, \beta}$ into $\mathbb{R}$. Taking a potential $A \in \mathcal{C}(X_{m, \beta})$, we define the [**Ruelle operator associated to $A$**]{} as the function that assigns to each continuous function $\varphi$, the function $$\mathcal{L}_A(\varphi)(x) := \sum_{\sigma(y) = x} e^{A(y)}\varphi(y) \,. \label{Ruelle-operator}$$ In the following Lemma we will provide values for $m$ and $\beta$, such that, $\mathcal{L}_{A}(\varphi)$ is well defined on the set $X_{m, \beta}$. \[bigd\] Consider $m > 2$ and $\beta \in \left[\frac{m}{2} + 2, m + 1\right]$. Then, any $x \in X_{m, \beta}$ is such that $\sigma^{-1}(\{x\}) \neq \emptyset$. Moreover, in this case $\#(\sigma^{-1}(\{x\})) \geq 2$. We use the notation $ax$ for the sequence $(a, x(1), x(2), \ldots) \in \mathcal{A}_m^{\mathbb{N}}$. From the above, we define for each $x \in X_{m, \beta}$ the set $\mathcal{A}_m(x)$ as $$\begin{aligned} \mathcal{A}_m(x) &:= \{a \in \mathcal{A}_m: ax \in X_{m, \beta}\} \nonumber \\ &= \{a \in \mathcal{A}_m: ax(1) \ldots x(n) \notin \mathcal{F}_{m, \beta}, \, \forall n \in \mathbb{N} \} \nonumber \,.\end{aligned}$$ Moreover, each $a \in \mathcal{A}_m(x)$ satisfies the following conditions: 1. $\frac{m}{\beta - 1} - 1 < \frac{a}{\beta} < 1$; 2. $\frac{m}{\beta - 1} - 1 < \frac{a}{\beta} + \frac{1}{\beta}\sum_{k=1}^n x(k)\beta^{-k} < 1$ for each $n \in \mathbb{N}$. 
Note that by the above definition, it follows immediately that $$\#(\sigma^{-1}(\{x\})) = \#(\mathcal{A}_m(x)) \,.$$ Besides that, each point $x \in X_{m, \beta}$ satisfy the following inequalities $$\frac{m}{\beta - 1} - 1 < \sum_{k = 1}^n x(k)\beta^{-k} < 1 , \, \forall n \in \mathbb{N} \,. \label{inequality-shift}$$ Now, we want to demonstrate that under the hypothesis of this Lemma, it is satisfied $$\left(\frac{\beta}{\beta - 1}(m - \beta + 1), \beta - 1\right) \cap \mathbb{N} \subset \mathcal{A}_m(x) \,. \label{interval}$$ Indeed, taking $$a \in \left(\frac{\beta}{\beta - 1}(m - \beta + 1), \beta - 1\right) \cap \mathbb{N} \,,$$ by the right side in (\[inequality-shift\]), we obtain that for all $n \in \mathbb{N}$ it is satisfied $$\frac{a}{\beta} + \frac{1}{\beta}\sum_{k = 1}^n x(k)\beta^{-k} < \frac{a}{\beta} + \frac{1}{\beta} < \frac{\beta - 1}{\beta} + \frac{1}{\beta} = 1 \,$$ On other hand, by the left side in (\[inequality-shift\]), for all $n \in \mathbb{N}$, we have $$\begin{aligned} \frac{a}{\beta} + \frac{1}{\beta}\sum_{k = 1}^n x(k)\beta^{-k} &> \frac{m - \beta + 1}{\beta - 1} + \frac{m}{\beta(\beta - 1)} - \frac{1}{\beta} \nonumber \\ &> \frac{m - \beta + 1}{\beta} + \frac{m}{\beta(\beta - 1)} - \frac{1}{\beta} \nonumber \\ &= \frac{(m - \beta + 1)(\beta - 1) + m - (\beta - 1)}{\beta(\beta - 1)} \nonumber \\ &= \frac{m\beta - \beta^2 + \beta - m + \beta -1 + m - \beta + 1}{\beta(\beta - 1)} \nonumber \\ &= \frac{m - \beta + 1}{\beta - 1} \nonumber \\ &=\frac{m}{\beta - 1} - 1 \nonumber \,.\end{aligned}$$ Therefore, $$\frac{m}{\beta - 1} - 1 < \frac{a}{\beta} + \frac{1}{\beta}\sum_{k=1}^n x(k)\beta^{-k} < 1, \, \forall n \in \mathbb{N} \,.$$ Besides that, as $a < \beta - 1$, we get $$\frac{a}{\beta} < \frac{\beta - 1}{\beta} < 1 \,,$$ and using the fact that $a > \frac{\beta}{\beta - 1}(m - \beta + 1)$, it follows that $$\frac{a}{\beta} > \frac{m - \beta + 1}{\beta - 1} = \frac{m}{\beta - 1} - 1 \,.$$ That is, $$\frac{m}{\beta - 1} - 1 < \frac{a}{\beta} < 1 \,.$$ By the above and (\[preimages\]), it follows that $a \in \mathcal{A}_m(x)$, which proves (\[interval\]). Moreover, taking $\beta = \frac{m}{2} + 2$, it follows that $$\left(\frac{\beta}{\beta - 1}(m - \beta + 1), \beta - 1\right) = \left(\frac{m^2/4 + m/2 - 2}{m/2 + 1}, \frac{m}{2} + 1\right) \,,$$ and for all $m > 2$ it is satisfied $$\#\bigl(\left(\frac{m^2/4 + m/2 - 2}{m/2 + 1}, \frac{m}{2} + 1\right) \cap \mathbb{N} \bigr) \geq 2\,.$$ In addition, we have $\left(\frac{m^2/4 + m/2 - 2}{m/2 + 1}, \frac{m}{2} + 1\right) \subseteq \left(\frac{\beta}{\beta - 1}(m - \beta + 1), \beta - 1\right)$, for all $\beta \in \left[\frac{m}{2} + 2, m + 1\right]$. Therefore, it follows that $\#(\sigma^{-1}(\{x\})) = \#(\mathcal{A}_m(x)) \geq 2$, for all $\beta \in \left[\frac{m}{2} + 2, m + 1\right]$ In the next Lemma we will check that the Ruelle operator is a local homeomorphism. Moreover, we will show in the next Lemma that the Ruelle operator preserves the set of Hölder continuous potentials. Consider $x \in X_{m, \beta}$ such that $\mathcal{A}_m(x) \neq \emptyset$. Then, for any point $x' \in X_{m, \beta}$ which is close enough to $x$, we get $\mathcal{A}_m(x) = \mathcal{A}_m(x')$. 
For a fixed $x \in X_{m, \beta}$ and $a \in \mathcal{A}_m(x)$, it is easy to verify that $$\frac{\beta m}{\beta - 1} - \beta - a \leq \sum_{n \in \mathbb{N}}x(n)\beta^{-n} \leq \beta - a \,.$$ The analysis of the above inequalities can be decomposed in the analysis of the following cases: *Case 1:* $$\frac{\beta m}{\beta - 1} - \beta - a < \sum_{n \in \mathbb{N}}x(n)\beta^{-n} < \beta - a \,.$$ Take $N \in \mathbb{N}$ large enough, such that, $$m \sum_{n > N} \beta^{-n} < \min \left\{\beta - a - \sum_{n \in \mathbb{N}}x(n)\beta^{-n}, \sum_{n \in \mathbb{N}}x(n)\beta^{-n} - \frac{\beta m}{\beta - 1} + \beta + a \right\} \,.$$ It follows that for any $x' \in X_{m, \beta}$ with $d(x, x') < 2^{-N}$, it is satisfied $$\frac{\beta m}{\beta - 1} - \beta - a < -m \sum_{n > N} \beta^{-n} + \sum_{n \in \mathbb{N}}x(n)\beta^{-n} \leq \sum_{n \in \mathbb{N}}x'(n)\beta^{-n} \,,$$ and $$\sum_{n \in \mathbb{N}}x'(n)\beta^{-n} \leq \sum_{n \in \mathbb{N}}x(n)\beta^{-n} + m \sum_{n > N} \beta^{-n} < \beta - a \,.$$ The reasoning above implies $ax' \in X_{m, \beta}$, which is equivalent to $a \in \mathcal{A}_m(x')$. *Case 2:* $$\sum_{n \in \mathbb{N}}x(n)\beta^{-n} = \beta - a \,.$$ For this case we choose $N \in \mathbb{N}$ large enough, such that $$m \sum_{n > N} \beta^{-n} < \sum_{n \in \mathbb{N}}x(n)\beta^{-n} - \frac{\beta m}{\beta - 1} + \beta + a \,.$$ Therefore, for any $x' \in X_{m, \beta}$ satisfying $x' \preceq x$ and $d(x, x') < 2^{-N}$, we have $$\frac{\beta m}{\beta - 1} - \beta - a < -m \sum_{n > N} \beta^{-n} + \sum_{n \in \mathbb{N}}x(n)\beta^{-n} \leq \sum_{n \in \mathbb{N}}x'(n)\beta^{-n} \,,$$ and $$\sum_{n \in \mathbb{N}}x'(n)\beta^{-n} \leq \sum_{n \in \mathbb{N}}x(n)\beta^{-n} = \beta - a \,.$$ By the above, $ax' \in X_{m, \beta}$. Then, we can conclude that $a \in \mathcal{A}_m(x')$. *Case 3:* $$\frac{\beta m}{\beta - 1} - \beta - a = \sum_{n \in \mathbb{N}}x(n)\beta^{-n} \,.$$ In this case, we choose $N \in \mathbb{N}$ large enough, such that, $$m \sum_{n > N} \beta^{-n} < \beta - a - \sum_{n \in \mathbb{N}}x(n)\beta^{-n} \,.$$ Thus, for any $x' \in X_{m, \beta}$ satisfying $x \preceq x'$ and $d(x, x') < 2^{-N}$, we get that $$\frac{\beta m}{\beta - 1} - \beta - a = \sum_{n \in \mathbb{N}}x(n)\beta^{-n} \leq \sum_{n \in \mathbb{N}}x'(n)\beta^{-n} \,,$$ and $$\sum_{n \in \mathbb{N}}x'(n)\beta^{-n} \leq \sum_{n \in \mathbb{N}}x(n)\beta^{-n} + m \sum_{n > N} \beta^{-n} < \beta - a \,.$$ By the above, $ax' \in X_{m, \beta}$. That is, $a \in \mathcal{A}_m(x')$. Note that in all the cases studied above $\mathcal{A}_m(x) = \mathcal{A}_m(x')$, when $x$ and $x'$ are close enough. The foregoing implies that $\mathcal{L}_A(\varphi)$ is a local homeomorphism when $A, \varphi \in \mathcal{C}(X_{m, \beta})$. There are parameters $m$ and $\beta$ (for instance, when $m > 2$ and $\beta \in [\frac{m}{2} + 2, m + 1]$), such that, the shift is transitive and also satisfy the conditions of Lemma \[bigd\]. We assume on the proof of the Ruelle Theorem these conditions. The main conclusion is: if $A, \varphi \in \mathcal{H}_{\alpha}(X_{m, \beta})$ and $x, x' \in X_{m, \beta}$ are close enough, we get that $\mathcal{A}_m(x) = \mathcal{A}_m(x')$. 
It follows that $$\begin{aligned} |\mathcal{L}_{A}(\varphi)(x) - \mathcal{L}_{A}(\varphi)(x')| &\leq \sum_{a \in \mathcal{A}_m(x)}\left|e^{A(ax)}\varphi(ax) - e^{A(ax')}\varphi(ax')\right| \nonumber \\ &\leq \frac{(m + 1)}{2^{\alpha}}\bigl(e^{\|A\|_{\infty}}\mathrm{Hol}_{\varphi} + \|\varphi\|_{\infty}\mathrm{Hol}_{e^{A}}\bigr)d(x, x')^{\alpha} \nonumber \,.\end{aligned}$$ By the above, $\mathcal{L}_{A}(\varphi)$ is locally Hölder continuous. Thus, by compactness of $X_{m, \beta}$, it follows that $\mathcal{L}_{A}(\varphi) \in \mathcal{H}_{\alpha}(X_{m, \beta})$. It follows that for each $n \in \mathbb{N}$, the $n$-th iterate of the Ruelle operator, which is defined by $$\mathcal{L}^n_A(\varphi)(x) = \sum_{\sigma^n(y) = x} e^{S_n A(y)}\varphi(y) \,$$ satisfies the same properties mentioned above, where $S_n A(y) = \sum_{j=0}^{n-1} A(\sigma^j(y))$. The above is an immediate consequence of the fact that $\mathcal{L}^n_A(\varphi) = \mathcal{L}_A\left(\mathcal{L}^{n-1}_A(\varphi)\right)$. Given two Banach spaces $X$ and $Y$ we denote by $l(X, Y)$ the Banach space of linear continuous operators from $X$ into $Y$. We will use the notation $l(X)$ for the Banach space of linear continuous operators from $X$ into itself. Note that $\|\mathcal{L}_A(\varphi)\|_{\alpha} < K < \infty$, for any $\varphi \in \mathcal{H}_{\alpha}(X_{m, \beta})$, with $\|\varphi\|_{\alpha} \leq 1$. Therefore, $\|\mathcal{L}_A\| < \infty$, in other words $\mathcal{L}_A \in l(\mathcal{H}_{\alpha}(X_{m, \beta}))$. Using the properties of the dual space of a Banach space, we can define the dual of the Ruelle operator $\mathcal{L}^*_A$ on the set of Radon measures, as the operator that satisfies for any $\varphi \in \mathcal{C}(X_{m, \beta})$ the following equation $$\int_{X_{m, \beta}}\varphi d\left(\mathcal{L}^*_A(\nu)\right) = \int_{X_{m, \beta}}\mathcal{L}_A(\varphi) d\nu \,.$$ From the above equation, for each $n \in \mathbb{N}$ we can express the $n$-th iterate of the dual Ruelle operator by $$\int_{X_{m, \beta}}\varphi \,\,d\left(\mathcal{L}^{*,n}_A(\nu)\right) = \int_{X_{m, \beta}}\mathcal{L}^n_A(\varphi) d\nu \,.$$ From now on, we will denote by $\mathcal{R}(X_{m, \beta})$ the set of Radon measures on the symmetric $\beta$-shift $X_{m, \beta}$ and we will use the notation $\mathcal{M}(X_{m, \beta})$ for the set of Radon probability measures on $X_{m, \beta}$. Besides that, we will denote by $\mathcal{M}_{\sigma}(X_{m, \beta})$ the set of $\sigma$-invariant Radon probability measures defined on $X_{m, \beta}$. Observe that by Banach-Alaoglu’s Theorem both of the sets, $\mathcal{M}(X_{m, \beta})$ and $\mathcal{M}_{\sigma}(X_{m, \beta})$, are compact subsets of $\mathcal{R}(X_{m, \beta})$. Theorem \[theorem1\] claims that if $A : X_{m, \beta} \to \mathbb{R}$ is a Hölder continuous potential, then, there exists $\lambda_A > 0$, and 1\) a unique Radon probability measure $\rho_A$, defined on the Borelian sets of $X_{m, \beta}$, such that, $\mathcal{L}^*_A(\rho_A) = \lambda_A\rho_A$. 2\) a function $\psi_A : X_{m, \beta} \to \mathbb{R}$ which is a strictly positive Hölder continuous function and such that $\mathcal{L}_A(\psi_A) = \lambda_A\psi_A$. Assuming that $\int \psi_A d \rho_A =1$, the normalized eigenfunction $\psi_A$ is uniquely determined (because the probability $\rho_A$ was [**uniquely**]{} determined). 3\) The probability measure $\mu_A = \psi_A d\rho_A$, where we take the uniquely determined function $\psi_A$ from item 2), is invariant for the shift and also uniquely determined.
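A finite-dimensional caricature of the objects in items 1), 2) and 3) may be helpful. For a potential depending only on the first two coordinates on the *full* shift over $\mathcal{A}_m$ (so deliberately ignoring the admissibility constraints of $X_{m,\beta}$), the Ruelle operator restricted to functions of the first coordinate is just the positive matrix $M_{x_1 a} = e^{A(a\,x_1)}$, and $\lambda_A$, $\psi_A$, $\rho_A$ become its leading eigenvalue and right/left Perron eigenvectors. The sketch below is only meant to illustrate the normalisation $\int \psi_A\, d\rho_A = 1$ and the fixed-point properties, not the construction used in this paper; the potential is a random toy choice.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 3                                   # alphabet {0, ..., m}
A = rng.normal(size=(m + 1, m + 1))     # toy 2-block potential A(a, x1)

# Ruelle operator on functions of the first coordinate:
# (L phi)(x1) = sum_a exp(A(a, x1)) * phi(a), i.e. the matrix M[x1, a].
M = np.exp(A).T

# Power iteration for the leading eigenvalue lambda_A and right eigenvector psi_A.
psi = np.ones(m + 1)
for _ in range(500):
    psi = M @ psi
    lam = np.linalg.norm(psi)
    psi /= lam

# Left eigenvector: rho_A as weights of the one-symbol cylinders.
rho = np.ones(m + 1)
for _ in range(500):
    rho = M.T @ rho
    rho /= np.linalg.norm(rho)

rho /= rho.sum()        # rho_A is a probability measure
psi /= psi @ rho        # normalisation: int psi_A d rho_A = 1

print("lambda_A ~", lam)
print("L psi  = lambda psi :", np.allclose(M @ psi, lam * psi))
print("L* rho = lambda rho :", np.allclose(M.T @ rho, lam * rho))
print("mu_A cylinder weights (psi * rho):", psi * rho, "sum =", (psi * rho).sum())
```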
Hereafter, we will assume that $\rho_A$, $\psi_A $ and $\mu_A$ denote the uniquely determined elements describe by 1) , 2) and 3) (and in this order of determination). In the proof of Theorem \[theorem1\] we use a similar procedure as the one that appears in [@MR1085356] for the case of compact subshifts. One can show that the same reasoning can be applied on our setting (we have to check all details for a specific proof that works in our setting). For a question of completeness we will present the sketch of the proof in the appendix \[RPF-theorem-section\]. The variational principle of pressure {#pre} ===================================== In this section we are going to define the entropy associated to a $\sigma$-invariant probability measure. Furthermore, we are going to show that this definition satisfies a variational principle. Indeed, given $\mu \in \mathcal{M}_{\sigma}(X_{m, \beta})$ we define the entropy of $\mu$ (see [@MR3377291]) as $$h(\mu) = \inf_{u \in \mathcal{C}^+(X_{m, \beta})}\left\{\int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(u)}{u}\right)d\mu\right\} \,. \label{entropy}$$ We assume that the parameters $m$ and $\beta$ satisfy the conditions required in last section. That is, $m > 2$ and $\beta \in [m/2 + 2, m + 1]$. Given the potential $A \in \mathcal{H}_{\alpha}(X_{m, \beta})$, the normalization of $A$ is defined as $$\overline{A} := A + \log(\psi_A) - \log(\psi_A \circ \sigma) - \log(\lambda_A) \,. \label{normalization}$$ The above definition will be used in the proof of the variational principle that appears in Proposition \[variational-principle\] and also in the proof of Theorem \[theorem1\] that appears in section \[RPF-theorem-section\]. If $\mu$ is a fixed point of the Ruelle operator associated to some Hölder continuous potential (see [@MR3377291]) the following Lemma show us that entropy (given by the above definition) can be expressed in an integral form. Consider $A \in \mathcal{H}_{\alpha}(X_{m, \beta})$ and $\mu_A$ the unique fixed point of the dual Ruelle operator $\mathcal{L}^*_{\overline{A}}$. Then, $$h(\mu_A) = -\int_{X_{m, \beta}} \overline{A}d\mu_A \,.$$ \[variational-principle-entropy\] Set $u_0 = e^{\overline{A}}$, so $u_0 \in \mathcal{C}^+(X_{m \beta})$. Since $\mathcal{L}^*_A(\mu_A) = \mu_A$, it follows that $$-\int_{X_{m, \beta}}\overline{A} d\mu_A = \int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_{\overline{A}}(1)}{u_0}\right) d\mu_A = \int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(u_0)}{u_0}\right) d\mu_A \,.$$ On other hand, for any $\widetilde{u} \in \mathcal{C}^+(X_{m \beta})$, we have that $u = \widetilde{u}e^{-\overline{A}} \in \mathcal{C}^+(X_{m \beta})$ and $\mathcal{L}_0(\widetilde{u}) = \mathcal{L}_{\overline{A}}(u)$. Therefore, $$\log\left(\frac{\mathcal{L}_0(\widetilde{u})}{\widetilde{u}}\right) = \log\left(\frac{\mathcal{L}_{\overline{A}}(u)}{u}\right) - \overline{A} \,.$$ From the above, integrating with respect to $\mu_A$, we get that $$\int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(\widetilde{u})}{\widetilde{u}}\right)d\mu_A =\int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_{\overline{A}}(u)}{u}\right)d\mu_A - \int_{X_{m, \beta}} \overline{A} d\mu_A \,.$$ Since $\overline{A}$ is a normalized potential, from the Jensen’s inequality, it follows that $0 \geq \int_{X_{m, \beta}}\log(\mathcal{L}_{\overline{A}}(u))d\mu_A - \int_{X_{m, \beta}}\log(u)d\mu_A$. 
Therefore, $$\int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(\widetilde{u})}{\widetilde{u}}\right)d\mu_A \geq - \int_{X_{m, \beta}} \overline{A} d\mu_A \,.$$ In other words, we have $$- \int_{X_{m, \beta}} \overline{A} d\mu_A = \inf_{u \in \mathcal{C}^+(X_{m, \beta})}\left\{\int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(u)}{u}\right)d\mu_A\right\} = h(\mu_A) \,.$$ The next Proposition shows that the Gibbs state found in Theorem \[theorem1\] satisfies the variational principle. Note that the above implies that any Gibbs state $\mu_A$ associated to some Hölder continuous potential $A$ (defined on the symmetric $\beta$-shift) is in fact an equilibrium state. Given $A \in \mathcal{H}_{\alpha}(X)$ the topological pressure $P(A)$ of the potential $A$ is defined by $$P(A) = \sup_{\mu \in \mathcal{M}_{\sigma}(X_{m, \beta})}\left\{ h(\mu) + \int_{X_{m, \beta}}A d\mu \right\} \,.$$ Then, $P(A)= \log(\lambda_A) $, where $\lambda_A$ is the eigenvalue of the Ruelle operator. The probability which attains the maximal value is $\mu_{\overline{A}} $, where $\overline{A}$ is associated to $A$ via the expression (\[normalization\]). It is also true that $\mu_{\overline{A}}=\mu_A = \psi_A d\rho_A$ (see 3) in the end of section 2). \[variational-principle\] By Lemma \[variational-principle-entropy\] we have $$P(A) = \log(\lambda_A) = h(\mu_A) + \int_{X_{m, \beta}}A d\mu_A \,.$$ Besides that, for any $\mu \in \mathcal{M}_{\sigma}(X_{m, \beta})$, it is satisfied $$\begin{aligned} h(\mu) &= \inf_{u \in \mathcal{C}^+(X_{m, \beta})}\left\{\int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(u)}{u}\right)d\mu\right\} \nonumber \\ &\leq \int_{X_{m, \beta}}\log\left(\frac{\mathcal{L}_0(e^{\overline{A}})}{e^{\overline{A}}}\right)d\mu \nonumber \\ &= -\int_{X_{m, \beta}} \overline{A} d\mu \nonumber \\ &= -\int_{X_{m, \beta}} A d\mu + \log(\lambda_A) \nonumber \,.\end{aligned}$$ Zero Temperature Limits for Entropies {#zero-temperature-limit-section} ===================================== In section \[pre\] we presented a notion of entropy for $\sigma$-invariant probability measures defined on symmetric $\beta$-shifts. Besides that, we showed in last section that it is satisfied a variational principle and the supremum of the variational equation of pressure is attained at the Gibbs state associated to the potential $A$, which assures that any Gibbs state it is an equilibrium state as well. In this section, we are going to present the proof of Theorem \[theorem3\] using the variational principle considered in Proposition \[variational-principle\]. This Theorem guarantees that the function assigning to each $t > 0$ the value $h(\mu_{tA})$ is continuous at infinity, which is known as zero temperature limit for the entropies of the equilibrium states. General results on maximizing probabilities can be found in [@MR1855838], [@G1] and [@BLL]. Note that any accumulation point of the family $(\mu_{tA})_{t > 0}$ in the weak\* topology is an $A$-maximizing probability measure. Indeed, for any $\mu \in \mathcal{M}_{\sigma}(X_{m, \beta})$ and each $t > 0$, it is satisfied $$\frac{1}{t}h(\mu_{tA}) + \int_{X_{m, \beta}}A d\mu_{tA} \geq \frac{1}{t}h(\mu) + \int_{X_{m, \beta}}A d\mu \,. \label{asymptotic-pressure}$$ Let $\mu_{\infty}$ be an accumulation point at $\infty$ of the family $(\mu_{tA})_{t > 0}$. Then, there exists an increasing sequence of positive real numbers $(t_n)_{n \in \mathbb{N}}$, such that, $\lim_{n \in \mathbb{N}} t_n = +\infty$ and $\lim_{n \in \mathbb{N}} \mu_{t_nA} = \mu_{\infty}$ in the weak\* topology. 
Then, taking the limit, when $n \to \infty$, in (\[asymptotic-pressure\]) and using the fact that $h(\mu_{t_nA}) \leq h(X_{m, \beta}) < \infty$, for all $n \in \mathbb{N}$, we get that $$\begin{aligned} \int_{X_{m, \beta}}A d\mu_{\infty} &= \lim_{n \in \mathbb{N}}\left(\frac{1}{t_n}h(\mu_{t_nA}) + \int_{X_{m, \beta}}A d\mu_{t_nA}\right) \nonumber \\ &\geq \lim_{n \in \mathbb{N}}\left(\frac{1}{t_n}h(\mu) + \int_{X_{m, \beta}}A d\mu\right) \nonumber \\ &= \int_{X_{m, \beta}}A d\mu \nonumber \,.\end{aligned}$$ The above implies that $m(A) = \int_{X_{m, \beta}}A d\mu_{\infty}$. On the other hand, since for each cylinder $[i]$, with $i \in \mathcal{A}_m$, it is true that $\partial[i] = \emptyset$, the map $\mu \mapsto h(\mu)$ is upper semicontinuous. Thus, by compactness of $X_{m, \beta}$, it is guaranteed that $\mathcal{M}_{\max}(A)$ is a compact set as well. Therefore, there exists $\widehat{\mu} \in \mathcal{M}_{\max}(A)$, such that, $h(\mu) \leq h(\widehat{\mu})$, for all $\mu \in \mathcal{M}_{\max}(A)$. Note that (\[asymptotic-pressure\]) implies that $$P(tA) = tm(A) + h(X_{m, \beta}) + o(t) \,.$$ That is, the topological pressure has an asymptote that depends on $m(A)$. This implies that $$h(\widehat{\mu}) \leq h(X_{m, \beta}) + o(t) \,.$$ On the other hand, by Proposition \[variational-principle\], we have $$h(X_{m, \beta}) + o(t) = P(tA) - tm(A) \leq h(\mu_{tA}) \,.$$ Therefore, $$h(\widehat{\mu}) \leq \limsup_{t \to \infty} (h(X_{m, \beta}) + o(t)) \leq \limsup_{t \to \infty} h(\mu_{t A}) \leq h(\mu_{\infty}) \leq h(\widehat{\mu}) \,.$$ Using again Proposition \[variational-principle\], we obtain that $$h(\mu_{tA}) \geq P(tA) - tm(A) \geq h(\widehat{\mu}) \,.$$ The foregoing implies that for any $n \in \mathbb{N}$ the following inequality is satisfied $$\inf_{t \geq t_n} h(\mu_{t A}) \geq h(\widehat{\mu}) \,.$$ Then, taking the limit when $n \to \infty$, we get $$\liminf_{t \to \infty} h(\mu_{tA}) \geq h(\widehat{\mu}) = \limsup_{t \to \infty} h(\mu_{t A}) \,.$$ Appendix - The proof of Ruelle’s Perron-Frobenius Theorem {#RPF-theorem-section} ============================================================ In order to prove Theorem \[theorem1\] we will first analyze some properties of the following collection of functions $$\Gamma := \{\psi \in \mathcal{C}(X_{m, \beta}) : 0 \leq \psi \leq 1, \log(\psi) \in \mathcal{H}_{\alpha}(X_{m, \beta})\} \,.$$ We will show first the existence of the eigenfunction. We assume that the parameters $m$ and $\beta$ are such that the action of the shift is transitive and the Ruelle operator is well defined. The proof is quite similar to the one in [@MR1085356]. We just outline some of the steps. Note that the above $\Gamma$ is convex, because $\psi(x) \leq \psi(y)e^{\mathrm{Hol}_A d(x, y)^{\alpha}}$ is satisfied for all $\psi \in \Gamma$. Moreover, the above inequality implies that $$\begin{aligned} |\psi(x) - \psi(y)| &\leq \|\psi\|_{\infty}\left(e^{\mathrm{Hol}_A d(x, y)^{\alpha}} - 1\right) \nonumber \\ &\leq \|\psi\|_{\infty} \mathrm{Hol}_A d(x, y)^{\alpha} e^{\mathrm{Hol}_A d(x, y)^{\alpha}} \nonumber \\ &\leq \|\psi\|_{\infty} \mathrm{Hol}_A e^{\mathrm{Hol}_A}d(x, y)^{\alpha} \nonumber \,.\end{aligned}$$ Therefore, $\Gamma$ is contained in $\mathcal{H}_{\alpha}(X_{m, \beta})$ and the same inequality implies that $\Gamma$ is an equicontinuous and uniformly bounded collection of functions, which implies that $\Gamma$ is uniformly compact by Arzela-Ascoli’s Theorem.
Now, for each $k \in \mathbb{N}$ we define the operator $L_k$ from $\Gamma$ into $\Gamma$ by the following expression $$L_k(\psi) = \frac{\mathcal{L}_A(\psi + 1/k)}{\|\mathcal{L}_A(\psi + 1/k)\|_{\infty}}\,.$$ Since all the constant functions taking values in $[0, 1]$ belongs to the set $\Gamma$, by linearity of the Ruelle operator, we get that $L_k(\psi) \in \Gamma$, for any $\psi \in \Gamma$. Besides that, $\|L_k(\psi)\|_{\infty} = 1$, for all $k \in \mathbb{N}$. Thus, using convexity and uniformly compactness of $\Gamma$, it is guaranteed the existence of a fixed point $\psi_k$ for $L_k$ by Schauder-Tychonoff’s Theorem. So, $$\mathcal{L}_A(\psi_k + 1/k) = \psi_k \|\mathcal{L}_A(\psi_k + 1/k)\|_{\infty} \,.$$ Using again that $\Gamma$ is uniformly compact, we obtain that the sequence $(\psi_k)_{k \in \mathbb{N}}$ has an accumulation point $\psi_A$ with the uniform norm. Then, by continuity of $\mathcal{L}_A$ we have $$\mathcal{L}_A(\psi_A) = \psi_A \|\mathcal{L}_A(\psi_A)\|_{\infty} \,.$$ Hereafter, the last term in the right side of the equation will be denoted by $\lambda_A$. Observe that $\lambda_k = \|\mathcal{L}_A(\psi_k + 1/k)\|_{\infty}$ satisfies $\lim_{k \in \mathbb{N}} \lambda_k = \lambda_A$. Moreover, $$\lambda_k \psi_k = \sum_{\sigma(y) = x}e^{A(y)}(\psi_k(y) + 1/k) \geq (\inf_{k \in \mathbb{N}}(\psi_k) + 1/k)e^{-\|A\|_{\infty}} \,.$$ By the above, we obtain that $\lambda_k \inf_{k \in \mathbb{N}}(\psi_k) \geq (\inf_{k \in \mathbb{N}}(\psi_k) + 1/k)e^{-\|A\|_{\infty}}$. Therefore, we can conclude that $\lambda_k > e^{-\|A\|_{\infty}}$, for all $k \in \mathbb{N}$, which implies that $\lambda_A > 0$. On other hand, if there exists some point $x \in X_{m, \beta}$, such that, $\psi_A(x) = 0$, then, it is true that $$0 = \lambda^n_A \psi_A(x) = \mathcal{L}^n_A(\psi_A)(x) = \sum_{\sigma^n(y) = x}e^{S_n A(y)} \psi_A(y) \,. \label{density}$$ In other words, $\psi_A(y) = 0$, for all $y \in \sigma^{-n}(\{x\})$. Since the quasi-greedy $\beta$-expansion of $1$ satisfies (\[irreducible-sequence\]), it follows that $X_{m, \beta}$ is topologically transitive, which implies that the set $\{y : y \in \sigma^{-n}(\{x\})\}$ is dense in $X_{m, \beta}$. Then, by continuity of $\psi_A$, we can conclude that $\psi_A \equiv 0$, which is a contradiction taking into account that $\lambda_A > 0$. The proofs that the eigenvalue is simple and the other properties are similar to the ones in [@MR1085356]. [28]{} D. Aguiar, L. Ciolleti and R. Ruviaro A Variational Principle for the Specific Entropy for Symbolic Systems with Uncountable Alphabets, Mathematische Nachrichten - V. 272, Issue 17-18, 2506-2515, 2018 R. Alcaraz. Topological and ergodic properties of symmetric sub-shifts. , 34(11):4459-4486, 2014. R. Alcaraz, S. Baker, and D. Kong. Entropy, topological transitivity, and dimensional properties of unique [$q$]{}-expansions. , 371(5):3209-3258, 2019. S. Baker and W. Steiner. On the regularity of the generalised golden ratio function. , 49(1):58-70, 2017. R. Bissacot and E. Garibaldi, Weak KAM methods and ergodic optimal problems for countable Markov shifts, Bull. Braz. Math. Soc. 41 (2010), no. 3, 321-338. R. Bissacot and R. dos Santos Freire, Jr. On the existence of maximizing measures for irreducible countable Markov shifts: a dynamical proof. Ergodic Theory Dynam. Systems 34 (2014), no. 4, 1103-1115. A. Baraviera, R. Leplaideur and A. O. Lopes, Ergodic Optimization, Zero Temperature Limits and the Max-Plus Algebra, mini-course in XXIX Coloquio Brasileiro de Matematica - IMPA (2013) S. Bundfuss, T. 
Krüger, and S. Troubetzkoy. Topological and symbolic dynamics for hyperbolic systems with holes. , 31(5):1305-1323, 2011. L. Cioletti and E. A. Silva. Spectral properties of the Ruelle operator on the Walters class over compact spaces. , 29 (8): 2253-2278, 2016. G. Contreras, A. O. Lopes, and Ph. Thieullen. Lyapunov minimizing measures for expanding maps of the circle. , 21(5): 1379-1409, 2001. M. de Vries and V. Komornik. Unique expansions of real numbers. , 221(2):390-427, 2009. P. Erdös, M. Horváth, and I. Joó.. On the uniqueness of the expansions [$1=\sum q^{-n_i}$]{}. , 58(3-4):333-342, 1991. P. Erdös, I. Joó, and V. Komornik. Characterization of the unique expansions [$1=\sum^\infty_{i=1}q^{-n_i}$]{} and related problems. , 118 (3) :377-390, 1990. R. Freire and V. Vargas. Equilibrium states and zero temperature limit on topologically transitive countable markov shifts. , Trans. Amer. Math. Soc. 370 (2018), no. 12, 8451-8465 E. Garibaldi, Ergodic Optimization in the expanding case, Springer Verlag (2017) P. Glendinning and N. Sidorov.. Unique representations of real numbers in non-integer bases. , 8(4):535-543, 2001. O. Jenkinson, R. D. Mauldin, and M. Urbański. Zero temperature limits of Gibbs-equilibrium states for countable alphabet subshifts of finite type. , 119(3-4):765-776, 2005. V. Komornik, D. Kong, and W. Li. Hausdorff dimension of univoque sets and devil’s staircase. , 305:165-196, 2017. R. Leplaideur, A dynamical proof for the convergence of Gibbs measures at temperature zero. Nonlinearity 18 (2005), no. 6, 2847-2880. D. Lind and B. Marcus. Cambridge University Press, Cambridge, 1995. A. O. Lopes, J. K. Mengue, J. Mohr, and R. R. Souza. Entropy and variational principle for one-dimensional lattice systems with a general a priori probability: positive and zero temperature. , 35(6):1925-1961, 2015. R. D. Mauldin and Mariusz Urbański. Gibbs states on the symbolic space over an infinite alphabet. , 125:93-130, 2001. I. D. Morris. Entropy for zero-temperature limits of Gibbs equilibrium states for countable-alphabet subshifts of finite type. , 126(2):315-324, 2007. W. Parry. On the [$\beta $]{}-expansions of real numbers. , 11:401-416, 1960. W. Parry and M. Pollicott. Zeta functions and the periodic orbit structure of hyperbolic dynamics. , vol. 187-188, 1990. A. Rényi. Representations for real numbers and their ergodic properties. , 8:477-493, 1957. D. Ruelle. Statistical mechanics of a one-dimensional lattice gas. , 9: 267-278, 1968. O. M. Sarig. Thermodynamic formalism for countable Markov shifts. , 19(6):1565-1593, 1999. K. Thomsen. On the structure of beta shifts. , pages 321-332. Amer. Math. Soc., Providence, RI, 2005. P. Walters. Equilibrium states for [$\beta $]{}-transformations and related transformations. , 159(1):65-88, 1978. [^1]: IME - UFRGS. Partially supported by CNPq [^2]: IME - UFRGS. Supported by PNPD-CAPES grant.
--- abstract: 'We study the unpolarised and polarised hadro-production of charmonium in non-relativistic QCD (NRQCD) at low transverse momentum, including sufficiently higher orders in the relative velocity, $v$, so as to study the ratio of $\chi_{c1}$ and $\chi_{c2}$ production rates.' author: - | Sourendu Gupta and Prakash Mathews\ Theory Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400 005, INDIA title: Polarised and Unpolarised Charmonium Production at Higher Orders in $v$ --- \[intro\]Introduction ===================== Recent progress in the understanding of cross sections for production of heavy quarkonium resonances has come through the NRQCD reformulation of this problem [@caswell]. A Factorisation approach based on NRQCD developed by Bodwin, Braaten and Lepage [@bbl] enables the factorisation of the production cross sections for a quarkonium $H$ (with momentum $P$) into a perturbative part and a non perturbative part— $$d\sigma\;=\;{1\over\Phi}{d^3P\over(2\pi)^3 2E_{\scriptscriptstyle P}} \sum_{ij} C_{ij}\left\langle{\cal K}_i\Pi(H){\cal K}^\dagger_j\right\rangle, \label{intro.nrqcd}$$ where $\Phi$ is a flux factor and $\Pi(H)$ denotes the hadronic projection operator. The fermion bilinear operators ${\cal K}_i$ are built out of heavy quark fields sandwiching colour and spin matrices and the covariant derivative ${\bf D}$. The labels $i$, $j$ include the colour index, spin $S$, orbital angular momentum $L$ (coupling the $N$ covariant derivatives), the total angular momentum $J$ and helicity $J_z$. The coefficient function $C_{ij}$ is computable in perturbative QCD and has an expansion in the strong coupling $\alpha_{\scriptscriptstyle S} (m)$ (where $m$ is the mass of heavy quark), whereas the matrix elements are non perturbative. However, in NRQCD, these matrix elements scale as powers of $v$. Hence the resulting cross section is an expansion in powers of $\alpha_ {\scriptscriptstyle S} (m)$ and $v$. Often, higher orders in $v$ involves previously neglected colour-octet states of the heavy quark pairs. For charmonium states, a numerical coincidence, $v^2\sim\alpha_ {\scriptscriptstyle S}(m^2)$, makes the higher order terms in $v$ important and the double expansion more complicated. The above non perturbative matrix elements can be reduced to the diagonal form, ${\cal O}^H_\alpha({}^{2S+1} L_J^N)$ (where $\alpha$ denotes the colour singlet or octet state) and off-diagonal form, ${\cal P}^H_\alpha ({}^{2S+1}L_J^N,{}^{2S+1}L_J^{N'})$ [@bbl]. These matrix elements scale as $v^d$ where $d=3+N+N'+2 (E_d+2M_d)$, $E_d$ and $M_d$ are the number of colour electric and magnetic transitions. This formalism has been successfully applied to large transverse momentum processes [@jpsi]. Inclusive production cross sections for charmonium at low energies, dominated by low transverse momenta, also seem to have a good phenomenological description in terms of this approach [@ours; @br; @our2]. The spin asymmetries have also been computed both for the low transverse momentum processes [@pol] and for those with high transverse momenta [@polo]. It was argued in [@our2] that a better understanding of such cross sections, and asymmetries, can be obtained if the higher order terms in $v$ and $\alpha_{\scriptscriptstyle S}$ are used. This follows from the fact that the total inclusive $J/\psi$ cross sections arise either from direct $J/\psi$ production (which starts at order $\alpha_{\scriptscriptstyle S}v^7$) or through radiative decays of $\chi_J$ states. 
$\chi_0$ and $\chi_2$ are first produced at order $\alpha_{\scriptscriptstyle S}v^5$, whereas $\chi_1$, which has the largest branching fraction into $J/\psi$, is produced only at order $\alpha_{\scriptscriptstyle S}v^9$. A further phenomenological problem is to explain the $\chi_1/\chi_2$ ratio observed in hadro-production [@beneke; @cacci]. A better understanding of these cross sections requires the NRQCD expansion up to order $\alpha_{\scriptscriptstyle S}v^9$. The unpolarised and polarised cross sections are defined as $$\sigma \;=\; \sum_{hh'} \sigma(h,h'), \quad \Delta\sigma \;=\; \sum_{hh'} hh'\sigma(h,h'), \label{intro.asym}$$ where $h$ and $h'$ are the helicities of the beam and target respectively, and $\sigma(h,h')$ denotes the cross section for fixed initial helicities. The difference between the polarised and unpolarised cases lies in the coefficient functions, denoted by $\tilde C_{ij}$ for the polarised case; the set of non perturbative matrix elements is the same. We construct the coefficient functions using the “threshold expansion” method [@bchen] and enumerate the non perturbative matrix elements using the spherical tensor method described in [@upol]. Cross Sections\[pol\] ===================== To lowest order in $\alpha_S$ the contributing parton level cross sections are $\bar q q \rightarrow \bar Q Q$ and $g g \rightarrow \bar Q Q$. The hadron level cross section is obtained by multiplying with the appropriate parton luminosities— $${\cal L}_{ab}\;=\;a(x_1) ~b(x_2), \label{me.lumin}$$ where $a$, $b$ run over the quark $q_f$, antiquark $\bar q_f$ or gluon $g$ densities, depending on the subprocess cross sections. For polarised cross sections, $\Delta \sigma$, the corresponding polarised luminosities $\Delta {\cal L}_{ab}$ are obtained by replacing $a$ by the polarised parton densities $\Delta a$. Data indicate that ${\cal L}_{\bar qq}\ll{\cal L}_{gg}$ for $\sqrt S\ge20$ GeV and $|\Delta{\cal L}_{\bar qq}|\ll {\cal L}_{gg}$. Consequently, the $\bar qq$ channel may be neglected for double polarised asymmetries to good precision. The squared matrix element for the $gg$ process is technically more complicated. The difference between the unpolarised [@upol] and polarised cases [@pol1] lies solely in the flipped sign of the $J=2$ part, which arises in the polarisation sum of initial state gluons. The subprocess cross section for the production of a charmonium $H$ can be written as $$\begin{array}{rl} \hat\sigma^{H}_{gg}(\hat s) &= {\displaystyle{\pi^3\alpha_s^2\over4m^2}}\delta(\hat s-4 m^2) ~~\sum_d \nonumber\\& \left[{\displaystyle{1\over18}}\Theta^{H}_S(d) +{\displaystyle{5\over48}}\Theta^{H}_D(d) +{\displaystyle{3\over16}}\Theta^{H}_F(d) \right], \end{array}\label{cs.jpsi}$$ where $d$ runs over the various matrix elements that contribute to the charmonium $H$ at order $v^d$. The subscripts $S$, $D$ and $F$ denote the colour singlet, colour-octet symmetric and colour-octet antisymmetric parts respectively. For the polarised case $\hat\sigma^H$ is replaced by $\Delta\hat\sigma^H$ and the combination of non perturbative matrix elements $\Theta^H$ by $\widetilde\Theta^H$. For the various charmonium states $J/\psi$ and $\chi_J$ the combinations of non perturbative matrix elements are listed below. The changes for the polarised case are mentioned where appropriate. 
Direct $J/\psi$ Production $$\begin{array}{rl} \Theta^{J/\psi}_D(7)&= {\displaystyle{1\over2m^2}}{\cal O}^{J/\psi}_8({}^1S_0^0) \\& +{\displaystyle{1\over2m^4}}\left[ 3{\cal O}^{J/\psi}_8({}^3P_0^1) +{\displaystyle{4\over5}}{\cal O}^{J/\psi}_8({}^3P_2^1) \right], \\ \Theta^{J/\psi}_D(9)&= {\displaystyle{1\over\sqrt3m^4}}{\cal P}^{J/\psi}_8({}^1S_0^0,{}^1S_0^2) +{\displaystyle{1\over\sqrt{15}m^6}} \\& \biggl[ {\displaystyle{35\over4}} {\cal P}^{J/\psi}_8({}^3P_0^1,{}^3P_0^3) +2 {\cal P}^{J/\psi}_8({}^3P_2^1,{}^3P_2^3) \biggr], \\ \Theta^{J/\psi}_F(9)&= \displaystyle{1\over2 m^6} \left[{1\over3} {\cal O}^{J/\psi}_8 ({}^3P^2_1)- {2\over5} {\cal O}^{J/\psi}_8 ({}^3P^2_2) \right].\\ \end{array}\label{cs.jpsime}$$ $\chi_0$ Production $$\begin{array}{rl} \Theta^{\chi_0}_S(5)&= {\displaystyle{3\over2m^4}}{\cal O}^{\chi_0}_1({}^3P_0^1),\\ \Theta^{\chi_0}_S(7)&= {\displaystyle{7\sqrt5\over4\sqrt3m^6}} {\cal P}^{\chi_0}_1({}^3P_0^1,{}^3P_0^3),\\ \Theta^{\chi_0}_S(9)&= {\displaystyle{1\over8m^8}}\biggl[ {\displaystyle{245\over9}}{\cal O}^{\chi_0}_1({}^3P_0^3) \\& +{\displaystyle{149\sqrt7\over10\sqrt3}} {\cal P}^{\chi_0}_1({}^3P_0^1,{}^3P_0^5) \biggr] +{\displaystyle{2\over5m^4}}{\cal O}^{\chi_0}_1({}^3P_2^1),\\ \Theta^{\chi_0}_D(9)&= {\displaystyle{1\over2m^2}}{\cal O}^{\chi_0}_8({}^1S_0^0) \\& +{\displaystyle{1\over2m^4}}\left[ 3{\cal O}^{\chi_0}_8({}^3P_0^1) +{\displaystyle{4\over5}}{\cal O}^{\chi_0}_8({}^3P_2^1) \right],\\ \Theta^{\chi_0}_F(9)&= \displaystyle{1\over 6 m^4}{\cal O}^{\chi_0}_8 ({}^1P^1_1) \\& +\displaystyle{1\over18m^6} \biggl[ {\cal O}^{\chi_0}_8 ({}^3S^2_1) +5{\cal O}^{\chi_0}_8 ({}^3D^2_1) \biggr]. \end{array}\label{cs.chi0me}$$ $\chi_1$ Production $$\begin{array}{rl} \Theta^{\chi_1}_S(9)&= {\displaystyle{1\over2m^4}}\left[ 3{\cal O}^{\chi_1}_1({}^3P_0^1) +{\displaystyle{4\over5}}{\cal O}^{\chi_1}_1({}^3P_2^1) \right], \\ \Theta^{\chi_1}_D(9)&= {\displaystyle{1\over2m^2}}{\cal O}^{\chi_1}_8({}^1S_0^0) \\& +{\displaystyle{1\over2m^4}}\left[ 3{\cal O}^{\chi_1}_8({}^3P_0^1) +{\displaystyle{4\over5}}{\cal O}^{\chi_1}_8({}^3P_2^1) \right], \\ \Theta^{\chi_1}_F(9)&= \displaystyle{1\over 6 m^4} {\cal O}^{\chi_1}_8 ({}^1P^1_1) +\displaystyle{1\over3m^6} \biggl[ \displaystyle{1\over6}{\cal O}^{\chi_1}_8 ({}^3S^2_1) \\& +\displaystyle{5\over6}{\cal O}^{\chi_1}_8 ({}^3D^2_1) -\displaystyle{1\over5}{\cal O}_8 ({}^3D^2_2) \biggr]. \end{array}\label{cs.chi1me}$$ In eqs.(\[cs.jpsime\],\[cs.chi0me\],\[cs.chi1me\]) the coefficient of $J=2$ matrix elements changes sign for polarised cases. $\chi_1$ is produced first at order $v^9$. The large branching ratio for the decay $\chi_1\to J/\psi$ makes this a phenomenologically important term, and is the main motivation for this work. 
$\chi_2$ Production $$\begin{array}{rl} \Theta^{\chi_2}_S(5)&= {\displaystyle{2\over5m^4}}{\cal O}^{\chi_2}_1({}^3P_2^1),\\ \Theta^{\chi_2}_S(7)&= {\displaystyle{2\over\sqrt{15}m^6}} {\cal P}^{\chi_2}_1({}^3P_2^1,{}^3P_2^3),\\ \Theta^{\chi_2}_S(9)&= {\displaystyle{3\over2m^4}}{\cal O}^{\chi_2}_1({}^3P_0^1) +{\displaystyle{1\over75m^8}}\biggl[ {\displaystyle{262\over9}}{\cal O}^{\chi_2}_1({}^3P_2^3) \\& +{\displaystyle{141\sqrt3\over2\sqrt7}} {\cal P}^{\chi_2}_1({}^3P_2^1,{}^3P_2^5) \biggr],\\ \Theta^{\chi_2}_D(9)&= {\displaystyle{1\over2m^2}}{\cal O}^{\chi_2}_8({}^1S_0^0) \\& +{\displaystyle{1\over2m^4}}\left[ 3{\cal O}^{\chi_2}_8({}^3P_0^1) +{\displaystyle{4\over5}}{\cal O}^{\chi_2}_8({}^3P_2^1) \right], \\ \Theta^{\chi_2}_F(9)&= \displaystyle{1\over 6 m^4} {\cal O}^{\chi_2}_8 ({}^1P^1_1) \\& + \displaystyle{1\over3m^6} \biggl[ \displaystyle{1\over6}{\cal O}^{\chi_2}_8 ({}^3S^2_1) +\displaystyle{5\over6}{\cal O}^{\chi_2}_8 ({}^3D^2_1) \\& -\displaystyle{1\over5}{\cal O}^{\chi_2}_8 ({}^3D^2_2) +\displaystyle{2\over7}{\cal O}^{\chi_2}_8 ({}^3D^2_3) \biggr]. \end{array}\label{cs.chi2me}$$ In the combination ${\Theta}^{\chi_2}_S (9)$, the coefficient of ${\cal O}^{\chi_2}_1 ({}^3 P^3_2)$ is different for the polarised case, both in magnitude and in sign. This is because the matrix element also arises from sources other than the polarisation sum that contribute to the $J=2$ part. To obtain the polarised expression, replace the coefficient $262$ by $-238$. The other $J=2$ terms change sign as expected. The $J=3$ term in the $F$ colour amplitude is also derived from the $J=2$ part of the polarisation sum, and hence flips sign compared to the unpolarised case. Discussion\[disc\] ================== In spite of the large number of unknown non perturbative matrix elements in the final results, it is possible to make several quantitative and qualitative comments about the polarisation asymmetries by making use of heavy quark spin symmetry and the scaling arguments developed in [@pol1]. This scaling argument allows us to make rough estimates. Neglecting possible logarithms of $m$ and $v$, a dimensional argument can be used to write $$\langle{\cal K}_i\Pi(H){\cal K}^\dagger_j\rangle\;=\; R_H Y_{ij} \Lambda^{D_{ij}} v^d, \label{disc.dimen}$$ where $D_{ij}$ is the mass dimension of the operator, $d$ is the velocity scaling exponent in NRQCD, $\Lambda$ is the cutoff scale below which NRQCD is defined ($\Lambda\sim m$), and $Y_{ij}$ and $R_H$ are dimensionless numbers. $R_H$ contains the irreducible minimum non perturbative information. Assuming the constancy of $R_H$ [@pol1] and using heavy-quark symmetry, we find that the non perturbative matrix elements contribute approximately $12 R_\chi m^2 v^9$ to the $\chi_1$ cross section and about $(1+v^2+12 v^4) R_\chi m^2 v^5$ to the $\chi_2$ cross section. Then we expect $${\sigma(\chi_1)\over\sigma(\chi_2)}\;\approx\; {12v^4\over1+v^2+12v^4}\;=\;0.45, \label{disc.rat}$$ independent of $\sqrt S$; the numerical value corresponds to $v^2\simeq0.3$ for charmonium, for which $12v^4=1.08$ and $1+v^2+12v^4=2.38$. This estimate is in reasonable agreement with the measured values in proton-nucleon collisions (see fig. 1)— $0.34\pm0.16$ at $\sqrt S=38.8$ GeV [@e771] and $0.24\pm0.28$ at $\sqrt S=19.4$ GeV [@e673]. The measurements are also compatible with a lack of $\sqrt S$ dependence. An estimate of ${\cal O} (\alpha^3_{\scriptscriptstyle S})$ effects in [@br] was used to show that the $\chi_1/\chi_2$ ratio could be about $0.3$. In NRQCD this ratio cannot depend on the beam hadron. It turns out that the estimate in eq. 
(\[disc.rat\]) is not very far from the recently measured value in pion-nucleon collisions— $0.57\pm0.19$ at $\sqrt S=31.1$ GeV [@e706]. However, the experimental situation certainly needs clarification. A straightforward application of the NRQCD scaling laws would lead us to the conclusion that the asymmetries for $pp\to\chi_{0,2}$ are given by $$A_{pp}^{\chi_0}\;\approx\; -A_{pp}^{\chi_2}\;\approx\; {\Delta{\cal L}_{gg}\over{\cal L}_{gg}}+{\cal O}(v^4), \label{disc.ppchi02}$$ where we have neglected the contribution of the $\bar qq$ channel. The asymmetry for $\chi_1$ production is $$A_{pp}^{\chi_1}= {{\widetilde\Theta}_S^{\chi_1}(9) + {\widetilde\Theta}_D^{\chi_1}(9) + {\widetilde\Theta}_F^{\chi_1}(9) \over \Theta_S^{\chi_1}(9) + \Theta_D^{\chi_1}(9) + \Theta_F^{\chi_1}(9)} \left[{\Delta{\cal L}_{gg}\over{\cal L}_{gg}}\right]. \label{disc.ppchip}$$ The ratio of matrix elements can be estimated using heavy quark spin symmetry and the scaling relations in eq. (\[disc.dimen\]). ${\widetilde\Theta}_D^{\chi_J}(9)$ vanishes in this approximation and the terms ${\widetilde\Theta}_{S,F}^{\chi_J}(9)$ come with opposite signs. The numerator is positive but small and we expect— $$A_{pp}^{\chi_1}\;\approx\; 0.2\,{\Delta{\cal L}_{gg}\over{\cal L}_{gg}}. \label{disc.ppchi1}$$ The $\bar qq$ channel remains negligible even at $\sqrt S=500$ GeV. The $J/\psi$ asymmetry seems to be enormously complicated because of the radiative decays of the $\chi$ states. However, a major simplification occurs because of the near vanishing asymmetry in direct $J/\psi$ production. Thus the asymmetry comes entirely from the 20–40% of the cross section due to $\chi$ decays. Taking into account the ratios of the production cross sections of $\chi$ and the branching fractions for their decays into $J/\psi$, we find that the $\chi_1$ and $\chi_2$ states contribute equally to $J/\psi$. Hence the $J/\psi$ polarisation asymmetry is expected to be approximately $$A_{pp}^{J/\psi}\;\approx\; -(0.15\pm0.05) {\Delta{\cal L}_{gg}\over{\cal L}_{gg}}. \label{disc.psi}$$ We summarise the predictions made on the basis of the NRQCD scaling in eq. (\[disc.dimen\]) and the assumption of $R_H$ depends only on the hadron $H$— $$\begin{array}{rl} -A_{pp}^{J/\psi} \approx A_{pp}^{\chi_1} < A_{pp}^{\chi_0} = -A_{pp}^{\chi_2} = {\Delta{\cal L}_{gg}\over{\cal L}_{gg}}. \end{array}\label{disc.nrqcd}$$ In conclusion, low energy double polarised asymmetries are a good test-bed for understanding the origin of all observed systematics in fixed target hadro-production of charmonium. The high order computations presented here provides a set of processes which can be used to test aspects of NRQCD factorisation and scaling. Acknowledgements {#acknowledgements .unnumbered} ================ One of us PM, would like to thank Prof. S Narison for the invitation to attend the QCD Euroconference 97 and for the opportunity to present this work. W. E. Caswell and G. P. Lepage, [*Phys. Lett.*]{}, B 167 (1986) 437. G. T. Bodwin, E. Braaten and G. P. Lepage, [*Phys. Rev.*]{}, D 51 (1995) 1125. E. Braaten, M. A. Doncheski, S. Fleming and M. Mangano, [*Phys. Lett.*]{}, B 333 (1994) 548; D. P. Roy and K. Sridhar, [*Phys. Lett.*]{}, B 339 (1994) 141; M. Cacciari and M. Greco, [*Phys. Rev. Lett.*]{}, 73 (1994) 1586. S. Gupta and K. Sridhar, [*Phys. Rev.*]{}, D 54 (1996) 5545. M. Beneke and I. Rothstein, [*Phys. Rev.*]{}, D 54 (1996) 2005. S. Gupta and K. Sridhar, [*Phys. Rev.*]{}, D 55 (1997) 2650. S. Gupta and P. Mathews, [*Phys. Rev.*]{}, D. 55 (1997) 7144. O. 
Teryaev and A. Tkabladze, JINR-E2-96-431, hep-ph/9612301. M. Beneke, CERN-TH/97-55, hep-ph/9703429. M. Cacciari, in the Proceedings of the XXXII Rencontres de Moriond, QCD and High Energy Hadronic Interactions, DESY 97-091 (hep-ph/9706374). E. Braaten and Y. Chen, [*Phys. Rev.*]{}, D 54 (1996) 3216. S. Gupta and P. Mathews, [*Phys. Rev.*]{}, D 56 (1997) 3019. S. Gupta and P. Mathews, TIFR-TH-97-32, hep-ph/9706541. K. Hagan-Ingram (E771 Collaboration), unpublished; data quoted in the talk by M. Beneke [@br]. D. A. Bauer [*et al.*]{}, [*Phys. Rev. Lett.*]{}, 54 (1985) 753. V. Koreshev [*et al.*]{}, [*Phys. Rev. Lett.*]{}, 77 (1996) 4294.
--- abstract: 'In a recent article, Behrens and Vingron (JCB 17, 12, 2010) compute waiting times for $k$-mers to appear during DNA evolution under the assumption that the considered $k$-mers do not occur in the initial DNA sequence, an issue arising when studying the evolution of regulatory DNA sequences with regard to transcription factor (TF) binding site emergence. The mathematical analysis underlying their computation assumes that occurrences of words under interest do not overlap. We relax here this assumption by use of an automata approach. In an alphabet of size $4$ like the DNA alphabet, most words have no or a low autocorrelation; therefore, globally, our results confirm those of Behrens and Vingron. The outcome is quite different when considering highly autocorrelated $k$-mers; in this case, the autocorrelation pushes down the probability of occurrence of these $k$-mers at generation 1 and, consequently, increases the waiting time for apparition of these $k$-mers up to $40\%$. An analysis of existing TF binding sites unveils a significant proportion of $k$-mers exhibiting autocorrelation. Thus, our computations based on automata greatly improve the accuracy of predicting waiting times for the emergence of TF binding sites to appear during DNA evolution. We do the computation in the Bernoulli or M0 model; computations in the M1 model, a Markov model of order 1, are more costly in terms of time and memory but should produce similar results. While Behrens and Vingron considered specifically promoters of length $1000$, we extend the results to promoters of any size; we exhibit the property that the probability that a $k$-mer occurs at generation time $1$ while being absent at time $0$ behaves linearly with respect to the length of the promoter, which induces a hyperbolic behaviour of the waiting time of any $k$-mer with respect to the length of the promoter.' nocite: '[@GuiOdl81a; @GuiOdl81b; @GoJa83; @Lot05; @FlajoletSedgewick2009; @BehVin2010]' --- [\ ]{} Sarah Behrens,\ Westfälische Wilhelms-Universität, Institute for Evolution and Biodiversity,\ Hüfferstrasse 1 , 48149 Münster, Germany,\ phone: +49-(0)251-83-21096, fax: +49-(0)251-83-24668,\ [[email protected]]{}\ Cyril Nicaud,\ LIGM, CNRS-UMR 8049, Paris-Est, France\ phone: 33(0)16095-7550, fax +33(0)16095-7557,\ [[email protected]]{}\ Pierre Nicodème[^1],\ LIX, CNRS-UMR 7161, École polytechnique,\ 91128 Palaiseau and AMIB Team, INRIA-Saclay, France\ phone: +33(0)16933-4112, fax: +33(0)16933-4049,\ [[email protected]]{}.\ [**Running head:**]{} Waiting times and Evolution\ [**Key words:**]{} Transcription factors, evolution, words correlation, automata Introduction {#sec:intro} ============ The expression of genes is subject to strong regulation. The key concept of transcriptional gene regulation is the binding of proteins, so called transcription factors (TFs), to TF binding sites. These TF binding sites are typically short stretches of DNA, many of which are only around 5–8bp long ([@wray]). Usually, these TF binding sites are located in a region around 1000bp upstream of the gene they regulate, the so called promoter. Thus, the occurrence of particular $k$-mers in these promoter regions has a high impact on modulating transcription. There have been several experimental studies employing ChIP-chip or ChIP-seq technology showing that promoters are rapidly evolving regions that change over short evolutionary time scales ([@odom], [@schmidt], [@kunarso]). 
In a recent review, [@dowell] summarizes all these experimental findings and concludes that most TF binding events are species-specific and that gene regulation is a highly dynamic evolutionary process. Many of these changes in TF binding, if not necessarily all, can be explained by gains and losses of TF binding sites. Several theoretical studies have tried to give a probabilistic explanation for the speed of changes in transcriptional gene regulation (e.g. [@stone], [@durrett]). [@BehVin2010] infer how long one has to wait until a given TF binding site emerges at random in a promoter sequence. Using two different probabilistic models (a Bernoulli model denoted by M0 and a neighbor dependent model M1) and estimating evolutionary substitution rates based on multiple species promoter alignments for the three species [*Homo sapiens*]{}, [*Pan troglodytes*]{} and [*Macaca mulatta*]{}, they compute the expected waiting time for every $k$-mer, $k$ ranging from 5 to 10, until it appears in a human promoter. They conclude that the waiting time for a TF binding site is highly determined by its composition and that indeed TF binding sites can appear rapidly, i.e. in a time span below the speciation time of human and chimp. However, in their approach, [@BehVin2010] rely on the assumption that if a $k$-mer of interest appears more than once in a promoter sequence, it does not overlap with itself. This particularly affects the waiting times for highly autocorrelated words like e.g. [AAAAA]{} or [CTCTCTCTCT]{}. Using automata, we can relax this assumption and, thus, more accurately compute the expected waiting times until appearance for every $k$-mer, $k$ ranging from 5 to 10, in a promoter of length 1000bp. This automaton approach can be applied both for models M0 and M1. However, for the ease of exposition, in this article we will focus on the Bernoulli model M0. This article is structured as follows. In Section \[sec:models\], we describe model M0, state results from [@BehVin2010] that we rely on and recall how [@BehVin2010] have estimated model M0 parameters based on human, chimp and macaque promoter alignments. In Section \[sec:auto\], we present our new approach of computing waiting times using automata theory; we provide in this section a web-pointer to the program used to perform these computations. Section \[sec:bioresults\] compares the results of computing waiting times for $k$-mers to appear in a promoter of length 1 kb according to [@BehVin2010] and to our new automaton approach. For both computations, we employ the same model parameters estimations that have been already used in [@BehVin2010]; we also explain in this section the biological impact of our findings and show that autocorrelation matters in the context of TF binding site emergence. Section \[sec:linear\] exhibits the first order linear behaviour of the probability of evolution to a $k$-mer from generation time $0$ to time $1$ for specific examples; the observed phenomena is however general, as proved in [@Nicodeme2011]. We provide in this section a web-pointer to a database containing the waiting times of all $k$-mers for $k$ from $5$ to $10$ and for promoter lengths $n=1000$ and $n=2000$. Section \[sec:conclusion\] will conclude the article with some summarizing remarks. Model M0 and expected waiting times {#sec:models} =================================== Throughout the article, we assume that promoter sequences evolve according to model M0 which has been described by [@BehVin2010]. #### Model M0. 
Given an alphabet $\mathcal{A}=\{\text{A,C,G,T}\}$, let $S(0)=(S_1(0),\dots, S_n(0))$ denote the initial promoter sequence of length $n$ taking values in this alphabet. We assume that the letters in $S(0)$ are independent and identically distributed with $\nu(x):=\Pr(S_1(0)=x)$. Let the time evolution $(S(t))_{t\geq 0}$ of the promoter sequence be governed by the $4\times4$ infinitesimal rate matrix $\bQ=(r_{\alpha,\beta})_{\alpha,\beta\in\mathcal{A}}$. According to the general reverse complement symmetric substitution model, we assume that the nucleotides evolve independently from each other and that $r_{A, T}=r_{T, A}$, $r_{C , G}=r_{G, C}$, $r_{A , C}=r_{T, G}$, $r_{C , A}=r_{G, T}$, $r_{A , G}=r_{T, C}$ and $r_{G , A}=r_{C, T}$ (see also [@arndt3]). Thus, there are 6 free parameters. The matrix $\bP(t)=(p_{\alpha,\beta}(t))_{\alpha,\beta\in\mathcal{A}}$ containing the transitions probabilities of $\alpha$ evolving into $\beta$ in finite time $t\geq 0$, ($\alpha,\beta\in\mathcal{A}$), can be computed by $\bP(t)=e^{t\bQ}$; see [@KarTay75], p. 150-152. #### The expected waiting time. Given a binding site $$b=(b_1,\dots,b_k)\quad\text{where } b_1,\dots,b_k\in\mathcal{A},$$ the aim is to determine the expected waiting time until $b$ emerges in a promoter sequence of length $n$ provided that it does not appear in the initial promoter sequence $S(0)$. More precisely, let $$T_n=\inf\{t\in\mathbb{N} :\exists i\in\{1,\dots,n-k+1\}\text{ such that }(S_i(t),\dots,S_{i+k-1}(t))=(b_1,\dots,b_k)\}.$$ Then, given that $\Pr(b\text{ occurs in }S(0))=0$, $T_n$ has approximately a geometric distribution with parameter $$\begin{aligned} \mathfrak{p}_n&=\Pr(b\text{ occurs in generation 1}\ |\ b\text{ does not occur in generation 0})\\ \nonumber &= \Pr(b\in S(1) \ |\ b\not\in S(0))\end{aligned}$$ as shown by [@BehVin2010]. In particular, one has $$\label{eq:expected} \Ex(T_n)\approx\frac{1}{\mathfrak{p}_n}.$$ #### Estimating the parameters of model M0. For our analyses, we used the same parameter estimations as [@BehVin2010]. The estimations for $\nu(\alpha)$, $\alpha\in\mathcal{A}$, have been obtained by determining the relative frequencies of A, C, G and T in human promoter regions downloaded from UCSC. The substitution rates $r_{\alpha,\beta}$ have been estimated using multiple alignments from UCSC of chimp and macaque DNA sequences to human promoters and by employing the Maximum likelihood based tool developed by [@arndt2]. Afterwards, the transition probabilities $p_{\alpha,\beta}(t)$ for e.g. $t=1$ generation can be easily computed by the matrix exponential $\bP(t)=e^{t\bQ}$. Assuming a speciation time between human and chimp of 4 Million of years and a generation time of $y=20$ years, [@BehVin2010] obtain estimations for $p_{\alpha,\beta}(1)=p_{\alpha,\beta}(1\text{ generation})$ for all $\alpha,\beta\in\mathcal{A}$. Their results are summarized in Table \[estimations\]. 
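Before turning to the estimates themselves, a small illustration of this last step may be useful: the Python sketch below assembles a reverse complement symmetric rate matrix $\bQ$ from its six free parameters and computes $\bP(t)=e^{t\bQ}$. The rate values are ours and purely illustrative (read off from Table \[estimations\]B under the first-order approximation $p_{\alpha,\beta}(1)\approx r_{\alpha,\beta}$); they are not the published maximum-likelihood estimates.

```python
import numpy as np
from scipy.linalg import expm

# Six free parameters of the reverse complement symmetric model (illustrative values,
# chosen so that P(1) reproduces Table [estimations]B to first order):
# a = r_AT = r_TA, b = r_CG = r_GC, c = r_AC = r_TG,
# d = r_CA = r_GT, e = r_AG = r_TC, f = r_GA = r_CT.
a, b, c, d, e, f = 3.40e-9, 7.15e-9, 4.55e-9, 6.15e-9, 1.575e-8, 2.175e-8

# Rate matrix Q in the order A, C, G, T; each row sums to zero.
Q = np.array([
    [-(a + c + e), c,            e,            a           ],
    [d,            -(b + d + f), b,            f           ],
    [f,            b,            -(b + d + f), d           ],
    [a,            e,            c,            -(a + c + e)],
])

def transition_matrix(t):
    """P(t) = exp(t Q): probabilities p_{alpha,beta}(t) that alpha evolves into beta in t generations."""
    return expm(t * Q)

P1 = transition_matrix(1.0)   # one generation
```

At rates of order $10^{-8}$ per generation, $\bP(1)$ is numerically indistinguishable from $I+\bQ$, which is why Table \[estimations\]B looks like the identity matrix plus the substitution rates.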
A\) Estimations for $\nu(a)$, $a\in\mathcal{A}$:\ $\nu(A)$ $\nu(C)$ $\nu(G)$ $\nu(T)$ ---------- ---------- ---------- ---------- 0.23889 0.26242 0.25865 0.24004 : [**Parameter estimations.**]{} Numbers taken from [@BehVin2010], Supplementary Material S2.\[estimations\] \ B) Estimations for $p_{\alpha,\beta}(1)$, $\alpha,\beta\in\mathcal{A}$:\ A C G T --- ---------------- ---------------- ---------------- ---------------- A 9.99999996e-01 4.54999995e-09 1.57499996e-08 3.40000002e-09 C 6.14999993e-09 9.99999996e-01 7.14999985e-09 2.17499994e-08 G 2.17499994e-08 7.14999985e-09 9.99999996e-01 6.14999993e-09 T 3.40000002e-09 1.57499996e-08 4.54999995e-09 9.99999998e-01 : [**Parameter estimations.**]{} Numbers taken from [@BehVin2010], Supplementary Material S2.\[estimations\] Automaton approach {#sec:auto} ================== The aim of this section is to provide a new procedure to compute the expected waiting time $ \Ex(T_n)$ until a TF binding site $b$ of length $k$ emerges in a promoter sequence of length $n$ by using Equation , i.e. $\Ex(T_n)\approx\frac{1}{\mathfrak{p}_n}$. [@BehVin2010] approximated $\mathfrak{p}_n=\Pr(b\text{ occurs in generation 1}|b\text{ does not occur in generation 0})$ by applying the inclusion-exclusion principle. However, in order to make the computations feasible, they had to assume that $b$ cannot appear self-overlapping which especially adulterates the actual waiting times for autocorrelated words. Automata theory provides a natural and compact framework to handle autocorrelations easily; in this section we present how to use basic automata algorithms in order to compute the probability $\mathfrak{p}_n$ without resorting to the assumption that $b$ occurs non-overlapping. #### Definitions. In this section, only definitions that will be used in the sequel are recalled; more information about automata and regular languages can be found in [@HU01]. Given a finite alphabet $\mathcal{A}$, a [*deterministic and complete automaton*]{} on $\mathcal{A}$ is a tuple $(Q,\delta,q_0,F)$, where $Q$ is a finite set of [*states*]{}, $\delta$ is a mapping from $Q\times\mathcal{A}$ to $Q$, $q_0\in Q$ is the initial state and $F\subseteq Q$ is the [*set of final states*]{}. Let $\varepsilon$ denote the empty word. The mapping $\delta$ can be extended inductively to $Q\times\mathcal{A}^*$ by setting $\delta(q,\varepsilon)=q$ for all $q\in Q$ and, for all $q\in Q$, $u\in\mathcal{A}^*$ and $\alpha\in\mathcal{A}$, $\delta(q,u\alpha) = \delta(\delta(q,u),\alpha)$. A word $u\in\mathcal{A}^*$ is [*recognized*]{} by the automaton when $\delta(q_0,u)\in F$. The [*language recognized*]{} by the automaton is the set of words that are recognized. Since all automata considered in the sequel are deterministic and complete, we will call them “automata” for short. Automata are well represented as labelled directed graphs, where the states are the vertices, and where there is an edge between $p$ and $q$ labelled by a letter $\alpha\in\mathcal{A}$ if and only if $\delta(p,\alpha)=q$; such an edge is called a [*transition*]{}. The initial state has an incoming arrow, and final states are denoted by a double circle. See Figure \[pagauto\] for an example of such a graphical representation. A word $u$ is recognized when starting at the initial state and reading $u$ from left to right, letter by letter, and following the corresponding transition, one ends in a final state. #### Rewording the problem. Consider the alphabet $\mathcal{B} = \mathcal{A}\times\mathcal{A}$. 
Letters of $\mathcal{B}$ are pairs $(\alpha,\beta)$ of letters of $\mathcal{A}$, which are represented vertically by $\binom{\alpha}{\beta}$. A word $u$ of length $n$ on $\mathcal{B}$ is also seen as a pair of words of length $n$ over $\mathcal{A}$, and represented vertically: if $u=(\alpha_1,\beta_1)(\alpha_2,\beta_2)\ldots(\alpha_n,\beta_n)$, we shall write $u = \binom{\alpha_1\ldots \alpha_n}{\beta_1\ldots \beta_n}$. For any word $u=\binom{v}{w}$ of $\mathcal{B}^*$, the projections $\pi_0$ and $\pi_1$ are defined by $\pi_0(u) = v$ and $\pi_1(u)=w$. For the problems considered in this article, we have $\mathcal{A}=\{\texttt{A,C,G,T}\}$, and a word $u=\binom{v}{w}$ of length $n$ over $\mathcal{B}$ represents the sequence that was initially equal to $v$ and that has evolved into $w$ at time $1$; that is, $S(0) = \pi_0(u)$ and $S(1)=\pi_1(u)$. The main problem can be reworded using rational expressions: for a given $b=b_1\cdots b_k$, the fact that $b$ appear in $S(1)$ but not in $S(0)$ is exactly the condition $\pi_1(u) \in \mathcal{A}^*b\mathcal{A}^*$ and $\pi_0(u)\notin\mathcal{A}^*b\mathcal{A}^*$. We denote by $\mathcal{L}_b$ the set of such words and remark that $\mathcal{L}_b$ is a rational language. #### Construction of the automaton. The smallest automaton $\mathcal{M}_b$ that recognizes the language $\mathcal{A}^*b\mathcal{A}^*$ can be built using the classical Knuth-Morris-Pratt construction (see [@CroRyt94], chapter 7). This requires for any $k$-mer $O(k)$ time and space, and the produced automaton $\mathcal{M}_b=(\{0,\ldots,k\},\delta_b,0,\{k\})$ has exactly $k+1$ states. The language $\mathcal{A}^*\setminus \mathcal{A}^*b\mathcal{A}^*$ is the complement of the previous one, and is therefore recognized by the automaton $\overline{\mathcal{M}}_b = (\{0,\ldots,k\},\delta_b,0,\{0,\ldots,k-1\})$, which has the same underlying graph as $\mathcal{M}_b$ and whose set of final states is the complement of $\mathcal{M}_b$’s one. For the examples given in this section, we use a smaller alphabet $\mathcal{A}=\{A,C\}$ and the $k$-mer is always $b=ACC$, (hence $k=3$). The two automata are depicted in Figure \[pagauto\]. ![**The automata $\mathcal{M}_{ACC}$ ($\geq 1$ occ.; on the left) and $\overline{\mathcal{M}}_{ACC}$ ($0$ occ.; on the right). \[pagauto\]**](BehNicNic_Fig1.eps){width="\textwidth"} To fully describe the language $\mathcal{L}_b$, we use the classical product automaton construction, tuned to fit our needs. Define the automaton $\mathcal{N}_b=(Q,\delta,q_0,F)$ as follows: - The set of states is $Q=\{0,\ldots,k\}\times\{0,\ldots,k\}$. The states of $\mathcal{N}_b$ are therefore pairs $(p,q)$, where intuitively $p$ lies in $\overline{\mathcal{M}}_b$ and $q$ lies in $\mathcal{M}_b$. - The initial state is $q_0=(0,0)$. - The transition mapping $\delta$ is defined for every $(p,q)\in Q$ and every $(\alpha,\beta)\in\mathcal{B}$ by $\delta((p,q),(\alpha,\beta))=(\delta_b(p,\alpha),\delta_b(q,\beta))$. The idea is to read $\pi_0(u)$ in $\overline{\mathcal{M}_b}$ on the first coordinate, and $\pi_1(u)$ in $\mathcal{M}_b$ on the second coordinate. - A state $(p,q)$ is final if and only if both $p$ and $q$ are final in their respective automata, that is, $F = \{0,\ldots,k-1\}\times\{k\}$. The proof of the following lemma follows directly from the construction of $\mathcal{N}_b$: \[lm Nb\] The automaton $\mathcal{N}_b$ recognizes the language $\mathcal{L}_b$. 
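For concreteness, here is a minimal Python sketch of these two constructions. The border computation is brute force rather than the linear-time Knuth-Morris-Pratt failure function, and the merged `sink` state anticipates the simplification described just below; all names are ours, not those of the authors' `C` code.

```python
from itertools import product

ALPHABET = "ACGT"

def kmp_automaton(b):
    """Deterministic complete automaton M_b recognizing A* b A*.
    State q < k means 'the longest suffix of the text read so far that is a prefix
    of b has length q'; state k (b has been seen) is absorbing."""
    k = len(b)
    delta = [{} for _ in range(k + 1)]
    for q in range(k + 1):
        for a in ALPHABET:
            if q == k:
                delta[q][a] = k                      # absorbing final state
            elif a == b[q]:
                delta[q][a] = q + 1                  # extend the current partial match
            else:                                    # longest suffix of b[:q]+a that is a prefix of b
                s = b[:q] + a
                delta[q][a] = next((j for j in range(q, 0, -1) if s.endswith(b[:j])), 0)
    return delta

def product_automaton(b):
    """Reduced product automaton N'_b: pairs (p, q) with p < k tracking S(0) in the
    complement automaton and q <= k tracking S(1) in M_b, plus one absorbing 'sink'
    state standing for every pair whose first coordinate has reached k.
    Final states: (p, k) with p < k, i.e. b absent from S(0) but present in S(1)."""
    k = len(b)
    delta = kmp_automaton(b)
    trans = {}                                       # trans[state, (alpha, beta)] = next state
    for p, q in product(range(k), range(k + 1)):
        for alpha, beta in product(ALPHABET, repeat=2):
            np_, nq = delta[p][alpha], delta[q][beta]
            trans[(p, q), (alpha, beta)] = "sink" if np_ == k else (np_, nq)
    for ab in product(ALPHABET, repeat=2):
        trans["sink", ab] = "sink"
    states = ["sink"] + [(p, q) for p in range(k) for q in range(k + 1)]   # k^2 + k + 1 states
    final = {(p, k) for p in range(k)}
    return states, trans, final
```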
Looking closer at the automaton, one can make the following observation: while reading a word $u$ of $\mathcal{B}^*$ in $\mathcal{N}_b$, if one reaches a state of the form $(k,q)$ at some point, for some $q\in\{0,\ldots,k\}$, then all the remaining states on the path labelled by $u$ are also of the form $(k,q')$, for some $q'\in\{0,\ldots,k\}$. This is because $\delta_b(k,\alpha)=k$ for every $\alpha\in\mathcal{A}$. Since such a state is never final, this means that whenever the first coordinate is $k$ at some point, the word is not recognized, because $\pi_0(u)$ contains $b$. We can therefore simplify the automaton $\mathcal{N}_b$ by merging all the states of the form $(k,q)$ into a single state, which we name [*sink*]{}. Let $\mathcal{N}'_b=(Q',\delta',q_0',F')$ denote this new automaton, which has $k^2+k+1$ states. Lemma \[lm Nb prime\] below states that all the information we need is contained in $\mathcal{N}'_b$. See an example of this automaton in Figure \[pagprod\]. ![[**The automaton $\mathcal{N}'_{ACC}$.**]{} For the automaton to be readable, we use the notations $A =\binom{A}{A}$, $\overline{A}=\binom{A}{C}$, $C=\binom{C}{C}$ and $\overline{C}=\binom{C}{A}$. When the label of a transition is not given, it is by default set to the letter at the bottom of its ending state.\[pagprod\]](BehNicNic_Fig2.eps){width="\textwidth"} \[lm Nb prime\] Let $u$ be a word in $\mathcal{B}^*$, and let $q_u$ be the state reached after reading $u$ in $\mathcal{N}_b'$ from its initial state. The words $u$ can be classified as follows: - if $q_u\in F'$ then $\pi_0(u)$ does not contain $b$ but $\pi_1(u)$ does (this is a success in our settings); - if $q_u$ is the sink state then $\pi_0(u)$ contains $b$ (this is contradictory in our settings); - if $q_u\notin F'$ and $q_u$ is not the sink state, then neither $\pi_0(u)$ nor $\pi_1(u)$ contains $b$ (this is a failure in our settings). #### From automata to probabilities. The automaton $\mathcal{N}'_b$ is readily transformed into a Markov chain, by changing the label of any transition $q\xrightarrow{a}q'$, where $a=\binom{\alpha}{\beta}\in \mathcal{B}$, into the probability $\nu(\alpha)\times p_{\alpha,\beta}(1)$. If there are several transitions from $q$ to $q'$, the edge is labelled by the sum of the associated probabilities. Let $\mathcal{C}_b$ denote this Markov chain. 
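A sketch of this step, reusing `product_automaton` from the previous sketch; the numbers $\nu(\alpha)$ and $p_{\alpha,\beta}(1)$ are copied from Table \[estimations\].

```python
import numpy as np

# Model M0 estimates (Table [estimations]): letter frequencies nu and
# one-generation substitution probabilities p_{alpha,beta}(1).
NU = {"A": 0.23889, "C": 0.26242, "G": 0.25865, "T": 0.24004}
P1 = {
    "A": {"A": 9.99999996e-01, "C": 4.54999995e-09, "G": 1.57499996e-08, "T": 3.40000002e-09},
    "C": {"A": 6.14999993e-09, "C": 9.99999996e-01, "G": 7.14999985e-09, "T": 2.17499994e-08},
    "G": {"A": 2.17499994e-08, "C": 7.14999985e-09, "G": 9.99999996e-01, "T": 6.14999993e-09},
    "T": {"A": 3.40000002e-09, "C": 1.57499996e-08, "G": 4.54999995e-09, "T": 9.99999998e-01},
}

def markov_chain(b):
    """Transition matrix of the chain C_b: each transition of N'_b reading the pair
    (alpha, beta) contributes nu(alpha) * p_{alpha,beta}(1); parallel edges are summed."""
    states, trans, final = product_automaton(b)      # from the previous sketch
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (state, (alpha, beta)), nxt in trans.items():
        P[index[state], index[nxt]] += NU[alpha] * P1[alpha][beta]
    return P, index, final                           # rows of P sum to 1 (up to rounding)
```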
The random variable $Q_n$ associated with the state reached after reading a random word of size $n$ under the M0 model is formally defined by: $$\forall q\in Q',\ \Pr\left(Q_n=q\right) = \sum_{\substack{u=\binom{v}{w}\in\mathcal{B}^n\\ \delta'(q'_0,u)=q}} \nu(v)\times p_{v\rightarrow w}(1).$$ Then, if $\mathbb{P}_b$ is the transition matrix of $\mathcal{C}_b$ and if $V_{q}$ is the probability vector with $1$ on position $q\in Q'$ and $0$ elsewhere, the random state $Q_n$ reached from the initial state after $n$ steps verifies $$\label{matrix formula} \forall q\in Q',\ \Pr\left(Q_n=q\right) = V_{q'_0}^t\times \mathbb{P}_b^n\times V_{q}.$$ From this and by Lemma \[lm Nb prime\] we can compute all the needed probabilities: $$\begin{aligned} \label{eq:S1S0} \Pr\Big(S(1)\in \mathcal{A}^*b\mathcal{A}^*\mid S(0)\notin \mathcal{A}^*b\mathcal{A}^*\Big) & = \frac{\Pr(S(1)\in \mathcal{A}^*b\mathcal{A}^*\text{ and } S(0)\notin \mathcal{A}^*b\mathcal{A}^*)}{\Pr(S(0)\notin \mathcal{A}^*b\mathcal{A}^*)} \\ & = \frac{\Pr(Q_n\in F')}{\Pr(Q_n\neq\text{sink})}\\ & = \frac{\sum_{q\in F'}V_{q'_0}^t\times \mathbb{P}_b^n\times V_{q}}{1-V_{q'_0}^t\times \mathbb{P}_b^n\times V_{\text{sink}}}\end{aligned}$$ We therefore get our main result. \[th main\] Let $b\in\mathcal{A}^k$ and $\mathcal{N}'_b=(Q',\delta',q'_0,F')$ be its automaton, with associated matrix $\mathbb{P}_b$. The probability $\mathfrak{p}_n$ that a sequence of length $n$ contains $b$ at time $1$ given that it does not contain $b$ at time $0$ is exactly $$\mathfrak{p}_n= \Pr\Big(S(1)\in \mathcal{A}^*b\mathcal{A}^*\mid S(0)\notin \mathcal{A}^*b\mathcal{A}^*\Big) = \frac{V_{q'_0}^t\times \mathbb{P}_b^n\times \left(\sum_{q\in F'} V_{q}\right)}{1-V_{q'_0}^t\times \mathbb{P}_b^n\times V_{\text{sink}}}.$$ Applying Theorem \[th main\] and Equation , we obtain that the expected waiting time $\Ex(T_n)\approx\frac{1}{\mathfrak{p}_n}$ until a binding site $b$ of length $k$ appears in a promoter of length $n$ can be approximated by $$\label{eq:automata} \Ex(T_n)\approx\frac{1}{\mathfrak{p}_n}=\frac{1}{\Pr\Big(S(1)\in \mathcal{A}^*b\mathcal{A}^*\mid S(0)\notin \mathcal{A}^*b\mathcal{A}^*\Big)}=\frac{1-V_{q'_0}^t\times \mathbb{P}_b^n\times V_{\text{sink}}}{V_{q'_0}^t\times \mathbb{P}_b^n\times \left(\sum_{q\in F'} V_{q}\right)}.$$ #### Complexity. The automaton $\mathcal{N}'_b$ and the associated Markov chain $\mathcal{C}_b$ can be built in time and space $O(|\mathcal{A}|^2k^2)$. Once done, the whole calculation reduces to the computation of the row vector $V_{q'_0}^t\times \mathbb{P}_b^n$, which can be done iteratively using the simple relation $$V_{q'_0}^t\times \mathbb{P}_b^{i+1} = \underbrace{\left(V_{q'_0}^t\times \mathbb{P}_b^{i}\right)}_{\text{row vector}}\times \mathbb{P}_b.$$ Hence this consists of $n$ products of a vector by a matrix. Moreover, this matrix is a square matrix of dimension $k^2+k+1$, which is sparse since it has at most $(k^2+k+1)|\mathcal{A}|^2$ non-zero values. Therefore, the probability of Theorem \[th main\] can be computed in time $O(n\times k^2\times |\mathcal{A}|^2)$, using $O(|\mathcal{A}|^2k^2)$ space. #### Web access to the code. URL provides the `C` code used in this section. Biological results {#sec:bioresults} ================== Applying Equation  to obtain the automaton results and using Theorem 1 from [@BehVin2010], we computed the expected waiting time $\Ex(T_{1000})$ of all $k$-mers in the M0 model for $k$ from $5$ to $10$ to appear in a promoter sequence of length 1000 bp. 
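As an illustration of this computation, here is a dense-matrix Python version that combines the previous sketches; the authors' `C` code exploits the sparsity of $\mathbb{P}_b$ noted above, which this sketch does not.

```python
def waiting_time(b, n):
    """Expected waiting time E(T_n) ~ 1/p_n for the k-mer b in a promoter of length n
    under model M0, computed by n vector-matrix products as in Theorem [th main]."""
    P, index, final = markov_chain(b)                 # from the previous sketch
    v = np.zeros(len(index))
    v[index[(0, 0)]] = 1.0                            # initial state q'_0 = (0, 0)
    for _ in range(n):                                # v <- v P_b, repeated n times
        v = v @ P
    pr_success = sum(v[index[q]] for q in final)      # Pr(Q_n in F')
    pr_b_absent_from_S0 = 1.0 - v[index["sink"]]      # Pr(Q_n != sink)
    p_n = pr_success / pr_b_absent_from_S0
    return 1.0 / p_n

# Example (the value can be checked against the BNN column of the tables below):
# waiting_time("CCCCC", 1000)
```

For $k=10$ the matrix has dimension $k^2+k+1=111$, so the $1000$ vector-matrix products are immediate even without exploiting sparsity.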
The parameters of model M0 have been estimated as described in Section \[sec:models\] and are depicted in Table \[estimations\]. Figure \[fig:scatter\] provides an overall comparison of the waiting time computed by automata with respect to the previous computations of [@BehVin2010] for $k=5$ and $k=10$. ![\[fig:scatter\] [**Overall comparisons of waiting times of [@BehVin2010] (BV) versus the automata method (BNN) for $5$- and $10$-mers.**]{}](BehNicNic_Fig3.eps){width="\textwidth"} As can be observed in this scatterplot, the computed waiting times based on the automaton approach globally confirm the results of [@BehVin2010]. However, there are some outliers exhibiting longer waiting times than predicted by [@BehVin2010]. The four most extreme outliers that deviate from the bisecting line correspond to `AAAAA`, `TTTTT`, `CCCCC`, `GGGGG` and to `AAAAAAAAAA`, `CCCCCCCCCC`, `GGGGGGGGGG`, `TTTTTTTTTT` respectively. Other outliers are $k$-mers like e.g. `CGCGC`, `TCTCT` and `CGCGCGCGCG`, `TCTCTCTCTC`. Tables \[tab:corrank5\], \[tab:corrank7\] and \[tab:corrank10\] show all $5$-, $7$- and $10$-mers for which $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}>1.05$ where $\Ex_{\operatorname{BV}}(T_{1000})$ denotes the expected waiting time according to [@BehVin2010] and $\Ex_{\operatorname{BNN}}(T_{1000})$ according to our automaton approach, i.e. $k$-mers with significantly longer waiting times than predicted by [@BehVin2010]. ----------- ------------------------------------------- ------ ------------------------------------------ ------ -------------------------------------------------------------------------------- $\Ex_{\operatorname{BNN}}(T_{1000})/10^6$ Rank $\Ex_{\operatorname{BV}}(T_{1000})/10^6$ Rank $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}$ [CCCCC]{} 9.105 1021 6.304 1 1.44 [GGGGG]{} 9.570 1022 6.666 142 1.44 [TTTTT]{} 10.401 1023 7.457 993 1.39 [AAAAA]{} 10.656 1024 7.654 1024 1.39 [CGCGC]{} 7.047 699 6.446 11 1.09 [TCCCC]{} 7.076 737 6.477 17 1.09 [CCCCT]{} 7.076 738 6.477 21 1.09 [GCGCG]{} 7.127 787 6.518 31 1.09 [CTCTC]{} 7.263 883 6.679 148 1.09 [CACAC]{} 7.337 945 6.750 217 1.09 [GGGGA]{} 7.428 971 6.814 318 1.09 [AGGGG]{} 7.428 972 6.814 322 1.09 [TCTCT]{} 7.508 978 6.910 477 1.09 [GTGTG]{} 7.511 981 6.914 486 1.09 [GAGAG]{} 7.587 997 6.987 573 1.09 [ACACA]{} 7.625 1002 7.019 605 1.09 [TGTGT]{} 7.677 1010 7.073 735 1.09 [AGAGA]{} 7.796 1016 7.185 833 1.09 [TTTTC]{} 7.710 1013 7.169 823 1.08 [CTTTT]{} 7.710 1014 7.169 827 1.08 [TATAT]{} 8.135 1019 7.535 1003 1.08 [ATATA]{} 8.178 1020 7.575 1014 1.08 [GAAAA]{} 7.959 1017 7.407 988 1.07 [AAAAG]{} 7.959 1018 7.407 992 1.07 [TTCCC]{} 7.090 751 6.679 144 1.06 [CCCTT]{} 7.090 752 6.679 152 1.06 [TTTCC]{} 7.312 924 6.910 473 1.06 [CCTTT]{} 7.312 925 6.910 481 1.06 [GGGAA]{} 7.411 966 6.987 574 1.06 [AAGGG]{} 7.411 967 6.987 582 1.06 [GGAAA]{} 7.599 1000 7.185 828 1.06 [AAAGG]{} 7.599 1001 7.185 837 1.06 ----------- ------------------------------------------- ------ ------------------------------------------ ------ -------------------------------------------------------------------------------- : \[tab:corrank5\] [**Expected waiting times (generations) for 5-mers in model M0 with $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}>1.05$.**]{} $\Ex_{\operatorname{BV}}(T_{1000})$ denotes the expected waiting time according to [@BehVin2010] (BV) and $\Ex_{\operatorname{BNN}}(T_{1000})$ according to our automaton approach (BNN). 
Ranks refer to $5$-mers sorted by their waiting time of appearance according to the two different procedures BV and BNN; rank 1 is assigned to the fastest evolving 5-mer, rank 1024 (=$4^5$) to the slowest emerging 5-mer. ------------- ------------------------------------------- ------- ------------------------------------------ ------- -------------------------------------------------------------------------------- $\Ex_{\operatorname{BNN}}(T_{1000})/10^6$ Rank $\Ex_{\operatorname{BV}}(T_{1000})/10^6$ Rank $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}$ [CCCCCCC]{} 93.457 16257 65.518 1 1.43 [GGGGGGG]{} 101.108 16380 71.312 576 1.42 [TTTTTTT]{} 127.536 16383 92.632 16257 1.38 [AAAAAAA]{} 131.923 16384 95.990 16384 1.37 [CGCGCGC]{} 74.347 2328 67.939 50 1.09 [GCGCGCG]{} 75.250 3170 68.766 86 1.09 [CTCTCTC]{} 81.865 10928 75.280 3235 1.09 [CACACAC]{} 83.101 12466 76.448 4042 1.09 [GTGTGTG]{} 85.914 14531 79.102 7786 1.09 [TCTCTCT]{} 85.978 14535 79.117 7829 1.09 [GAGAGAG]{} 87.211 15312 80.329 8656 1.09 [ACACACA]{} 87.721 15337 80.754 9267 1.09 [TGTGTGT]{} 89.145 15620 82.131 11616 1.09 [TATATAT]{} 101.469 16381 94.057 16304 1.08 [ATATATA]{} 101.988 16382 94.536 16338 1.08 [AGAGAGA]{} 90.953 16191 83.829 12794 1.08 [TCCCCCC]{} 73.461 1495 68.495 65 1.07 [CCCCCCT]{} 73.461 1496 68.495 71 1.07 [GGGGGGA]{} 79.292 7867 74.080 2158 1.07 [AGGGGGG]{} 79.292 7868 74.080 2153 1.07 [TTTTTTC]{} 92.782 16249 87.773 15367 1.06 [CTTTTTT]{} 92.782 16250 87.773 15366 1.06 [GAAAAAA]{} 96.810 16376 91.645 16255 1.06 [AAAAAAG]{} 96.810 16377 91.645 16254 1.06 ------------- ------------------------------------------- ------- ------------------------------------------ ------- -------------------------------------------------------------------------------- : \[tab:corrank7\] [**Expected waiting times (generations) for 7-mers in model M0 with $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}>1.05$.**]{} $\Ex_{\operatorname{BV}}(T_{1000})$ denotes the expected waiting time according to [@BehVin2010] (BV) and $\Ex_{\operatorname{BNN}}(T_{1000})$ according to our automaton approach (BNN). Ranks refer to $7$-mers sorted by their waiting time of appearance according to the two different procedures BV and BNN; rank 1 is assigned to the fastest evolving 7-mer, rank 16384 (=$4^7$) to the slowest emerging 7-mer. 
---------------- ------------------------------------------- --------- ------------------------------------------ --------- -------------------------------------------------------------------------------- $\Ex_{\operatorname{BNN}}(T_{1000})/10^6$ Rank $\Ex_{\operatorname{BV}}(T_{1000})/10^6$ Rank $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}$ [CCCCCCCCCC]{} 3577.003 511668 2545.561 1 1.41 [GGGGGGGGGG]{} 4042.505 937454 2893.573 8844 1.40 [TTTTTTTTTT]{} 6387.187 1048575 4702.438 1047553 1.36 [AAAAAAAAAA]{} 6703.254 1048576 4943.605 1048576 1.36 [GCGCGCGCGC]{} 2953.939 16095 2713.901 443 1.09 [CGCGCGCGCG]{} 2953.939 16096 2713.901 523 1.09 [TCTCTCTCTC]{} 3706.263 658915 3426.738 337146 1.08 [CTCTCTCTCT]{} 3706.263 658916 3426.738 337202 1.08 [CACACACACA]{} 3799.148 773143 3513.991 421031 1.08 [ACACACACAC]{} 3799.148 773144 3513.991 421142 1.08 [TGTGTGTGTG]{} 3951.253 876168 3657.531 625393 1.08 [GTGTGTGTGT]{} 3951.253 876169 3657.531 625471 1.08 [GAGAGAGAGA]{} 4050.273 950059 3750.629 702887 1.08 [AGAGAGAGAG]{} 4050.273 950060 3750.629 703066 1.08 [TATATATATA]{} 5176.970 1048573 4821.512 1048005 1.07 [ATATATATAT]{} 5176.970 1048574 4821.512 1048120 1.07 ---------------- ------------------------------------------- --------- ------------------------------------------ --------- -------------------------------------------------------------------------------- : \[tab:corrank10\] [**Expected waiting times (generations) for 10-mers in model M0 with $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}>1.05$.**]{} $\Ex_{\operatorname{BV}}(T_{1000})$ denotes the expected waiting time according to [@BehVin2010] (BV) and $\Ex_{\operatorname{BNN}}(T_{1000})$ according to our automaton approach (BNN). Ranks refer to $10$-mers sorted by their waiting time of appearance according to the two different procedures BV and BNN; rank 1 is assigned to the fastest evolving 10-mer, rank 1048576 (=$4^{10}$) to the slowest emerging 10-mer. We use in the following the million of generations (in short Mgen) as unit of time, where a generation is 20 years. The discrepancy between the two procedures can attain up to around 40%, e.g. `CCCCC` has a discrepancy of 44% with $\Ex_{\operatorname{BNN}}(T_{1000})=9.105\text{~Mgen}$ and $\Ex_{\operatorname{BV}}(T_{1000})=6.304\text{~Mgen}$, `CCCCCCC` a discrepancy of 43% with $\Ex_{\operatorname{BNN}}(T_{1000})=93.457\text{~Mgen}$ and $\Ex_{\operatorname{BV}}(T_{1000})=65.518\text{~Mgen}$, and `CCCCCCCCCC` has a discrepancy of 41% with $\Ex_{\operatorname{BNN}}(T_{1000})$ $=3577.003\text{~Mgen}$ and $\Ex_{\operatorname{BV}}(T_{1000})=2545.561\text{~Mgen}$. Strikingly, most of the $k$-mers with significant discrepancy feature a high autocorrelation, i.e. they can appear overlapping in so called clumps. For example, the 5-mer `CCCCC` could appear twice in the clump `CCCCCC` (at positions 1 and 2), `CGCGC` could appear three times in the clump `CGCGCGCGC` (at positions 1, 3 and 5). In order to distinguish between different levels of autocorrelation of $k$-mers, let $$\mathcal{P}(b):=\{p\in\{1,\dots,k-1\}:b_i=b_{i+p}\text{ for all }i=1,\dots,k-p\}$$ denote the set of periods of a $k$-mer $b=(b_1,\dots,b_k)$. A $k$-mer $b$ is called non-periodic or non-autocorrelated if and only if $\mathcal{P}(b)=\emptyset$. Furthermore, for a periodic $k$-mer $b$ let $p_0(b)$ denote its minimal period. For example, $p_0(\texttt{CCCCC})=1$, $p_0(\texttt{CGCGC})=2$, $p_0(\texttt{CGACG})=3$ and $p_0(\texttt{CGATC})=4$. 
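A few lines of Python make this definition concrete; `minimal_period` returns `None` for a word with no autocorrelation, and the assertions reproduce the examples just given.

```python
def periods(b):
    """Set of periods P(b) = {p in 1..k-1 : b_i = b_{i+p} for all i = 1..k-p}."""
    k = len(b)
    return {p for p in range(1, k) if all(b[i] == b[i + p] for i in range(k - p))}

def minimal_period(b):
    """Minimal period p_0(b), or None if b is non-periodic (non-autocorrelated)."""
    p = periods(b)
    return min(p) if p else None

assert minimal_period("CCCCC") == 1
assert minimal_period("CGCGC") == 2
assert minimal_period("CGACG") == 3
assert minimal_period("CGATC") == 4
assert minimal_period("TCCCC") is None   # a 5-mer from the tables above with no autocorrelation
```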
We then call a word $p$-periodic if and only if its minimal period is $p$. As can be observed in Tables \[tab:corrank5\], \[tab:corrank7\] and \[tab:corrank10\], half of the 5-mers, two-thirds of the 7-mers and all of the 10-mers with $\frac{\Ex_{\operatorname{BNN}}(T_{1000})}{\Ex_{\operatorname{BV}}(T_{1000})}>1.05$ are either 1- or 2-periodic, i.e. show a high degree of autocorrelation. [@BehVin2010] already investigated the speed of TF binding site emergence and its biological implications for the evolution of transcriptional regulation in detail and we do not want to elaborate on this again. However, in line with [@BehVin2010], we want to emphasize that the speed of TF binding site emergence is primarily influenced by its nucleotide composition. The goal in the following will be to investigate the impact of autocorrelation regarding TF binding sites. More precisely, we want to answer the question: Do existing TF binding sites show significant autocorrelation or can this aspect be neglected when studying the speed of TF binding site emergence? To investigate this, starting from the JASPAR CORE database for vertebrates Version 4 ([@jaspar]), we extracted all the human TF binding sites of length $k$, $5\leq k\leq 10$, ending up with a set of 37 position count matrices (PCMs) for the 37 different TFs in analogy to [@BehVin2010]. In order to make these PCMS accessible for our framework based on $k$-mers, we converted a PCM into a set of $k$-mers by setting a threshold of 0.95 of the maximal PCM score and extracted all $k$-mers with a score above this threshold. For example, the PCM $$\begin{array}{c} \text{A}\\ \text{C}\\ \text{G}\\ \text{T}\\ \end{array} \left( \begin{array}{cccccccccc} 0&0&0&4&2&0&1&0&6&3\\ 32&30&35&27&5&28&31&24&25&26\\ 1&1&0&0&15&1&0&3&0&3\\ 2&4&0&4&13&6&3&8&4&3 \end{array} \right)$$ of the TF SP1 is then translated into the following set of 10-mers: $\{\texttt{CCCCACCCCC}$, $\texttt{CCCCCCCCCC}$, $\texttt{CCCCGCCCCC}$, $\texttt{CCCCTCCCCC}\}$. Applying this procedure, in total we obtain 372 different JASPAR $k$-mers, $5\leq k\leq10$, for the 37 different human TFs. We then screened all JASPAR $k$-mers for 1-periodicity, 2-periodicity,..., $(k-1)$-periodicity. To evaluate the degree of autocorrelation of a given JASPAR TF given by its set of $k$-mers, we then computed the proportion of 1-periodic, 2-periodic,..., $(k-1)$-periodic and of non-periodic $k$-mers in this set. The results are depicted in Figure \[fig:jaspar\]. ![[**Barplot of the degree of autocorrelation of JASPAR TF binding sites.**]{} For every of the 37 JASPAR TFs each given by a set of $k$-mers, the proportion of $p$-periodic and non-periodic $k$-mers in this set was calculated, $p$ ranging from 1 to $k-1$. Additionally, the same proportions were computed for all possible $k$-mers, $k$ ranging from 5 to 10 (“Background”).\[fig:jaspar\]](BehNicNic_Fig4.eps){height="\textwidth"} As can be seen, some TFs like SP1, FOXL1, YY1, GATA3, GATA2 and ETS1 exhibit a high autocorrelation while 14 of the 37 TFs show no autocorrelation at all (USF1, SPI1,..., AP1). In order to test whether autocorrelated $k$-mers are enriched among JASPAR TF binding sites, as a background we screened all possible $k$-mers, i.e. all $b=(b_1,\dots,b_k)\in\mathcal{A}^k$, $\mathcal{A}=\{\text{A,C,G,T}\}$, $k$ ranging from 5 to 10, for autocorrelation in the same way as JASPAR $k$-mers. The resulting proportions of periodic and non-periodic words of this background are also depicted in Figure \[fig:jaspar\]. 
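A sketch of the PCM-to-$k$-mer conversion described above. The text does not spell out how a $k$-mer is scored against a PCM; the sketch assumes the score of a $k$-mer is the sum of the corresponding counts, an assumption on our part which does reproduce the four SP1 $10$-mers listed above.

```python
from itertools import product

BASES = "ACGT"

def pcm_to_kmers(pcm, threshold=0.95):
    """Return the set of k-mers whose (assumed additive) PCM score is at least
    `threshold` times the maximal score; brute force over all 4^k k-mers,
    slow but simple for k <= 10."""
    k = len(pcm["A"])
    best = sum(max(pcm[x][j] for x in BASES) for j in range(k))
    return {"".join(w)
            for w in product(BASES, repeat=k)
            if sum(pcm[x][j] for j, x in enumerate(w)) >= threshold * best}

# Position count matrix of the TF SP1 quoted in the text (rows A, C, G, T).
SP1 = {
    "A": [0, 0, 0, 4, 2, 0, 1, 0, 6, 3],
    "C": [32, 30, 35, 27, 5, 28, 31, 24, 25, 26],
    "G": [1, 1, 0, 0, 15, 1, 0, 3, 0, 3],
    "T": [2, 4, 0, 4, 13, 6, 3, 8, 4, 3],
}
# pcm_to_kmers(SP1) == {"CCCCACCCCC", "CCCCCCCCCC", "CCCCGCCCCC", "CCCCTCCCCC"}
```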
In total, among the JASPAR $k$-mers, there are 168 autocorrelated words (i.e. words that are $p$-periodic for one $p\in\{1,\dots,k-1\}$) and 204 non-autocorrelated words. The background set contains 435,828 autocorrelated and 961,932 non-autocorrelated $k$-mers. Performing Fisher’s Exact Test for Count Data with the alternative “greater”, we obtain a $p$-value of 1.119e-08. We can thus conclude that autocorrelated words are significantly enriched among JASPAR $k$-mers. Consequently, existing TF binding sites indeed feature a significant proportion of autocorrelation. Linear behaviour of $\mathfrak{p}_n$ {#sec:linear} ==================================== ![\[fig:flexn\] [**Plots of the probability $\mathfrak{p}_n$ (left) and of the expected waiting time $\Ex(T_n)$ (right), as functions of the promoter length $n$.**]{} (Top) $b=\texttt{AAAAA}$ (blue) and $b'=\texttt{CGCGC}$ (magenta); (bottom) $b=\texttt{CCCCCCCCCC}$ (blue) and $b'=\texttt{ATATATATAT}$ (magenta). In the linear plots of the probability, the anchor values for $n=1000$ and $n=2000$ (computed by automata) are represented by boxes; the straight lines go through the corresponding points and the circles are test values also computed by automata. The fit is perfect, as expected from singularity analysis. The waiting times in the right panels are in units of $10^7$ (top) and $10^{10}$ (bottom) generations. (Four panels, files BehNicNic_Fig5.eps to BehNicNic_Fig8.eps.)](BehNicNic_Fig5.eps){width="\textwidth"} 
In Section \[sec:auto\] we used automata to perform a parallel computation on two sequences, $S(0)$ and $S(1)$. It is possible to do a relevant mathematical analysis with the random sequence $S(0)$ only; the corresponding computations have however a much higher complexity than the automaton approach. This analysis is based on counting in a random sequence $S(0)$ the number of putative-hit positions, where, given a $k$-mer $b$, a putative-hit position is any position of $S(0)$ that can lead by mutation to an occurrence of $b$ in $S(1)$, assuming that a single mutation has occurred. For any $k$-mer $b$, [@Nicodeme2011] provides a combinatorial construction using clumps (see [@BaClFaNi08]) that (i) considers [*all*]{} the sequences that avoid the $k$-mer $b$, and (ii) counts [*all*]{} the [*putative-hit*]{} positions in these sequences. In the following, let $H_n$ denote the number of putative-hit positions in a sequence $S(0)$ randomly chosen within the set of sequences of length $n$ that do not contain the $k$-mer $b$, where the letters are drawn with respect to the distribution $\nu$ and where we assign a total probability mass $1$ to this set [^2]. As a consequence of singularity analysis of rational functions, [@Nicodeme2011] proves that $$\label{eq:ExpHn} \Ex(H_n)=c_1\!\times\! n +c_2 +O(A^n)\qquad (A<1).$$ It is clear that, using Landau's asymptotic $\Theta$ notation, we do not have $$\mathfrak{p}_n = \Theta (\Ex(H_n)),$$ since, for $n$ large enough, this would imply that $\mathfrak{p}_n>1$. However, for $$\max_{\alpha\neq \beta\in\mathcal{A}}(p_{\alpha,\beta}(1))\ll 1\quad\text{and}\quad n \ll 1\big/\!\!\max_{\alpha\neq\beta\in\mathcal{A}}(p_{\alpha,\beta}(1)),$$ the event that two or more putative-hit positions simultaneously mutate to produce the $k$-mer $b$ in sequence $S(1)$ has a probability of second order. Under these conditions, we have $$\label{eq:asymppn} \mathfrak{p}_n \approx \rho_{b,\nu,p}\times \Ex(H_n)= \rho_{b,\nu,p}\times (c_1\!\times\! n +c_2) +O(A^n),$$ where $\rho_{b,\nu,p}$ is a constant of the order of magnitude of the constants $p_{\alpha, \beta}(1)$ with $\alpha\neq \beta$, its value depending upon these constants, the distribution $\nu$ and the correlation structure of the $k$-mer $b$. See Figure \[fig:flexn\] for examples. #### Available data. URL provides access to the values of the expected waiting time $\Ex(T_n)$ and the probability $\mathfrak{p}_n$ for $n=1000$ and $n=2000$ for all $k$-mers with $k$ from $5$ to $10$. It is therefore possible to compute $\mathfrak{p}_n$ and $\Ex(T_n)$ for all these $k$-mers and for all $n$ from these data. It took 10 hours to compute the data. Conclusion {#sec:conclusion} ========== Using automata theory, we have developed a new procedure to compute the waiting time until a given TF binding site emerges at random in a human promoter sequence. In contrast to [@BehVin2010], we do not have to rely on any assumptions regarding the overlap structure of the TF binding site of interest. Thus, our computations are more accurate. 
Assuming model M0, whose parameters have been estimated in the same way as in , applying our automaton approach to all $k$-mers, $k$ ranging from 5 to 10, and comparing the resulting expected waiting times to those obtained by [@BehVin2010], we particularly observe that highly autocorrelated words like `CCCCC` or `AAAGG` actually tend to emerge slower than predicted by [@BehVin2010]. This slowdown can attain up to 40%, e.g. according to [@BehVin2010], `CCCCC` is predicted to be created in a human promoter of length 1 kb in around 6.304 Mgen while our more accurate method predicts it be generated in around 9.105 Mgen. We have shown that existing TF binding sites (from the database JASPAR; [@jaspar]) feature a significant proportion of autocorrelation. Therefore the assumption of [@BehVin2010] that TF binding sites do not appear self-overlapping when computing waiting times is problematic. The new automaton approach now incorporates the possibility of TF binding sites appearing self-overlapping into the model. Hence, the automaton approach highly improves the accuracy of the estimations for waiting times. We observed a linear behaviour with respect to the length of the promoters for the probability of finding a $k$-mer at generation $1$ that is not present at generation $0$. This implies a highly flexible and efficient approach for computing this probability for any promoter length, and in particular for lengths of highest interest, i.e. between 300 and 3000 bp. This also induces a hyperbolic behaviour for the waiting time. #### Acknowledgements. We thank Martin Vingron who initiated the previous work of [@BehVin2010], of which the present article is a follow-up. #### Disclosure statement. No competing financial interests exist. [21]{} \[1\][\#1]{} \[1\][`#1`]{} Arndt, P. F. and Hwa, T., 2005. Identification and measurement of neighbor-dependent nucleotide substitution processes. *Bioinformatics* 21, 2322–2328. Bassino, F., Clément, J., Fayolle, J., and Nicodème, P., 2008. Constructions for clump statistics. In Jacquet, P., ed., *Proceedings of the Fifth Colloquium on Mathematics and Computer Science, Blaubeuren, Germany*, 183–198. DMTCS.\ . Behrens, S. and Vingron, M., 2010. Studying the evolution of promoters: a waiting time problem. *J. Comput. Biol* 17, 1591–1606.\ . Crochemore, M. and Rytter, W., 1994. *Text Algorithms*. Oxford University Press. Dowell, R. D., 2010. Transcription factor binding variation in the evolution of gene regulation. *Trends in Genetics* 26, 468 – 475. Duret, L. and Arndt, P. F., 2008. The impact of recombination on nucleotide substitutions in the human genome. *PLoS Genet.* 4. Durrett, R. and Schmidt, D., 2007. Waiting for regulatory sequences to appear. *Ann. Appl. Probab.* 17, 1–32. Flajolet, P. and Sedgewick, R., 2009. *Analytic Combinatorics*. Cambridge University Press. Goulden, I. and Jackson, D., 1983. *Combinatorial [E]{}numeration*. John Wiley. New-York. Guibas, L. and Odlyzko, A., 1981. Periods in strings. *J. Combin. Theory* A, 19–42. Guibas, L. and Odlyzko, A., 1981. Strings overlaps, pattern matching, and non-transitive games. *J. Combin. Theory* A, 108–203. Hopcroft, J., Motwani, R., and Ullman, J., 2001. *Introduction to Automata Theory, Languages and Computation*. Addison-Wesley. Karlin, S. and Taylor, H., 1975. *A First Course in Stochastic Processes*. Academic Press. Second Edition, 557 pages. Kunarso, G., Chia, N.-Y., Jeyakani, J., Hwang, C., Lu, X., Chan, Y.-S., Ng, H.-H., and Bourque, G., 2010. 
Transposable elements have rewired the core regulatory network of human embryonic stem cells. *Nature Genetics* 42, 631–634. Lothaire, M., 2005. *Applied Combinatorics on Words*. Encyclopedia of Mathematics. Cambridge University Press. Nicodème, P., 2011. A clump analysis for waiting times in DNA evolution. Personal communication, <http://www.lix.polytechnique.fr/Labo/Pierre.Nicodeme/pncpm12.pdf>. Odom, D. T., Dowell, R. D., Jacobsen, E. S., Gordon, W., Danford, T. W., MacIsaac, K. D., Rolfe, P. A., Conboy, C. M., Gifford, D. K., and Fraenkel, E., 2007. Tissue-specific transcriptional regulation has diverged significantly between human and mouse. *Nat. Genet.* 39, 730–732. Portales-Casamar, E., Thongjuea, S., Kwon, A. T., Arenillas, D., Zhao, X., Valen, E., Yusuf, D., Lenhard, B., Wasserman, W. W., and Sandelin, A., 2010. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles. *Nucl. Acids Res.* 38, D105–110. Schmidt, D., Wilson, M. D., Ballester, B., Schwalie, P. C., Brown, G. D., Marshall, A., Kutter, C., Watt, S., Martinez-Jimenez, C. P., Mackay, S., Talianidis, I., Flicek, P., and Odom, D. T., 2010. Five-vertebrate ChIP-seq reveals the evolutionary dynamics of transcription factor binding. *Science* 328, 1036–1040. Stone, J. R. and Wray, G. A., 2001. Rapid evolution of cis-regulatory sequences via local point mutations. *Mol. Biol. Evol.* 18, 1764–1770. Wray, G. A., Hahn, M. W., Abouheif, E., Balhoff, J. P., Pizer, M., Rockman, M. V., and Romano, L. A., 2003. The evolution of transcriptional regulation in eukaryotes. *Mol. Biol. Evol.* 20, 1377–1419.

[^1]: corresponding author

[^2]: This is done by unconditioning with respect to the fact that $b$ does not occur in $S(0)$, i.e., by dividing the resulting expressions by $\Pr(S(0)\not\in \mathcal{A}^{\star}b\mathcal{A}^{\star})$; see Equation .
--- abstract: 'We derive an exact formula for the dimensionality of the Hilbert space of the boundary states of $SU(2)$ Chern-Simons theory, which, according to the recent work of Ashtekar [*et al*]{}, leads to the Bekenstein-Hawking entropy of a four dimensional Schwarzschild black hole. Our result stems from the relation between the (boundary) Hilbert space of the Chern-Simons theory with the space of conformal blocks of the Wess-Zumino model on the boundary 2-sphere.' address: 'The Institute of Mathematical Sciences, CIT Campus, Madras 600113, India. ' author: - 'Romesh K. Kaul and Parthasarathi Majumdar[^1]' title: Quantum Black Hole Entropy --- .2in The issue of the Bekenstein-Hawking (B-H) [@bek], [@haw] entropy of black holes has been under intensive scrutiny for the last couple of years, following the derivation of the entropy of certain extremal charged black hole solutions of toroidally compactified heterotic string and also type IIB superstring from the underlying string theories [@sen], [@strv]. In the former case of the heterotic string, the entropy was shown to be proportional to the area of the ‘stretched’ horizon of the corresponding extremal black hole, while in the latter case it turned out to be [*precisely*]{} the B-H result. The latter result was soon generalized to a large number of four and five dimensional black holes of type II string theory and M-theory (see [@hor] for a review), all of which could be realized as certain D-brane configurations and hence saturated the BPS bound. Unfortunately, the simplest black hole of all, the four dimensional Schwarzschild black hole, does not appear to be describable in terms of BPS saturating D-brane configurations, and hence is not seemingly amenable to such a simple analysis.[^2] Results for near-extremal black holes, modeled as near-BPS states of string and M-theory, have also been obtained [@hor], pertaining both to their entropy and also Hawking radiation. In the majority of the cases considered, complicated (sometimes intersecting) configurations of D-branes were treated in the ‘effective string approximation’ [@dm]; in this approximation, computations effectively reduce to that in a two dimensional conformal field theory [@str]. The B-H entropy of a four dimensional Schwarzschild black hole has been obtained, for large areas of the event horizon, within the alternative framework of canonical quantum gravity [@ash1] by Krasnov and Ashtekar [*et al*]{} [@kras], [@ash2], up to an overall constant of $O(1)$ known as the Immirzi parameter [@imm] (which essentially characterizes inherent ambiguities in the quantization scheme, and is therefore present in the quantum theory even in the absence of black holes). The black hole spacetime is considered as a 4-fold bounded by the surface at asymptotic null infinity (on which standard asymptotically flat boundary conditions hold) and the event horizon (on which boundary conditions, special to the spherically symmetric Schwarzschild geometry are assumed). The action embodying the assumed boundary conditions consists of, over and above the Einstein-Hilbert action (in the Ashtekar variables [@ash1]), an $SU(2)$ Chern-Simons (CS) gauge theory ‘living’ on a coordinate chart of constant finite cross sectional area $A_S$ (and possessing some other properties) on the horizon. The Chern-Simons coupling parameter $k \sim A_S$. 
In the Hamiltonian formulation of the theory, the boundary conditions are implemented as a condition on the phase space variables restricted to the boundary (2-sphere) of a spacelike 3-surface intersecting the constant area coordinate patch on the horizon. This results in a reducible connection variable which is gauge fixed (on the boundary) to the $U(1)$ subgroup of the $SU(2)$ invariance of the CS theory. In the quantum theory, the boundary conditions, implemented as an operator equation, imply that the space of surface (boundary) quantum states is composed of subspaces given by the Hilbert space of the $U(1)$ CS theory, on the boundary 2-sphere [*with finitely many punctures $p$ labeled by spins $j_p$*]{}. Now, it has been argued [@bal] that boundary (or ‘edge’) states play the major role in producing black hole entropy. Likewise, the entropy of the black hole under consideration is assumed to emerge only from the surface states, and defined by tracing over the ‘volume’ states. It is then given by $S_{bh} = ln N_{bh}$ where $N_{bh}$ is the number of boundary CS states. This number is next obtained from the dimensionality of the Hilbert space of boundary $U(1)$ CS states on the punctured 2-sphere. Finally, this is compared with the spectrum [@rov] of the area operator [@ashle] in canonical quantum gravity, known, upto the Immirzi parameter, in terms of spins $j_p$ on the punctures $p$. For large number of punctures and large area, it is seen to be proportional to the logarithm of the dimensionality of the space of boundary CS states (i.e., the entropy). A particular choice of the Immirzi parameter then reproduces the Bekenstein Hawking value. Recall that the B-H entropy of a black hole was proposed on the basis of semiclassical analyses, and as such, is by no means beyond modification in the full quantum theory. One of the simplifying steps in the above derivation was the reduction of the gauge group of the CS theory from the original $SU(2)$ to $U(1)$, by gauge fixing the connection on the boundary 2-sphere. As admitted by the authors, this is not a necessary step. Indeed, there exist powerful results relating the state space of CS theories on 3-folds with boundary to the conformal blocks of an $SU(2)$ Wess-Zumino model of level $k$ on that boundary [@wit]. It stands to reason that the entropy, derived from such considerations, will be more exact quantum mechanically. In this paper, we focus on such a calculation of the entropy, using the formalism of two dimensional conformal field theory. More specifically, we compute the number of conformal blocks of an $ SU(2)_k $ Wess-Zumino theory on a punctured 2-sphere, for a set of punctures ${\cal P}\equiv \{1,2, \dots, p\}$ where these punctures are labeled by the spin $j_p$, [*for arbitrary level $k$ (corresponding to an arbitrary area of the cross section of the patch chosen on the horizon)*]{}.[^3] This number can be computed in terms of the so-called fusion matrices $N_{ij}^{~~r}$ [@dms] $$N^{\cal P}~=~~\sum_{\{r_i\}}~N_{j_1 j_2}^{~~~~r_1}~ N_{r_1 j_3}^{~~~~r_2}~ N_{r_2 j_4}^{~~~~r_3}~\dots \dots~ N_{r_{p-2} j_{p-1}}^{~~~~~~~~j_p} ~ \label{fun}$$ Here, each matrix element $N_{ij}^{~~r}$ is $1 ~or~ 0$, depending on whether the primary field $[\phi_r]$ is allowed or not in the conformal field theory fusion algebra for the primary fields $[\phi_i]$ and $[\phi_j] $   ($i,j,r~ =~ 0, 1/2, 1, ....k/2$): $$[\phi_i] ~ \otimes~ [\phi_j]~=~~\sum_r~N_{ij}^{~~r} [\phi_r]~ . \label{fusal}$$ Eq. 
(\[fun\] ) gives the number of conformal blocks with spins $j_1, j_2, \dots, j_p$ on $p$ external lines and spins $r_1, r_2, \dots, r_{p-2}$ on the internal lines. We next take recourse to the Verlinde formula [@dms] $$N_{ij}^{~~r}~=~\sum_s~{{S_{is} S_{js} S_s^{\dagger r }} \over S_{0s}}~, \label{verl}$$ where, the unitary matrix $S_{ij}$ diagonalizes the fusion matrix. Upon using the unitarity of the $S$-matrix, the algebra (\[fun\]) reduces to $$N^{\cal P}~=~ \sum_{r=0}^{k/2}~{{S_{j_1~r} S_{j_2~r} \dots S_{j_p~r}} \over (S_{0r})^{p-2}}~. \label{red}$$ Now, the matrix elements of $S_{ij}$ are known for the case under consideration ($SU(2)_k$ Wess-Zumino model); they are given by $$S_{ij}~=~\sqrt{\frac2{k+2}}~sin \left({{(2i+1)(2j+1) \pi} \over k+2} \right )~, \label{smatr}$$ where, $i,~j$ are the spin labels, $i,~j ~=~ 0, 1/2, 1, .... k/2$. Using this $S$-matrix, the number of conformal blocks for the set of punctures ${\cal P}$ is given by $$N^{\cal P}~=~{2 \over {k+2}}~\sum_{r=0}^{ k/2}~{ {\prod_{l=1}^p sin \left( {{(2j_l+1)(2r+1) \pi}\over k+2} \right) } \over {\left[ sin \left( {(2r+1) \pi \over k+2} \right)\right]^{p-2} }} ~. \label{enpi}$$ In the notation of [@ash2], eq. (\[enpi\]) gives the dimensionality, $dim ~{\cal H}^{\cal P}_S$, [*for arbitrary area of the horizon $k$ and arbitrary number of punctures*]{}. The dimensionality of the space of states ${\cal H_S}$ of CS theory on three-manifold with $S^2$ boundary is then given by summing $N^{\cal P}$ over all sets of punctures ${\cal P}: ~ N_{bh}~=~\sum_{\cal P} N^{\cal P}$. Then, the entropy of the black hole is given by $S~=~\log N_{bh}$. Observe now that eq. (\[enpi\]) can be rewritten, with appropriate redefinition of dummy variables and recognizing that the product can be written as a multiple sum, $$N^{\cal P}~=~\left ( 2 \over {k+2} \right) ~\sum_{l=1}^{k+1} sin^2 \theta_l~\sum_{m_1 = -j_1}^{j_1} \cdots \sum_{m_p=-j_p}^{j_p} \exp \{ 2i(\sum_{n=1}^p m_n)~ \theta_l \}~, \label{summ}$$ where, $\theta_l ~\equiv~ \pi l /(k+2)$. Expanding the $\sin^2 \theta_l$ and interchanging the order of the summations, a few manipulations then yield $$N^{\cal P}~=~\sum_{m_1= -j_1}^{j_1} \cdots \sum_{m_p=-j_p}^{j_p} \left[ ~\delta_{(\sum_{n=1}^p m_n), 0}~-~\frac12~ \delta_{(\sum_{n=1}^p m_n), 1}~-~ \frac12 ~\delta_{(\sum_{n=1}^p m_n), -1} ~\right ], \label{exct}$$ where, we have used the standard resolution of the periodic Kronecker deltas in terms of exponentials with period $k+2$, $$\delta_{(\sum_{n=1}^p m_n), m}~=~ \left( 1 \over {k+2} \right)~ \sum_{l=0}^{k+1} \exp \{2i~[ (\sum_{n=1}^p m_n)~-~m] \theta_l \}~. \label{resol}$$ Notice that the explicit dependence on $k+2$ is no longer present in the exact formula (\[exct\]). For large $k$ and large number of punctures $p$ our result (\[enpi\]) reduces to $$N^{\cal P}~~\sim~~\prod_{l=1}^p~(2j_l~+~1)~~\label{bigk}$$ in agreement with the result of ref. [@ash2]. Thus the semiclassical B-H formula is valid in this approximation. To see if the B-H formula relating entropy with area is valid even in the quantum theory, one needs to obtain the eigenvalues of the area operator without any assumptions about their size. This might entail a modified regularized operator which measures horizon area in the quantum theory and is, in general, a constant of motion, i.e., commutes with the Hamiltonian constraint. It appears that methods of two dimensional conformal field theory effectively describe [*quantitative*]{} quantum physics of the black holes in four spacetime dimensions. 
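For concreteness, the counting formulas above can be checked numerically. The sketch below (ours, not part of the original derivation) evaluates Eq. (\[enpi\]) for an arbitrary set of puncture spins and compares it with the Kronecker-delta form of Eq. (\[exct\]) and with the large-$k$, large-$p$ product of Eq. (\[bigk\]); the spin assignment and the level $k$ are merely illustrative choices, and $k$ is taken large enough that the periodic Kronecker deltas reduce to ordinary ones.

```python
# Sketch (not from the paper): evaluate the number of conformal blocks N^P of
# Eq. (enpi) for SU(2)_k, and compare with the Kronecker-delta formula of
# Eq. (exct) and the large-k product of Eq. (bigk). Spins and level are
# arbitrary illustrative choices.
import math
from itertools import product

def n_blocks_verlinde(js, k):
    """Eq. (enpi): N^P = 2/(k+2) sum_r prod_l sin((2j_l+1)(2r+1)pi/(k+2)) / sin((2r+1)pi/(k+2))^(p-2)."""
    p = len(js)
    total = 0.0
    for two_r in range(k + 1):                      # r = 0, 1/2, ..., k/2
        theta = (two_r + 1) * math.pi / (k + 2)
        num = 1.0
        for j in js:
            num *= math.sin((2 * j + 1) * theta)
        total += num / math.sin(theta) ** (p - 2)
    return 2.0 / (k + 2) * total

def n_blocks_exact(js):
    """Eq. (exct): configurations with sum(m_n)=0, minus half of those with sum(m_n)=+1 or -1."""
    ranges = [[m / 2.0 for m in range(int(-2 * j), int(2 * j) + 1, 2)] for j in js]
    count = 0.0
    for ms in product(*ranges):
        s = sum(ms)
        if abs(s) < 1e-9:
            count += 1.0
        elif abs(abs(s) - 1.0) < 1e-9:
            count -= 0.5
    return count

js = [0.5, 0.5, 1.0, 1.0]   # puncture spins (illustrative)
k = 20                      # level, proportional to the horizon area (illustrative)
print(n_blocks_verlinde(js, k))          # agrees with the exact count below
print(n_blocks_exact(js))                # = 2 for this spin assignment
print(math.prod(2 * j + 1 for j in js))  # large-k, large-p estimate of Eq. (bigk); rough here, since p is small
```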
In our work, the conformal field theory enters through the relation between the boundary states of the $SU(2)$ Chern-Simons theory and the conformal blocks of the corresponding conformal field theory. The theory of irreducible representations of the simplest of the conformal field theories, the $SU(2)_k$ Wess-Zumino model, is crucial to yield what may be thought of as a quantum generalization of the semiclassical B-H entropy of the black hole. Extensions of our results to the case of charged and rotating black holes will hopefully constitute a future publication, as will attempts to understand Hawking radiation within the canonical quantum gravity approach. Discussions with S. Carlip, T. R. Govindarajan and C. Rovelli are gratefully acknowledged. J. Bekenstein, Phys. Rev. [**D7**]{}, 2333 (1973). S. Hawking, Comm. Math. Phys. [**43**]{}, 190 (1975). A. Sen, Mod. Phys. Lett. [**A 10**]{}, 2081 (1995), hep-th/9504147. A. Strominger and C. Vafa, Phys. Lett. [**B379**]{}, 99 (1996), hep-th/9601029. G. Horowitz, ‘Black Hole Entropy from Near-Horizon Microstates’, hep-th/9704072. S. Das and S. Mathur, Nucl. Phys. [**B478**]{}, 561 (1996), hep-th/9606185. A. Strominger, ‘Quantum Black Hole Entropy from Near-Horizon Microstates’, hep-th/9712252. A. Ashtekar, [*Lectures on Non-perturbative Quantum Gravity*]{} (World Scientific, 1991). K. Krasnov, “On Quantum Statistical Mechanics of a Schwarzschild black hole", gr-qc/9605047; “Quantum geometry and thermal radiation from black holes", gr-qc/9710006. A. Ashtekar, J. Baez, A. Corichi and K. Krasnov, “Quantum Geometry and Black Hole Entropy", gr-qc/9710007, and references therein. G. Immirzi, “Quantum Gravity and Regge Calculus", gr-qc/9701052. A. P. Balachandran, L. Chandar and A. Momen, Nucl. Phys. [**B461**]{}, 581 (1996); S. Carlip, Phys. Rev. [**D51**]{}, 632 (1995). C. Rovelli and L. Smolin, Phys. Rev. [**D52**]{}, 5743 (1995); S. Fritelli, L. Lehner, C. Rovelli, Class. Quant. Grav. [**13**]{}, 2921 (1996). A. Ashtekar and J. Lewandowski, Class. Quant. Grav. [**14**]{}, 55 (1997), and references therein. E. Witten, Comm. Math. Phys. [**121**]{}, 351 (1989). P. Di Francesco, P. Mathieu and D. Senechal, [*Conformal Field Theory*]{}, p. 375 [*et seq*]{} (Springer-Verlag, 1997). L. Smolin, Jour. Math. Phys. [**36**]{}, 6417 (1995).

[^1]: email: kaul, partha, @imsc.ernet.in

[^2]: Recently, Sfetsos and Skenderis, hep-th/9711138, have obtained the B-H result for the 4d Schwarzschild black hole by a U-duality map to the 3d BTZ black hole, whose entropy has been calculated, again for large areas, by Carlip [@bal]. See also Arguiro [*et al*]{}, hep-th/9801053.

[^3]: Similar ideas using self-dual boundary conditions as outer boundary conditions have appeared in [@smol1].
--- abstract: 'The azimuthal correlations of direct photons ($\gamma_{_{dir}}$) with high transverse momentum ($p_{_{T}}$), produced at mid-rapidity ($|\eta^{\gamma_{_{dir}}}|<1$) in Au+Au collisions at center-of-mass energy $\sqrt{s_{_{NN}}}=200$ GeV, are measured and compared to those of neutral pions ($\pi^{0}$) in the same kinematic range. The measured azimuthal elliptic anisotropy of direct photon, $v_{_{2}}^{\gamma_{_{dir}}}(p_{_{T}})$, at high $p_{_{T}}$ ($8< p_{_{T}}^{\gamma_{_{dir}}}<20$ GeV/$c$) is found to be smaller than that of $\pi^{0}$ and consistent with zero when using the forward detectors ($2.4 <|\eta|< 4.0$) in reconstructing the event plane. The associated charged hadron spectra recoiled from $\gamma_{_{dir}}$ show more suppression than those recoiled from $\pi^{0}$ $(I_{_{AA}}^{\gamma_{_{dir}}-h^{\pm}} < I_{_{AA}}^{\pi^{0}-h^{\pm}})$ in the new measured kinematic range $12< p_{_{T}}^{\gamma_{_{dir}},\pi^{0}}<24$ GeV/$c$ and $3< p_{_{T}}^{assoc}<24$ GeV/$c$.' address: | University of Mississippi, Oxford, USA\ Texas A$\&$M University, College Station, USA author: - 'Ahmed M. Hamed (for the STAR Collaboration)' title: 'High-$p_{_{T}}$ Direct Photon Azimuthal Correlation Measurements' --- Electromagnetic probes ,high-$p_{_{T}}$ direct photons ,STAR Introduction {#intro} ============ A major goal of measurements at the Relativistic Heavy Ion Collider (RHIC) is to quantify the properties of the QCD matter created in heavy-ion collisions at high energy [@STAR_white]. Unlike quarks and gluons, photons do not fragment into hadrons and can be directly observed as a final state particle. Furthermore, due to their negligible coupling to the QCD matter in contrast to hadrons, direct photons are considered as a calibrated probe for the QCD medium. The previous measurements at RHIC indicate unexpected finite values of azimuthal elliptic anisotropy parameter $\it v_{_{2}}$ of charged hadrons at high-$p_{_{T}}$ [@STAR1]. The measured $\it v_{_{2}}$ at high-$p_{_{T}}$ is beyond the applicability of hydrodynamic models, and the path-length dependence of jet quenching is the only proposed explanation of $\it v_{_{2}}$ at high-$p_{_{T}}$ [@Edward]. The $\it v_{_{2}}^{\gamma_{_{dir}}}$ measurement would provide a gauge for the energy loss at high-$p_{_{T}}$. The high-$p_{_{T}}$ $\gamma_{_{dir}}$ sample unbiased spatial distribution of the hard scattering vertices in the QCD medium [@Wang_idea], in contrast to hadrons which suffer from the geometric biases. Therefore, a comparison between the spectra of the away-side charged hadrons associated with $\gamma_{_{dir}}$ vs. $\pi^{0}$ can provide a benchmark for the energy loss and its dependence on the path-length. Although the previous measurements have indicated similar level and pattern of suppression for the away-side of $\gamma_{_{dir}}$ and $\pi^{0}$ [@STAR2], the current work explores a softer region in the fragmentation functions where a more significant difference is expected [@Renk_gamma0]. Analysis and Results ==================== Electromagnetic neutral clusters -------------------------------- The STAR detector is well suited for measuring azimuthal angular correlations due to the large coverage in pseudorapidity and full coverage in azimuth ($\phi$). While the Barrel Electromagnetic Calorimeter (BEMC) [@STAR_BEMC] measures the electromagnetic energy with high resolution, the Barrel Shower Maximum Detector (BSMD) provides fine spatial resolution and enhances the rejection power for the hadrons. 
The Time Projection Chamber (TPC: $|\eta|<1$) [@STAR_TPC] identifies charged particles, measures their momenta, and allows for a charged-particle veto cut with the BEMC matching. The Forward Time Projection Chamber (FTPC: $2.4 <|\eta|< 4.0$) [@STAR_FTPC] is used to measure the charged particles’ momenta and to reconstruct the event plane angle. Using the BEMC to select events (*i.e.* “trigger") with high-$p_{_{T}}$ $\gamma$, the STAR experiment collected an integrated luminosity of 23 $p$b$^{-1}$ of p+p collisions in 2009 and 973 $\mu$b$^{-1}$ of Au+Au collisions in 2011. In this analysis, events having a primary vertex within $\pm 55$ cm of the center of the TPC along the beamline in Au+Au and $\pm 80$ cm in p+p are selected. In addition, each event must have at least one electromagnetic cluster with $E_{_{T}} > 8$ GeV for the event plane correlation analysis and $E_{_{T}} > 12$ GeV for the charged hadron correlation analysis. More than 97$\%$ of these clusters have deposited energy greater than 0.5 GeV in each layer of the BSMD. A trigger tower is rejected if it has a track with $p > 3.0 $ GeV/$c$ pointing to it, which reduces the number of the electromagnetic clusters by only $\sim 7$%. $v_{_{2}}$ of neutral particles ------------------------------- The $v_{_{2}}$ is determined using the standard method [@flow2]: $$v_{_{2}}(p_{_{T}}) = \langle \langle \cos 2(\phi_{p_{_{T}}}-\psi_{_{\textsc{\scriptsize{EP}}}})\rangle \rangle,$$ where the brackets denote statistical averaging over particles and events, $\phi_{p_{_{T}}}$ is the azimuthal angle of the neutral particle with certain value of $p_{_{T}}$, and $\psi_{_{\textsc{\scriptsize{EP}}}}$ is the azimuthal angle of the event plane. The event plane is reconstructed from charged particles, within the detector acceptance, with $p_{_{T}} < 2 $ GeV/$c$, and determined by $$\psi_{_{\textsc{\scriptsize{EP}}}} = \frac{1}{2} \tan^{-1} (\frac{\sum_{i}\sin(2\phi_{i})}{\sum_{i}\cos(2\phi_{i})} ),$$ where $\phi_{i}$ are the azimuthal angles of all the particles used to define the event plane. In this analysis, the charged-track quality criteria are similar to those used in previous STAR analyses [@flow]. The event plane is measured using two different detectors in their pseudorapidity coverage: 1) using all the selected tracks inside the TPC, and 2) using all tracks inside the FTPC in order to reduce the “non-flow" contributions (azimuthal correlations not related to the event plane). Since the event plane is only an approximation to the true reaction plane, the observed correlation is divided by the event plane resolution. The event plane resolution is estimated using the sub-event method in which the full event is divided up randomly into two sub-events as described in [@flow2]. Biases due to the non-uniform acceptance of the detector are removed according to the method in [@shift]. Azimuthal correlations of a neutral trigger particle with charged hadrons ------------------------------------------------------------------------- The azimuthal correlations of a neutral trigger particle with charged hadrons, measured as the number of associated particles per neutral cluster per $\Delta\phi$ (“correlation functions”), are used in both p+p and Au+Au collisions to determine the (jet) associated particle yields in the near- ($\Delta\phi\sim$ 0) and away-sides ($\Delta\phi\sim$ $\pi$). 
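To make the event-plane procedure of Eqs. 1 and 2 concrete, a schematic sketch is given below. It is not the STAR analysis code: the track containers and the simple two-sub-event resolution estimate are illustrative assumptions standing in for the full procedure of [@flow2].

```python
# Schematic sketch of the event-plane method of Eqs. (1)-(2); not STAR analysis
# code. Each "event" is a list of azimuthal angles (radians) of the charged
# tracks used for the event plane; the resolution estimate below is a simple
# two-sub-event form and is an illustrative assumption.
import math, random

def event_plane_angle(phis):
    """Eq. (2): psi_EP = 0.5 * atan2(sum sin(2*phi), sum cos(2*phi))."""
    qy = sum(math.sin(2.0 * p) for p in phis)
    qx = sum(math.cos(2.0 * p) for p in phis)
    return 0.5 * math.atan2(qy, qx)

def sub_event_resolution(events):
    """Crude resolution estimate: sqrt(<cos 2(psi_a - psi_b)>) from two random sub-events."""
    vals = []
    for tracks in events:
        tracks = list(tracks)
        random.shuffle(tracks)
        half = len(tracks) // 2
        psi_a = event_plane_angle(tracks[:half])
        psi_b = event_plane_angle(tracks[half:])
        vals.append(math.cos(2.0 * (psi_a - psi_b)))
    return math.sqrt(max(sum(vals) / len(vals), 1e-12))

def v2_observed(triggers_and_events):
    """Eq. (1): <cos 2(phi_trigger - psi_EP)>, averaged over triggers and events."""
    vals = [math.cos(2.0 * (phi - event_plane_angle(ep_tracks)))
            for phi, ep_tracks in triggers_and_events]
    return sum(vals) / len(vals)

# The observed correlation is then divided by the event-plane resolution, as
# described in the text:  v2_corrected = v2_observed(...) / sub_event_resolution(...)
```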
The near- and away-side yields, $Y^{n}$ and $Y^{a}$, of associated particles per trigger are extracted by integrating the $\mathrm (1/N_{_{trig}}) dN/d(\Delta\phi)$ distributions over $\mid\Delta\phi\mid$ $\leq$ 0.63 and $\mid\Delta\phi -\pi\mid$ $\leq$ 0.63, respectively. The yield is corrected for the tracking efficiency of charged particles as a function of event multiplicity. Transverse shower profile analysis ---------------------------------- A crucial part of the analysis is to discriminate between showers from $\gamma_{_{dir}}$ and two close $\gamma$’s from high-$p_{_{T}}$ $\pi^{0}$ symmetric decays. At $p_{_{T}}^{\pi^0} \sim 8$ GeV/$c$, the angular separation between the two $\gamma$’s resulting from a $\pi^{0}$ decay is small, but a $\pi^{0}$ shower is generally broader than a single $\gamma$ shower. The BSMD is capable of $2\gamma$/$1\gamma$ separation up to $p_{_{T}}^{\pi^0} \sim 24$ GeV/$c$ due to its high granularity ($\Delta\eta\sim 0.007$, $\Delta\phi\sim 0.007$). The shower shape is quantified as the cluster energy, measured by the BEMC, normalized by the position-weighted energy moment, measured by the BSMD strips [@STAR2]. The shower profile cuts were tuned to obtain a nearly $\gamma_{_{dir}}$-free ($\pi^{0}_{_{rich}}$) sample and a sample rich in $\gamma_{_{dir}}$ ($\gamma_{_{rich}}$). Since the shower-shape analysis is only effective for rejecting two close $\gamma$ showers, the $\gamma_{_{rich}}$ sample contains a mixture of direct photons and contamination from fragmentation photons ($\gamma_{_{frag}}$) and photons from asymmetric hadron ($\pi^0$ and $\eta$) decays. The $v_{_{2}}^{\gamma_{_{rich}}}$ and $v_{_{2}}^{\pi^{0}}$ are measured as discussed in section 2.2 and the away (near)-side yields of associated particles per $\gamma_{_{rich}}$ and $\pi^{0}_{_{rich}}$ triggers ($Y^{a(n)}_{\gamma_{_{rich}}+h}$ and $Y^{a(n)}_{\pi^{0}_{_{rich}}+h}$) are measured as discussed in section 2.3. *v$_{_{2}}$ of direct photons* ------------------------------ Assuming zero near-side yield for $\gamma_{_{dir}}$ triggers and a sample of $\pi^{0}_{_{rich}}$ free of $\gamma_{_{dir}}$, the $\it v_{_{2}}^{\gamma_{_{dir}}}$ is given by: $$v_{_{2}}^{\gamma_{_{dir}}}=\frac {v_{_{2}}^{\gamma_{_{rich}}}- {\cal{R}}v_{_{2}}^{\pi^{0}_{_{rich}}}} {1-\cal{R}},$$ where $\cal{R}$=$\frac{N^{\pi^{0}_{_{rich}}}}{N^{\gamma_{_{rich}}}}$, and the numbers of $\pi^{0}_{rich}$ and $\gamma_{_{rich}}$ triggers are represented by $N^{\pi^{0}_{_{rich}}}$ and $N^{\gamma_{_{rich}}}$ respectively. Although the $\cal{R}$ quantity approximates all background triggers in the $\gamma_{_{rich}}$ sample to the measured $\pi^{0}_{_{rich}}$ triggers, all background to $\gamma_{_{dir}}$ is subtracted assuming that all background triggers have the same correlation function as the $\pi^{0}_{_{rich}}$ sample [@STAR2]. The value of $\cal{R}$ is measured in [@STAR2] and found to be $\sim 30\%$ in central Au+Au. In Eq. 3 all background sources for $\gamma_{_{dir}}$ are assumed to have the same $\it v_{_{2}}$ as the measured $\pi^{0}$.\ \ Figures 1 and 2 show the $\it v_{_{2}}^{\pi^{0}}$ and $v_{_{2}}^{\gamma_{_{dir}}}$ for ($8< p_{_{T}}^{\gamma_{_{dir}}}<20$ GeV/$c$) from Au+Au data set of Run 2011 using the TPC ($|\eta|<1$) and FTPC ($2.4 <|\eta|< 4.0$). The results of Fig. 1 are consistent with those from different STAR data set of Run 2007 [@ahmed], and the results of Fig. 2 agree with other measurements [@ALICE; @PHENIX]. While using the FTPC in determining the event plane (Fig. 
2) the $\it v_{_{2}}^{\gamma_{_{dir}}}$ is consistent with zero. Assuming the dominant source of direct photons is prompt hard production, the zero value implies no remaining bias in the event-plane determination. Accordingly, the measured value of $\it v_{_{2}}^{\pi^{0}}$ would be the effect of path-length dependent energy loss. Extraction of $\gamma_{_{dir}}$ associated yields \[subsec:gammadir\] --------------------------------------------------------------------- Assuming zero near-side yield for $\gamma_{_{dir}}$ triggers and a sample of $\pi^{0}_{_{rich}}$ free of $\gamma_{_{dir}}$, the away-side yield of hadrons correlated with the $\gamma_{_{dir}}$ is extracted as $$\begin{aligned} Y_{\gamma_{_{dir}}+h}&=\frac{Y^{a}_{\gamma_{_{rich}}+h}-{\cal{R}} Y^{a}_{\pi^{0}_{_{rich}}+h}}{1-{\cal{R}}}, \nonumber \\ {\rm where~} {\cal{R}}&=\frac{N^{\pi^{0}_{_{rich}}}}{N^{\gamma_{_{rich}}}}=\frac{Y^{n}_{\gamma_{_{rich}}+h}}{Y^{n}_{\pi^{0}_{_{rich}}+h}}, {\rm ~~~and~} 1-{\cal{R}}=\frac{N^{\gamma_{_{dir}}}}{N^{\gamma_{_{rich}}}}. \label{eq:gammadir}\end{aligned}$$ Here, $Y^{a(n)}_{\gamma_{_{rich}}+h}$ and $Y^{a(n)}_{\pi^{0}_{_{rich}}+h}$ are the away (near)-side yields of associated particles per $\gamma_{_{rich}}$ and $\pi^{0}_{_{rich}}$ triggers, respectively. The ratio ${\cal{R}}$ is equivalent to the fraction of “background” triggers in the $\gamma_{_{rich}}$ trigger sample, and $N^{\gamma_{_{dir}}}$ and $N^{\gamma_{_{rich}}}$ are the numbers of $\gamma_{_{dir}}$ and $\gamma_{_{rich}}$ triggers, respectively. The value of ${\cal{R}}$ is found to be $\sim 55\%$ in p+p and decreases to $\sim 30\%$ in central Au+Au with little dependence on $p_{T}^{trig}$. All background to $\gamma_{_{dir}}$ is subtracted with the assumption that the background triggers have the same correlation function as the $\pi^{0}_{_{rich}}$ sample. In order to quantify the away-side suppression, we calculate the quantity $I_{_{AA}}$, which is defined as the ratio of the integrated yield of the away-side associated particles per trigger particle in Au+Au to that in p+p collisions. The values of $I_{_{AA}}^{\gamma_{_{dir}}-h^{\pm}}$ and $I_{_{AA}}^{\pi^{0}-h^{\pm}}$, as shown in Fig. 3, are $z_{_{T}}$ ($z_{_{T}} = p_{_{T}}^{assoc}/p_{_{T}}^{trig})$ independent in agreement with results of [@STAR2] where the recoiled parton from $\gamma_{_{dir}}$ and $\pi^{0}$ experience constant fractional energy loss in the QCD medium. It is also observed that the charged hadron spectra recoiled from $\gamma_{_{dir}}$ show unexpectedly more suppression than those recoiled from $\pi^{0}$ $(I_{_{AA}}^{\gamma_{_{dir}}-h^{\pm}} < I_{_{AA}}^{\pi^{0}-h^{\pm}})$ within the covered kinematics range $12< p_{_{T}}^{\gamma_{_{dir}},\pi^{0}}<24$ GeV/$c$ and $3< p_{_{T}}^{assoc}<24$ GeV/$c$. Conclusions =========== The STAR experiment has reported the first $\it v_{_{2}}^{\gamma_{_{dir}}}$ at high-$p_{_{T}}$ ($8< p_{_{T}}^{\gamma_{_{dir}}}<20$ GeV/$c$), and explored new kinematic range ($12< p_{_{T}}^{\gamma_{_{dir}},\pi^{0}}<24$ GeV/$c$) and ($3< p_{_{T}}^{assoc}<24$ GeV/$c$) for $I_{_{AA}}$ measurements of $\gamma_{_{dir}}-h$ correlations at $\sqrt{s_{_{NN}}}=200$ GeV. Using the mid-rapidity detectors in determining the event plane, the measured value of $\it v_{_{2}}^{\gamma_{_{dir}}}$ is non-zero, and is probably due to biases in the event-plane determination. Using the forward detectors in determining the event plane could eliminate remaining biases, and the measured $\it v_{_{2}}^{\gamma_{_{dir}}}$ is consistent with zero. 
The zero value of $\it v_{_{2}}^{\gamma_{_{dir}}}$ suggests a negligible contribution of jet-medium photons [@Theory2], and negligible effects of ${\gamma_{_{frag}}}$ [@Theory1] on the $v_{_{2}}^{\gamma_{_{dir}}}$ over the covered kinematics range. The measured finite value of $v_{_{2}}^{\pi^{0}}$, using the forward detectors in determining the event plane, is apparently due to the path-length dependence of energy loss. The $\gamma_{_{dir}}-h$ correlation results indicate that the associated charged hadron spectra recoiled from $\gamma_{_{dir}}$ show more suppression than those recoiled from $\pi^{0}$ $(I_{_{AA}}^{\gamma_{_{dir}}-h^{\pm}} < I_{_{AA}}^{\pi^{0}-h^{\pm}})$ within the covered kinematic range, in contrast to the theoretical predictions [@Wang]. The disagreement with the theoretical expectations may indicate that the lost energy is distributed to lower $p_{_{T}}$ of the associated particles in the case of a $\gamma_{_{dir}}$ trigger than a $\pi^{0}$ trigger. To further test this, one must explore the region of low $p_{_{T}}^{assoc}$ and $z_{_{T}}$. STAR Collaboration, J. Adams [*et al*]{}, Nucl. Phys. A [**757**]{}, 102 (2005). STAR Collaboration, J. Adams [*et al*]{}, Phys. Rev. Lett. [**93**]{}, 252301 (2004). E. V. Shuryak, Phys. Rev. C [**66**]{}, 027902 (2002). X.-N.Wang, Z. Huang and I. Sarcevic, Phys. Rev. Lett. [**77**]{}, 231 (1996). STAR Collaboration, B. I. Abelev [*et al*]{}, Phys. Rev. C [**82**]{}, 034909 (2010). T. Renk, Phys. Rev. C[**74**]{}, 034906 (2006). M. Beddo [*et al*]{}, Nucl. Instrum. Meth. A [**499**]{}, 725 (2003). M. Anderson [*et al*]{}, Nucl. Instrum. Meth. A [**499**]{}, 659 (2003). K. H. Ackermann [*et al*]{}, Nucl. Instrum. Meth. A [**499**]{}, 713 (2003). A. M. Poskanzer and S. A. Voloshin , Phys. Rev. C [**58**]{}, 1671 (1998). STAR Collaboration, B. I. Abelev [*et al*]{}, Phys. Rev. C [**77**]{}, 054901 (2008). E877 Collaboration, J. Barrette [*et al*]{}, Phys. Rev. C [**56**]{}, 3254 (1997). A. Hamed (STAR Collaboration) J. Phys: Conf. Ser. [**270**]{}, 012010 (2011). D. Lohner (Alice Collaboration) J. Phys.: Conf. Ser. [**446**]{}, 012028 (2013). PHENIX Collaboration, A. Adare [*et al*]{}, Phys. Rev. Lett.[**109**]{}, 122302 (2012). B. G. Zakharov, JETP Lett. [**80**]{}, 1 (2004). R. J. Fries, B. M$\ddot{u}$ller, and D. K. Srivastava, Phys. Rev. Lett. [**90**]{}, 132301 (2003). H. Zhang [*et al*]{}, Nucl. Phys. A [**830**]{}, 443c (2009).
--- abstract: | Numerical predictions for the global characteristics of proton-proton interactions are given for the LHC energy. Possibilities for the discovery of the antishadow scattering mode and its physical implications are discussed.\ PACS numbers: 12.40.Pp; 13.85.Dz; 13.85.Lg author: - | [S. M. Troshin, N. E. Tyurin]{}\ [Institute for High Energy Physics]{},\ [Protvino, Moscow Region, 142284 Russia]{} title: 'Diffraction at the LHC – antishadow scattering?' ---

Introduction {#introduction .unnumbered}
============

Interest in soft hadronic interactions from the high-energy physics community follows an oscillating pattern in time. The peaks of interest usually coincide with the start of operation of a new machine. At present RHIC is preparing for operation, and the LHC will start to provide first results in the not too distant future. Under these circumstances the interest in experimental and theoretical studies in this field is increasing. There are many open problems in hadron physics at large distances, and their importance has not been overshadowed by the exciting expectations of new particle discoveries in the newly opening energy range of the LHC. The most global characteristic of a hadronic collision is the total cross-section, and the most important problem here is the nature of its rising energy dependence. There are various approaches which provide a total cross-section rising with energy, but the underlying microscopic mechanism leading to this increase remains obscure. However, the growing understanding of how QCD works at large distances could eventually lead to an explanation of this longstanding problem [@mart]. In this connection the TOTEM experiment [@vel], approved recently at the LHC, could be more valuable than just a tool for checking numerous model predictions and background and luminosity estimates. It could have a definite discovery potential, and our main goal in this note is to discuss one such aspect, related to the possible observation of the antishadow scattering mode at the LHC.

When will asymptotics be seen?
==============================

The answer to the above question is currently model dependent. There are many model parameterizations for the total cross-sections using a $\ln^2 s$ dependence for $\sigma_{tot}(s)$. This implies saturation of the Froissart–Martin bound and, what is unnatural, the presence of asymptotic contributions already at very moderate energies. On the other hand, the power-like parameterizations of $\sigma_{tot}(s)$ neglect the Froissart–Martin bound, considering it a matter of distant, unknown asymptopia. It seems that both approaches are limited, and their limitations reflect the real energy range available for the analysis of the experimental data. For example, it is not clear whether the power-like parameterizations respect the unitarity limit for the partial-wave amplitudes, $|f_l(s)|\leq 1$. We are keeping in mind here only the accelerator data (cosmic ray data will be briefly commented on below). Unitarity is an important principle, and the unitarization procedure of some input power-like “amplitude” leads to a complicated energy dependence of $\sigma_{tot}(s)$ which can be approximated by various functions depending on the particular energy range under consideration. Moreover, unitarity implies the appearance of a new scattering mode – antishadow (see [@ech] and references therein).
Here we provide numerical estimates at LHC energies based on the $U$-matrix unitarization method [@ltkhs] and the particular model for $U$-matrix [@chpr] and argue that antishadow mode could be revealed already at the LHC energy $\sqrt{s}=14$ TeV. Antishadow scattering at LHC ============================ In the impact parameter representation the unitarity equation written for the elastic scattering amplitude $f(s,b)$ at high energies has the form $$Im f(s,b)=|f(s,b)|^2+\eta(s,b) \label{unt}$$ where the inelastic overlap function $\eta(s,b)$ is the sum of all inelastic channel contributions. It can be expressed as a sum of $n$–particle production cross–sections at the given impact parameter $$\eta(s,b)=\sum_n\sigma_n(s,b).$$ Unitarity equation has the two solutions for the case of pure imaginary amplitude: $$f(s,b)=\frac{i}{2}[1\pm \sqrt{1-4\eta(s,b)}].\label{usol}$$ Eikonal unitarization with pure imaginary eikonal corresponds to the choice of the particular solution with sign minus. In the $U$–matrix approach the form of the elastic scattering amplitude in the impact parameter representation is the following: $$f(s,b)=\frac{U(s,b)}{1-iU(s,b)}. \label{um}$$ $U(s,b)$ is the generalized reaction matrix, which is considered as an input dynamical quantity similar to eikonal function. Inelastic overlap function is connected with $U(s,b)$ by the relation $$\eta(s,b)=\frac{Im U(s,b)}{|1-iU(s,b)|^{2}}\label{uf}.$$ Construction of particular models in the framework of the $U$–matrix approach proceeds the standard steps, i.e. the basic dynamics as well as the notions on hadron structure are used to obtain a particular form for the $U$–matrix. However, the two unitarization schemes ($U$–matrix and eikonal) lead to different predictions for the inelastic cross–sections and for the ratio of elastic to total cross-section. This ratio in the $U$–matrix unitarization scheme reaches its maximal possible value at $s\rightarrow \infty$, i.e. $$\frac{\sigma_{el}(s)}{\sigma_{tot}(s)}\rightarrow 1,$$ which reflects in fact that the bound for the partial–wave amplitude in the $U$–matrix approach is $|f(s,b)|\leq 1$ while the bound for the case of imaginary eikonal is (black disk limit): $|f(s,b)|\leq 1/2$. When the amplitude exceeds the black disk limit (in central collisions at high energies) then the scattering at such impact parameters turns out to be of an antishadow nature. In this antishadow scattering mode the elastic amplitude increases with decrease of the inelastic channels contribution. The shadow scattering mode is considered usually as the only possible one. But the two solutions of the unitarity equation have an equal meaning and the antishadow scattering mode could also appear in the central collisions first as the energy becomes higher. The both scattering modes are realized in a natural way under the $U$–matrix unitarization despite the two modes are described by the two different solutions of unitarity. Appearance of the antishadow scattering mode is consistent with the basic idea that the particle production is the driving force for elastic scattering. Indeed, the imaginary part of the generalized reaction matrix is the sum of inelastic channel contributions: $$Im U(s,b)=\sum_n \bar{U}_n(s,b),\label{vvv}$$ where $n$ runs over all inelastic states and $$\bar{U}_n(s,b)=\int d\Gamma_n |U_n(s,b,\{\xi_n\}|^2$$ and $d\Gamma_n$ is the $n$–particle element of the phase space volume. The functions $U_n(s,b,\{\xi_n\})$ are determined by the dynamics of $2\rightarrow n$ processes. 
Thus, the quantity $ImU(s,b)$ itself is a shadow of the inelastic processes. However, unitarity leads to self–damping of the inelastic channels [@bbl] and increase of the function $ImU(s,b)$ results in decrease of the inelastic overlap function $\eta(s,b)$ in accord with Eq. (\[uf\]) when $ImU(s,b)$ exceeds unity. Let us consider the transition to the antishadow scattering mode [@phl]. With conventional parameterizations of the $U$–matrix the inelastic overlap function increases with energies at modest values of $s$. It reaches its maximum value $\eta(s,b=0)=1/4$ at some energy $s=s_0$ and beyond this energy the antishadow scattering mode appears at small values of $b$. The region of energies and impact parameters corresponding to the antishadow scattering mode is determined by the conditions $Im f(s,b)> 1/2$ and $\eta(s,b)< 1/4$. The quantitative analysis of the experimental data [@pras] gives the threshold value: $\sqrt{s_0}\simeq 2$ TeV. Thus, the function $\eta(s,b)$ becomes peripheral when energy is increasing. At such energies the inelastic overlap function reaches its maximum value at $b=R(s)$ where $R(s)$ is the interaction radius. So, beyond the transition threshold there are two regions in impact parameter space: the central region of antishadow scattering at $b< R(s)$ and the peripheral region of shadow scattering at $b> R(s)$. The region of the LHC energies is the one where antishadow scattering mode is to be presented. It will be demonstrated in the next section that this mode can be revealed directly measuring $\sigma_{el}(s)$ and $\sigma_{tot}(s)$ and not only through the analysis of impact parameter distributions. Estimates and transition to asymptotics ======================================= We use chiral quark model for the $U$–matrix [@chpr]. The function $U(s,b)$ is chosen in the model as a product of the averaged quark amplitudes $$U(s,b) = \prod^{N}_{Q=1} \langle f_Q(s,b)\rangle$$ in accordance with assumed quasi-independent nature of the valence quark scattering in some effective field. The essential point here is the rise with energy of the number of the scatterers like $\sqrt{s}$ (cf. [@chpr]). The $b$–dependence of the function $\langle f_Q \rangle$ is related to the quark formfactor $F_Q(q)$ and has a simple form $\langle f_Q(b)\rangle\propto\exp(-m_Qb/\xi )$, i.e. the valence quarks in the model have a complicated structure with quark matter distribution approximated by the function $\langle f_Q(b)\rangle$. The generalized reaction matrix (in a pure imaginary case) gets the following form $$U(s,b) = ig\left [1+\alpha \frac{\sqrt{s}}{m_Q}\right]^N \exp(-Mb/\xi ), \label{x}$$ where $M =\sum^N_{Q=1}m_Q$. Here $m_Q$ is the mass of constituent quark, which is taken to be $0.35$ $GeV$, $N$ is the total number of valence quarks in the colliding hadrons, i.e. $N=6$ for $pp$–scattering. The values for the other parameters were obtained in [@pras]: $g=0.24$, $\xi=2.5$, $\alpha=0.56\cdot 10^{-4}$. These parameters were adjusted to the experimental data on the total cross–sections in the range up to the Tevatron energy. With such small number of free parameters the model is in a rather good agreement with the data [@pras]. For the LHC energy $\sqrt{s}= 14$ $TeV$ the model gives $$\label{s} \sigma_{tot}\simeq 230\; \mbox{mb}$$ and $$\label{r} \sigma_{el}/\sigma_{tot}\simeq 0.67.$$ Thus, the antishadow scattering mode could be discovered at the LHC by measuring $\sigma_{el}/\sigma_{tot}$ ratio which is greater than the black disc value $1/2$. 
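As a quick numerical cross-check of the picture just described, the sketch below (ours, not the original computation) evaluates the chiral quark model $U$-matrix of Eq. (\[x\]) with the quoted parameter values, reconstructs $\mathrm{Im}\,f(s,b)$ and $\eta(s,b)$ from Eqs. (\[um\]) and (\[uf\]) for a pure imaginary $U$, and locates the antishadow boundary $b=R(s)$ where $\mathrm{Im}\,f=1/2$ and $\eta=1/4$; the impact parameter is in GeV$^{-1}$ (1 GeV$^{-1}\approx 0.197$ fm).

```python
# Sketch (ours): chiral quark model U-matrix of Eq. (x) with the parameter
# values quoted in the text, used to evaluate Im f(s,b) and eta(s,b) from
# Eqs. (um) and (uf) for a pure imaginary U = i*u, and to locate the antishadow
# boundary b = R(s) where u = 1 (Im f = 1/2, eta = 1/4).  b is in GeV^-1.
import math

g, xi, alpha, m_q, N = 0.24, 2.5, 0.56e-4, 0.35, 6   # parameters quoted in the text
M = N * m_q                                          # sum of constituent quark masses (GeV)

def u(sqrt_s, b):
    """Magnitude of the pure imaginary U-matrix of Eq. (x): U = i*u(s,b)."""
    return g * (1.0 + alpha * sqrt_s / m_q) ** N * math.exp(-M * b / xi)

def im_f(sqrt_s, b):
    """Im f = u/(1+u), from f = U/(1-iU), Eq. (um)."""
    x = u(sqrt_s, b)
    return x / (1.0 + x)

def eta(sqrt_s, b):
    """Inelastic overlap function, Eq. (uf): eta = u/(1+u)^2."""
    x = u(sqrt_s, b)
    return x / (1.0 + x) ** 2

sqrt_s = 14000.0                                     # LHC energy, GeV
print(im_f(sqrt_s, 0.0), eta(sqrt_s, 0.0))           # central: Im f ~ 1, eta well below 1/4
R = (xi / M) * math.log(g * (1.0 + alpha * sqrt_s / m_q) ** N)   # u(s,R) = 1
print(R, 0.1973 * R, "fm")                           # antishadow radius in GeV^-1 and fm
```

In this sketch the central amplitude at $\sqrt{s}=14$ TeV is close to the maximal value $\mathrm{Im}\,f=1$, with the antishadow/shadow boundary at $b$ slightly above 1 fm, consistent with the peripheral profile of $\eta(s,b)$ described above.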
However, the LHC energy is not in the asymptotic region yet; the total, elastic and inelastic cross-sections behave like $$\label{tot} \sigma_{tot,el}\propto \ln^2\left[g\left(1+\alpha \frac{\sqrt{s}}{m_Q}\right)^N\right],\;$$ $$\label{ine} \sigma_{inel}\propto \ln\left[g\left(1+\alpha \frac{\sqrt{s}}{m_Q}\right)^N\right].$$ The true asymptotic regime $$\label{tota} \sigma_{tot,el}\propto \ln^2 s,\;\; \sigma_{inel}\propto \ln s$$ is expected at $\sqrt{s}> 100$ $TeV$. Another prediction of the chiral quark model is a decreasing energy dependence of the cross-section of inelastic diffraction at $s>s_0$. The decrease of the diffractive production cross-section at high energies ($s>s_0$) is due to the fact that $\eta (s,b)$ becomes peripheral at $s > s_0$ and the whole picture corresponds to antishadow scattering at $b < R(s)$ and to shadow scattering at $b>R(s)$, where $R(s)$ is the interaction radius: $$\frac{d\sigma_{diff}}{dM^2_X}\simeq \frac{8\pi g^*\xi ^2}{M_X^2} \eta(s,0).$$ The parameter $g^*<1$ is the probability of the excitation of a constituent quark during the interaction. The diffractive production cross-section has the familiar $1/M_X^2$ dependence, which is related in this model to the geometrical size of the excited constituent quark. At the LHC energy $\sqrt{s}=14$ $TeV$ the single diffractive inelastic cross-section is limited by the value $$\label{ind} \sigma _{diff}(s)\leq 2.4\;\mbox{mb}.$$ The values predicted above for the global characteristics of $pp$ interactions at the LHC differ from the most common predictions of other models. First of all, the total cross-section is predicted to be about twice as large as the standard predictions, which lie in the range 95–120 mb [@vels], and it also overshoots the existing cosmic ray data. However, extracting proton–proton cross sections from cosmic ray experiments is model dependent and far from straightforward (see, e.g., [@bl] and references therein). Those experiments measure the attenuation lengths of showers initiated by cosmic particles in the atmosphere and are sensitive to the model-dependent parameter called inelasticity. So a disagreement of a particular model with the cosmic ray measurements means that the data should be recalculated in the framework of this model and, in addition, assumptions on the energy dependence of the inelasticity must also be involved.

Discussions and conclusion
==========================

The main goal of this note is to point out that the antishadow scattering mode at the LHC can be detected by measuring the elastic to total cross-section ratio, which is predicted to be greater than the black disc limit $1/2$. The model considered here also estimates total cross-section values significantly higher than those provided by conventional parameterizations. Studies of soft interactions at LHC energies can lead to discoveries of fundamental importance. The genesis of hadron scattering with rising energy can be described as a transition from a grey to a black disc and eventually to a black ring with the antishadow scattering mode in the center. It is worth noting that the appearance of the antishadow scattering mode at the LHC implies a somewhat unusual scattering picture. At high energies the proton should be represented as a very loosely bound composite system, and it appears that this system has a high probability to reinstate itself only in the central collisions, where all of its parts participate in the coherent interactions.
Therefore the central collisions are mostly responsible for elastic processes, while the peripheral ones, where only a few parts of the weakly bound protons are involved, result mainly in the production of secondary particles. This leads to the peripheral impact parameter profile of the inelastic overlap function. The above picture would imply interesting consequences for the multiplicities in hadronic collisions, i.e. up to the threshold energy $s_0$ the picture will correspond to the fragmentation concept [@ben], which supposes a larger multiplicity for higher values of the momentum transfer. The increase of the mean multiplicity in hadron interactions with $t$ [@tro] is in agreement with the hadronic experimental data. However, when the energy becomes greater than $s_0$ and the antishadow mode develops, the momentum transfer dependence of the multiplicity would change. Loosely speaking, the picture described above corresponds to the scattering of extended objects at lower energies and a transition to the scattering of weakly bound systems at higher energies. This picture has an illustrative value and is in general compliance with the asymptotic freedom of QCD and the parton picture. Finally, we would like to note that the numerical predictions depend on the particular choice of the model for the $U$-matrix, but the appearance of the antishadow scattering mode is an inherent feature of the considered approach.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors are grateful to W. Kienzle, A. Krisch, W. Lorenzon and V. Roinishvili for interesting discussions. This work was supported in part by the RFBR Grant No. 99-02-17995.

A. Martin, CERN-TH.7284/94 Preprint, 1994. G. Matthiae, in “Future Physics and Accelerators”, edited by M. Chaichian, K. Huitu, R. Orava, World Scientific, 1995, 245. S. M. Troshin and N. E. Tyurin, Phys. Part. Nucl. 30 (1999) 550. A. A. Logunov, V. I. Savrin, N. E. Tyurin and O. A. Khrustalev, Teor. Mat. Fiz. **6** (1971) 157; S. M. Troshin and N. E. Tyurin, Nuovo Cim. **106A** (1993) 327; Proc. of the Vth Blois Workshop on Elastic and Diffractive Scattering, Providence, Rhode Island, June 1993, p. 387; Phys. Rev. **D49** (1994) 4427; Z. Phys. C **64** (1994) 311. M. Baker and R. Blankenbecler, Phys. Rev. **128** (1962) 415. S. M. Troshin and N. E. Tyurin, Phys. Lett. **B 316** (1993) 175. P. M. Nadolsky, S. M. Troshin and N. E. Tyurin, Z. Phys. C **69** (1995) 131. M. M. Block, F. Halzen and T. Stanev, hep-ph/9908222 Preprint, 1999. J. Velasco, J. Perez-Peraza, A. Gallegos-Cruz, M. Alvarez-Madrigal, A. Faus-Golfe, A. Sanchez-Hertz, hep-ph/9910484 Preprint, 1999. J. Benecke, T. T. Chou, C. N. Yang and E. Yen, Phys. Rev. **188** (1969) 2159; T. T. Chou and C. N. Yang, Phys. Rev. D **50** (1994) 590. S. M. Troshin, Sov. J. Nucl. Phys. **25** (1977) 472.
--- abstract: 'Disaster analysis in social media content is an interesting research domain with an abundance of data. However, there is a lack of labeled data that can be used to train machine learning models for disaster analysis applications. Active learning is one of the possible solutions to this problem. To this aim, in this paper we propose and assess the efficacy of an active learning based framework for disaster analysis using images shared on social media outlets. Specifically, we analyze the performance of different active learning techniques employing several sampling and disagreement strategies. Moreover, we collect a large-scale dataset covering images from eight common types of natural disasters. The experimental results show that the use of active learning techniques for disaster analysis using images results in a performance comparable to that obtained using human annotated images, and could be used in frameworks for disaster analysis in images without the tedious job of manual annotation.' author: - - - - - bibliography: - 'sigproc.bib' date: 'Received: date / Accepted: date' title: Active Learning for Event Detection in Support of Disaster Analysis Applications ---

Introduction {#sec:introduction}
============

Natural disasters, such as floods and earthquakes, may cause significant loss in terms of human lives and property. In such situations, instant access to relevant information may help with timely recovery efforts. In recent years, social media outlets have been widely utilized to gather disaster related information [@ahmad2018social]. However, the use of social media content also comes with many challenges. One such challenge is filtering out irrelevant information. To this aim, several frameworks have been proposed in the recent literature that rely on different classification and feature extraction techniques. One of the requirements of classification applications is the availability of sufficient training samples. However, annotation of training samples is a tedious and time consuming job, which requires a lot of effort. One of the possible solutions to reduce human labor in data annotation is the use of active learning techniques. Active learning has been utilized in a wide range of application domains with large quantities of unlabeled data and small quantities of labeled data. Such domains include Natural Language Processing (NLP), multimedia analysis and remote sensing [@liu2019generative; @sener2018active; @tuia2011survey; @ahmad2018event; @zhang2019active]. Active learning techniques have recently been used with Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) based frameworks to improve their overall performance [@sener2018active; @karlos2019investigating]. Disaster analysis is a relatively new application that still lacks large collections of labeled data [@said2019natural]. We believe it could benefit from active learning. In this paper, we study and analyze the efficacy of active learning techniques for disaster analysis in social media images by evaluating the performance of different active learning techniques in terms of classification accuracy. We mainly focus on the most commonly used scenario of active learning, namely pool-based sampling, which fits well with our disaster analysis task. In pool-based sampling, samples are drawn from a pool of unlabeled images into the initial small labeled training set.
Under the above mentioned settings, we rely on two most commonly used query techniques; namely, (i) uncertainty sampling and (ii) query by committee. We further evaluate the performance of these techniques with different sampling and disagreement strategies. For uncertainty sampling, we employ three different sampling strategies; namely, least confidence (LC), margin sampling (MS) and entropy sampling (ES). On the other hand, for query by committee based active learning approach, we explore and evaluate the capabilities of this approach with three different disagreement strategies; namely, vote entropy (VE), consensus entropy (CE) and max disagreement (MD). Moreover, we analyze and evaluate the performance of these methods using different number of queries by including a single image in the training set from the unlabeled pool of images to analyze how quickly each of the methods attains maximum accuracy. To the best of our knowledge no prior works explored such detailed analysis of active learning techniques in the relative new domain of disaster analysis applications. Moreover, considering the lack of large-scale (in terms of images as well as the number of disaster types/classes covered) benchmark datasets in the domain, we also provide a benchmark dataset containing a large number of images from most common types of natural disasters, as detailed in Section \[sec:dataset\]. The main contributions of this work are: - Stemming from the fact that machine learning techniques are driven by training data and annotating large volumes of data is a tedious and time consuming job, we carry out an analysis and evaluation study of active learning techniques with diversified set of sampling/disagreement strategies in support of disaster analysis applications. - Through the introduction of the active learning techniques, we demonstrate that comparable accuracy can be achieved with active learning without involving human annotators in the tedious job of annotating large training sets, and active learning could be used in disaster analysis frameworks to obtain better results in scenarios where less annotated data is available. - We also analyze and evaluate the performance of the methods using different numbers of queries/iterations, which helps to provide a baseline for future work in the domain. - We also provide a benchmark dataset for disaster analysis applications covering images from eight different types of natural disasters. The rest of the paper is organized as follows: Section \[sec:related\] discusses the related work. Section \[sec:background\] provides the background and reviews concepts of the active learning techniques. In Section \[sec:techniques\] and \[sec:dataset\] provide details of the proposed methodology and dataset, respectively. The details of the experimental setup, experiments and results are provided in Section \[sec:results\]. Finally, Section \[sec:conclusion\] concludes this study. Related Work {#sec:related} ============ In recent years, disaster analysis of images shared on social media outlets received great attention from the research community. Several interesting solutions relying on diversified sets of strategies have been proposed to effectively utilize the available information. A majority of the efforts in this regard rely on multi-modal information including visual features and meta-data comprised of textual, temporal and geo-location information [@said2019natural]. For instance, Benjamin et al. 
[@bischke2017detection] utilized the additional information available in the form of meta-data along with visual features extracted through an existing deep model; namely, AlexNet, pre-trained on ImageNet [@deng2009imagenet]. Both types of information are then evaluated individually and in combination with flood-related images obtained from social media. Similarly, the work in [@ahmad2018social] also demonstrates better results for visual features over textual and other information from meta-data in disaster analysis. The majority of the visual features based frameworks for disaster analysis rely on existing pre-trained models either as feature descriptors or the models are fine-tuned on disaster related images. To this aim, the existing models pre-trained on both ImageNet [@deng2009imagenet] and Places [@zhou2014learning] datasets have been employed. For instance, in [@alam2018processing], an existing model; namely, VGGNet-16 [@simonyan2014very] pre-trained on ImageNet is fine-tuned on disaster related images for categorization of the images into different categories, such as informative and non-informative, damage severity and humanitarian categories. Ahmad et al. [@ahmad2018comparative] utilized existing models pre-trained on both ImageNet and Places dataset as feature descriptors both individually and in different combinations. The authors also evaluate the performance of several handcrafted visual features extracted. More recently, disaster analysis of images shared on social media has also been introduced as a sub-task in a benchmark competition; namely, MediaEval[^1] for two consecutive years. In MediaEval-2017 [@bischke2017detection], the task focused on the classification of social media imagery into flood-related and non-flooded images. On the other hand, the task in MediaEval-2018 [@bischke2018multimediasatellite] focused on the identification of passable and non-passable roads in social media images. Majority of the solutions proposed for the classification of images into flooded and non-flooded categories in MediaEval-2017 relied on deep models (e.g., [@ahmad2018social; @bischke2017detection; @nogueira2017data; @avgerinakis2017visual]). For instance, in [@ahmad2018social] an ensemble framework relying on several deep models used as feature descriptors has been proposed. Similar trend has been observed in MediaEval-2018 for the identification and classification of passable roads through information available on social media, where majority of the methods relied on ensembles of deep models (e.g., [@ahmad2019automatic; @feng2018extraction; @Zhao2018multimediasatellite; @Anastasia2018multimediasatellite; @Armin2018multimediasatellite; @bischke2018multimediasatellite]). For instance, in [@ahmad2019automatic] multiple deep models were jointly utilized in an early, late and double fusion manner. In the literature, disaster analysis in images has been mostly treated as a supervised learning task where classification models are trained on training samples annotated with human annotators. Two benchmark datasets, namely DIRSM [@bischke2017multimedia] and FCSM [@bischke2018multimediasatellite], have been mostly reported in the literature [@said2019natural]. The datasets provide a limited set of images, which are not sufficient to train deep models. Moreover, both datasets cover flood related images, only. We believe active learning techniques could be useful to cover the limitation of lack of sufficient annotated training data in the domain. 
Active learning: definitions and concepts {#sec:background}
=========================================

Active learning is a semi-supervised learning technique which selects the training data it wants to learn from [@zhang2019active]. Selecting good training samples from the data enables active learning techniques to perform significantly better with fewer training samples compared to passive learning methods [@kading2018active]. In passive learning methods, a large chunk of the data is randomly collected from an underlying distribution for training purposes. The main advantage of active learning over passive learning is the ability to choose instances from the unlabelled pool of images on the basis of the responses to previous queries. In this work, we mainly rely on pool-based sampling methods where samples are drawn from a large pool of unlabelled samples; namely, $u=\{x_i\}_{i=1}^{n}$. An initial training set, also known as the seed and denoted as $\upsilon=\{x'_i\}_{i=1}^{n'}$, is used to train the initial model, $\theta$, and is then populated iteratively by picking instances from the unlabelled pool of samples and annotating them with labels $y_i \in \{y_1,y_2, \ldots, y_m\}$. In the next subsections, we provide a detailed description of the two query techniques (i.e., active learning schemes) used in this work along with the different sampling and disagreement strategies they use.

Uncertainty Sampling
--------------------

Uncertainty Sampling is one of the most common and widely used active learning techniques. With uncertainty sampling, the active learner queries the most uncertain instances (i.e., the samples the learner is least certain how to label). The technique is called uncertainty sampling because it uses posterior probabilities in making decisions, and it is often straightforward to apply with probabilistic learning models. For example, in the case of binary classification, uncertainty sampling simply queries the instance whose posterior probability of being positive is closest to 0.5. For the selection of the samples, we employed several variants of this technique based on the informativeness measure of the unlabelled instances with three different sampling strategies; namely, (i) least confidence, (ii) margin sampling and (iii) entropy sampling. Next, we provide a detailed description of these sampling strategies.

### *Least Confidence Query Strategy*

This sampling strategy chooses the instance from the pool for which the learner has the least confidence in its most likely label, as shown by equation \[equ:LC\], where $x$, $y^{'}$ and $\theta$ represent the sample, the most probable label and the underlying model, respectively. The strategy is particularly suitable for multi-class classification. For example, if we have two unlabeled instances; namely, D1 and D2, having probabilities (p1, p2 and p3) with values (0.9, 0.09, 0.01) and (0.2, 0.5, 0.3) for class labels A, B and C, respectively, the Least Confidence (LC) query strategy selects D2 to be labeled, as the learner is less sure about its most likely label. This example is illustrated in Figure \[fig:theme\_2\]. One way to interpret this query strategy is that the model selects the instance it is most likely to misclassify. $$\small \centering LC(x)= \operatorname{argmax}_{x} \left[ 1 - p_{\theta} (y^{'} | x) \right] \label{equ:LC}$$ ![An illustration of the working mechanism of the different sampling strategies used for uncertainty sampling.
The sampling strategies; namely, LC, MS and ES, are represented in red, green and yellow colors, respectively. LC and MS consider the top one and top two most probable labels, respectively, while ES decides on the basis of the complete probability distribution considering all classes.[]{data-label="fig:theme_2"}](theme_2.pdf){width="0.85\linewidth"}

### *Margin Sampling*

One shortcoming of the LC query strategy is that it decides on the basis of the most probable label only; it does not consider the rest of the labels, which might be useful in the selection process. In order to cope with this limitation, Margin Sampling (MS) incorporates the posterior probability of the second most likely label by selecting the instance with the smallest difference between the probabilities of its top two most probable labels. Let’s suppose $y_1^{'}$ and $y_2^{'}$ are the top two most probable labels for a sample $x$ under a model $\theta$. Then the margin between the two labels can be represented by equation \[equ:MS\]. Considering the previous example presented in Figure \[fig:theme\_2\], margin sampling selects D2, as the difference between its two most probable labels (i.e., $0.5 - 0.3 = 0.2$) is less than the difference between the two most probable labels of D1 (i.e., $0.9 - 0.09 = 0.81$). The small margin for D2 indicates that the instance is ambiguous, and thus obtaining its true label would help the classification process. $$\centering MS(x)= p_{\theta} (y_1^{'} | x) - p_{\theta} (y_2^{'} | x) \label{equ:MS}$$

### *Entropy Sampling*

MS considers the top two most probable labels in the decision-making process; however, for a dataset with a larger number of class labels, the top two most probable labels are not sufficient to represent the probability distribution. To this aim, the Entropy Sampling (ES) strategy utilizes the full probability distribution by calculating the entropy of each instance using equation \[equ:etropy\], where $P_{\theta}(y|x)$ represents the posterior probability and $Y$ is the set of output classes. Subsequently, the instance with the highest value is queried. In the case of our example shown in Figure \[fig:theme\_2\], D1 yields an entropy of about 0.52 while D2 yields about 1.49, so entropy sampling selects the instance D2 for labelling. In the case of binary classification, entropy sampling produces the same ranking as margin and least-confidence sampling; it is most useful for probabilistic multi-class classification problems. $$\small \centering ES(x)= -\sum_{y\in Y}P_{\theta}(y|x)\log_{2} P_{\theta}(y|x) \label{equ:etropy}$$

Query By Committee
------------------

The other active learning technique employed in this work is based on the query by committee strategy. In this method, a committee of competing hypotheses (i.e., classifiers $C=\{\theta_{1}, \theta_{2}, \theta_{3}, \ldots, \theta_{n}\}$, all trained on the current labelled data set $\lambda$) is maintained. The queries are then selected by measuring the disagreement between these hypotheses. The aim of the query by committee strategy is to reduce the version space, which is the set of hypotheses consistent with the current labelled set. For example, if machine learning is viewed as a search for the best model within the version space, then the aim of the query by committee method is to constrain the size of this space as much as possible, leading to a more precise search with as few labelled instances as possible [@settles2009active].
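As a concrete check of the three uncertainty-sampling scores described above, the sketch below (NumPy only; the probability vectors are simply the illustrative D1/D2 values of Figure \[fig:theme\_2\], not real model outputs) reproduces the selections discussed in the text.

```python
import numpy as np

def least_confidence(p):
    # LC: one minus the probability of the most likely label (higher = more uncertain)
    return 1.0 - p.max()

def margin(p):
    # MS: gap between the two largest class probabilities (smaller = more uncertain)
    top2 = np.sort(p)[-2:]
    return top2[1] - top2[0]

def entropy(p):
    # ES: Shannon entropy (base 2) of the full class distribution (higher = more uncertain)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

D1 = np.array([0.90, 0.09, 0.01])   # confident prediction
D2 = np.array([0.20, 0.50, 0.30])   # ambiguous prediction
for name, p in (("D1", D1), ("D2", D2)):
    print(name, round(least_confidence(p), 3), round(margin(p), 3), round(entropy(p), 3))
# All three criteria mark D2 as the more informative instance to query.
```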
Given several competing hypotheses, the instance to be labeled next is chosen by measuring the disagreement among them. Different strategies can be utilized to measure this disagreement; in this study we use the three strategies detailed below.

### Vote Entropy

Vote entropy can be considered the query by committee generalization of entropy-based uncertainty sampling, and is calculated by equation \[equ:VE\], where $y_{i}$ ranges over the possible labels, $C$ is the number of classifiers in the committee and $V(y_{i})$ is the total number of votes received by label $y_{i}$. Suppose there are three classifiers (i.e., the committee size is 3), three classes \[0,1,2\] and five unlabeled instances. Then, in order to calculate the vote entropy, every classifier is first asked for its prediction for all the unlabelled instances. Suppose the predictions returned for a single instance by the three classifiers are \[0, 1, 0\] (i.e., classifier 1 predicts that the instance lies in class-0, classifier 2 predicts it as a sample from class-1 and classifier 3 also predicts it as class-0). Each instance then has a corresponding vote distribution (i.e., the distribution of class labels when picking a classifier at random). In the stated example, there are two votes for class 0, one vote for class 1 and no votes for class 2, so the vote distribution for this instance is \[0.6667, 0.3333, 0\]. Among all the five instances, vote entropy selects the instance which has the largest entropy of this vote distribution. $$\small \centering VE(x)= \operatorname{argmax}_{x} \left[ - \sum_{i} \dfrac{V(y_{i})}{C} \log \dfrac{V(y_{i})}{C} \right] \label{equ:VE}$$

### Consensus Entropy

In consensus entropy, instead of calculating the probability distribution of the votes, the average of the class probabilities provided by each classifier in the committee is calculated. This average class probability is called the consensus probability. Once the consensus probability is calculated using equation \[equ:CE\] (where $C$ is the number of classifiers in the committee and $\theta_{c}$ denotes the $c$-th member), its entropy is computed and the instance with the largest entropy is selected to be labelled. $$\small \centering CE(x)= \dfrac{1}{C}\sum_{c = 1}^{C}P_{\theta_{c}}(y_{i}|x) \label{equ:CE}$$

### Max disagreement

The max disagreement sampling technique calculates the disagreement of each learner with respect to the consensus probability and then selects the instance with the largest disagreement. In this way, it addresses a weakness of the other two strategies, which account for the actual disagreement only indirectly.

Methodology {#sec:techniques}
===========

Figure \[fig:methodology\] provides the block diagram of the proposed methodology. The framework is composed of three main components; namely, (i) feature extraction, (ii) collection/annotation of the training samples through active learning and (iii) classification/evaluation. For feature extraction, we rely on an existing pre-trained model. For the collection/annotation of the training samples, several active learning techniques are utilized. The classification phase is based on Support Vector Machines (SVMs). The feature extraction and classification phases are rather standard, and the main strength of the proposed framework stems from the active learning part, where we collect/annotate relevant training samples from an unlabeled pool of images retrieved from social media outlets. In the next subsections, we provide a detailed description of these phases; a minimal sketch of the loop they form is given below.
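The sketch is illustrative only: it assumes scikit-learn, precomputed ResNet-50 feature vectors and a least-confidence query strategy, and the function and variable names are ours rather than those of the actual implementation.

```python
import numpy as np
from sklearn.svm import SVC

def active_learning_loop(X_seed, y_seed, X_pool, oracle_label, n_queries=2000):
    """Pool-based active learning with a least-confidence query strategy.

    X_seed, y_seed : small human-annotated seed (e.g. 160 ResNet-50 feature vectors)
    X_pool         : unlabeled pool of feature vectors
    oracle_label   : callable mapping a pool index to its label (the human annotator)
    """
    X_train, y_train = X_seed.copy(), np.asarray(y_seed).copy()
    pool_idx = np.arange(len(X_pool))
    clf = SVC(probability=True).fit(X_train, y_train)
    for _ in range(n_queries):
        proba = clf.predict_proba(X_pool[pool_idx])          # class probabilities on the pool
        pick = pool_idx[np.argmax(1.0 - proba.max(axis=1))]  # most uncertain instance
        X_train = np.vstack([X_train, X_pool[pick][None]])
        y_train = np.append(y_train, oracle_label(pick))
        pool_idx = pool_idx[pool_idx != pick]
        clf = SVC(probability=True).fit(X_train, y_train)    # retrain after every query
    return clf
```

A query-by-committee variant follows the same loop, replacing the single classifier with a list of classifiers and the least-confidence score with one of the disagreement measures defined above.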
![image](methodology.pdf){width="0.78\linewidth"}

Feature Extraction and classification
-------------------------------------

For feature extraction, we rely on an existing deep model, ResNet-50 [@he2016deep], pre-trained on ImageNet [@deng2009imagenet]. The model is used as a feature descriptor without any retraining or fine-tuning. The basic motivation for using existing pre-trained models as feature descriptors comes from our previous work [@ahmad2018comparative; @ahmad2019deep1], where such models have shown outstanding generalization capabilities on disaster images. Features are extracted from the top fully connected layer, resulting in a 1000-dimensional feature vector, and the classification phase is based on SVMs.

Active Learning
---------------

In this phase, as a first step, we divide the images collected from social media into two sub-sets; namely, (i) an initial training set, also known as the *seed*, which is annotated by human annotators, and (ii) an unlabeled pool of images. In the second step, an SVM classifier is trained on the initial small labeled training set and the initial accuracy is recorded. In the third step, the training set is populated iteratively by querying images from the unlabeled pool. To this aim, we employed two methods; namely, (i) Uncertainty Sampling and (ii) Query By Committee. For each method, three different sampling/disagreement strategies are utilized, as described in Section \[sec:background\]. Steps 2 and 3 are repeated for a given number of iterations, as detailed in the experimental setup in Section \[sec:results\].

Dataset collection {#sec:dataset}
==================

Our newly collected dataset covers images from the most common types of natural disasters, including cyclones, droughts, earthquakes, floods, landslides, thunderstorms, snowstorms and wildfires. The images are downloaded from social media platforms using the corresponding keywords. The collection of images is divided into two sub-sets; namely, an initial training set, also known as the seed, and an unlabeled pool of images. For our initial training set, which is the only part of the training data annotated by humans, a subset of 160 images drawn from the images collected for each class/type of disaster is randomly selected and annotated in a crowd-sourcing study. Similarly, the test set, which is composed of 2,516 images, has also been manually examined and annotated in the crowd-sourcing study. The rest of the collected images are treated as an unlabeled pool containing a large portion of irrelevant images. Moreover, for one of the baseline methods used in the comparison, as detailed in Section \[sec:results\], we also manually annotated the unlabeled pool of images, resulting in around 2,500 additional annotated images. Figure \[fig:sample\_images\] provides some sample images from the dataset.

![Sample images from the dataset.[]{data-label="fig:sample_images"}](sample_images.png){width="0.9\linewidth"}

Experimental Setup and Results {#sec:results}
==============================

Experimental Setup
------------------

The objectives of our experiments are manifold. First, we aim to analyze the performance of active learning in support of disaster analysis in images shared on social networks. We also aim to analyze the performance of different active learning techniques when using different sampling/disagreement strategies.
Moreover, we want to analyze the difference in performance between a model/classifier trained on a human-annotated dataset and one trained on samples collected through the active learning techniques. To achieve those objectives, we performed the following experiments:

- First, we analyze the performance of two commonly used techniques of pool-based active learning; namely, uncertainty sampling and query by committee.

- Then, we investigate the impact of using different sampling and disagreement strategies in conjunction with the active learning methods on their overall performance.

- Finally, we assess and evaluate the performance of the active learning techniques against two baseline methods, where one fully supervised classifier is trained on labeled data annotated by human annotators while the other is trained on the complete pool of images, including irrelevant ones.

We used the same experimental setup for all our experimental studies. Specifically, our initial training set (seed), annotated manually, is composed of 160 images covering 20 samples from each of the eight different types of natural disasters. Moreover, we used varying numbers of iterations (up to 2000) in our experiments. In each iteration, a single image from the pool of unlabelled images is added to the training set.

Experimental results
--------------------

Table \[tab:uncertainity\_results\] provides the evaluation results of the uncertainty sampling method with three different sampling strategies; namely, LC, MS and ES, using a variable number of iterations ranging from 1 to 2000 (step size of 250). As expected, the accuracy improves as relevant samples from the unlabeled pool of images are added to the initial training set in each iteration, until the accuracy stabilizes for all three strategies. One important observation here is the variation in performance across the three sampling strategies, since LC considers only the most probable label, MS considers the top two, while ES makes use of all the labels in its decision when choosing a sample from the pool. No significant difference was observed when the number of iterations is around 2000. However, higher variations were observed in the accuracy of the different sampling strategies when the number of iterations is below 1000. At the beginning, surprisingly, the MS and LC strategies performed better than ES, which indicates the value of relying only on the most probable labels in the decision-making process. However, relying on the most probable label increases the dependence on the accuracy of the initial model/classifier trained on the small seed set. In Table \[tab:query\_results\], we provide the experimental results of the query by committee based active learning method with the different disagreement strategies for different numbers of iterations. Overall, better accuracy is obtained compared to the uncertainty sampling methods, which is mainly due to employing several hypotheses/models in the sample selection process. As far as the comparison of the disagreement strategies is concerned, slightly better results are observed for the CE and MD strategies compared to VE. In order to better analyze the variations in accuracy at different iterations, Figure \[plot2\] shows the performance of both methods with the different sampling and disagreement strategies at each iteration.
As can be seen, both methods start at lower accuracy with all sampling and disagreement strategies and improve iteratively. Compared to uncertainty sampling, the curves are smoother for the query by committee method. Moreover, its accuracy improves more rapidly and achieves stability sooner (i.e., after about 1000 iterations the accuracy is stabilized).

![Comparison of both methods with different sampling strategies[]{data-label="plot2"}](plot_all.JPG){width="0.95\linewidth"}

The main focus of the paper is to analyze and evaluate the importance/application of active learning techniques in disaster analysis and to show how the active learning component can further improve the performance of disaster analysis frameworks with less annotated data. Thus, in order to show the effectiveness of the active learning methods, instead of comparing against state-of-the-art methods, we compare the results against two extreme cases, reported as baseline 1 and baseline 2, as shown in Figure \[fig:comparison\]. In the first baseline method, an SVM is trained on a human-annotated training set, where relevant samples were collected from the pool of images and annotated by human observers. In this experiment, features are extracted with the same deep model (i.e., ResNet-50) and the same parameters are used for the SVM classifier. Moreover, a significant number of training samples (i.e., around 2,500) have been used for training the classifier. In the second case, we trained an SVM classifier on the complete pool of images without removing the irrelevant ones, with the aim of analyzing how much the irrelevant images affect the performance of the classifier. As can be seen, in most cases the active learning methods achieve results comparable to those obtained from baseline 1, the fully supervised method trained on a human-annotated training set. These results illustrate the effectiveness of the active learning techniques, where a small annotated dataset is utilized to obtain good results without involving human annotators in the tedious job of annotating large training sets. In the second case, the accuracy is reduced significantly, showing the efficacy of the active learning techniques in picking the right training samples from the pool of images.

![Comparison of the active learning methods against the baselines.[]{data-label="fig:comparison"}](comparisons.png){width="0.95\linewidth"}

### Lessons learned

The lessons learned from the experiments are:

- The accuracy improves as relevant samples from the unlabeled pool of images are added to the initial training set in each iteration, until the accuracy stabilizes at a certain point.

- The comparable or better accuracy with respect to the baseline methods illustrates the effectiveness of the active learning techniques, where a small annotated dataset is utilized to obtain good results without involving human annotators in the tedious job of annotating large training sets.

Conclusion {#sec:conclusion}
==========

In this paper we presented an active learning approach for disaster analysis in images shared on social media outlets. We used two query techniques with several sampling and disagreement strategies for each. Our experimental results illustrate the effectiveness of active learning techniques and their ability to produce results comparable to those obtained using human-annotated training sets. Our experimental results also illustrate that the classification accuracy improves with the inclusion of images from the unlabelled pool in each iteration of active learning.
Furthermore, the proposed iterative technique ultimately achieves stable classification accuracy through the progressive inclusion of images from the unlabelled pool. Finally, it has been demonstrated that the query by committee active learning method is more effective for disaster analysis in images than the uncertainty sampling based active learning methods.

[^1]: http://www.multimediaeval.org
--- abstract: 'We introduce the spin and momentum dependent [*force operator*]{} which is defined by the Hamiltonian of a [*clean*]{} semiconductor quantum wire with homogeneous Rashba spin-orbit (SO) coupling attached to two ideal (i.e., free of spin and charge interactions) leads. Its expectation value in the spin-polarized electronic wave packet injected through the leads explains why the center of the packet gets deflected in the transverse direction. Moreover, the corresponding [*spin density*]{} will be dragged along the transverse direction to generate an out-of-plane spin accumulation of opposite signs on the lateral edges of the wire, as expected in the phenomenology of the spin Hall effect, when spin-$\uparrow$ and spin-$\downarrow$ polarized packets (mimicking the injection of conventional unpolarized charge current) propagate simultaneously through the wire. We also demonstrate that spin coherence of the injected spin-polarized wave packet will gradually diminish (thereby diminishing the “force”) along the SO coupled wire due to the entanglement of spin and orbital degrees of freedom of a single electron, even in the absence of any impurity scattering.' author: - 'Branislav K. Nikolić' - 'Liviu P. Zârbo' - Sven Welack title: 'Transverse Spin-Orbit Force in the Spin Hall Effect in Ballistic Semiconductor Wires' --- The classical Hall effect [@classical_hall] is one of the most widely known phenomena of condensed matter physics because it represents a manifestation of the fundamental concepts of classical electrodynamics—such as the Lorentz force—in a complicated solid state environment. A perpendicular magnetic field ${\bf B}$ exerts the Lorentz force ${\bf F} = q {\bf v} \times {\bf B}$ on current ${\bf I}$ flowing longitudinally through a metallic or semiconductor wire, thereby separating charges in the transverse direction. The charges then accumulate on the lateral edges of the wire to produce a transverse “Hall voltage” in the direction $q {\bf I} \times {\bf B}$. Thus, Hall-effect measurements reveal the nature of the current carriers. Recent optical detection [@kato; @wunderlich] of the accumulation of spin-$\uparrow$ and spin-$\downarrow$ electrons on the opposite lateral edges of current carrying semiconductor wires opens a new realm of the [*spin Hall effect*]{}. This phenomenon occurs in the absence of any external magnetic fields. Instead, it requires the presence of SO couplings, which are tiny relativistic corrections that can, nevertheless, be much stronger in semiconductors than in vacuum. [@rashba_review] Besides deepening our fundamental understanding of the role of SO couplings in solids, [@rashba_review; @spintronics] the spin Hall effect offers new opportunities in the design of all-electrical semiconductor spintronic devices that do not require ferromagnetic elements or cumbersome-to-control external magnetic fields. [@spintronics] While experimental detection of the strong signatures of the spin Hall effect brings to an end decades of theoretical speculation about its existence, it is still unclear what spin-dependent forces are responsible for the observed spin separation in different semiconductor systems.
One potential mechanism—asymmetric scattering of spin-$\uparrow$ and spin-$\downarrow$ electrons off impurities with SO interaction—was invoked in the 1970s to predict the emergence of [*pure*]{} (i.e., not accompanied by charge transport) spin current, in the transverse direction to the flow of longitudinal unpolarized charge current, which would deposit spins of opposite signs on the two lateral edges of the sample. [@extrinsic] However, it has been argued [@bernevig] that in systems with weak SO coupling and, therefore, no spin-splitting of the energy bands such spin Hall effect of the [*extrinsic*]{} type (which vanishes in the absence of impurities) is too small to be observed in present experiments [@kato] (unless it is enhanced by particular mechanisms involving intrinsic SO coupling of the bulk crystal [@engel2005a]). Much of the recent revival of interest in the spin Hall effect has been ignited by the predictions [@murakami; @sinova] for substantially larger transverse pure spin Hall current as a response to the longitudinal electric field in semiconductors with strong SO coupling which spin-splits energy bands and induces Berry phase correction to the group velocity of Bloch wave packets. [@wave_packet] However, unusual properties of such [*intrinsic*]{} spin Hall current in infinite homogeneous systems, which depends on the whole Fermi sea (i.e., it is determined solely by the equilibrium Fermi-Dirac distribution function and spin-split Bloch band structure) and it is not conserved in the bulk due to the presence of SO coupling, [@murakami; @sinova] have led to arguments that its non-zero value does not correspond to any real transport of spins [@rashba_eq; @zhang] so that no spin accumulation near the boundaries and interfaces could be induced by any intrinsic mechanism (i.e., in the absence of impurities [@zhang]). On the other hand, quantum transport analysis of spin-charge spatial propagation through [*clean*]{} semiconductor wires, which is formulated in terms of genuine nonequilibrium and Fermi surface quantities (i.e., conserved spin currents [@ring_hall; @meso_hall; @meso_hall_1] and spin densities [@accumulation]), predicts that spin Hall accumulation [@kato; @wunderlich] of opposite signs on its lateral edges will emerge due to strong SO coupling within the wire region. [@accumulation] Such [*mesoscopic*]{} spin Hall effect is determined by the processes on the mesoscale set by the spin precession length, [@meso_hall; @accumulation] and depends on the whole measuring geometry (i.e., boundaries, interfaces, and the attached electrodes) due to the effects of confinement on the dynamics of transported spin in the presence of SO couplings in finite-size semiconductor structures. [@chao; @purity] Thus, to resolve the discrepancy between different theoretical answers to such fundamental question as—[*Are SO interaction terms in the effective Hamiltonian of a clean spin-split semiconductor wire capable of generating the spin Hall like accumulation on its edges?*]{}—it is highly desirable to develop a picture of the transverse motion of spin density that would be as transparent as the familiar picture of the transverse drift of charges due to the Lorentz force in the classical Hall effect. 
Here we offer such a picture by analyzing the [*spin-dependent*]{} “force”, which can be associated with any SO coupled quantum Hamiltonian, and its effect on the semiclassical dynamics of spin density of individual electrons that are injected as spin-polarized wave packets into the Rashba SO coupled clean semiconductor quantum wire attached to two ideal (i.e., interaction and disorder free) leads. The effective mass Hamiltonian of the ballistic Rashba quantum wire is given by $$\label{eq:rashba} \hat{H} = \frac{\hat{\bf p}^2}{2m^*} + \frac{\alpha}{\hbar}\left(\hat{\bm \sigma}\times\hat{\bf p}\right) \cdot {\bf z} + V_{\rm conf}(y),$$ where $\hat{\bf p}$ is the momentum operator in 2D space, $\hat{\bm \sigma}=(\hat{\sigma}^x,\hat{\sigma}^y,\hat{\sigma}^z)$ is the vector of the Pauli spin operators, and $V_{\rm conf}(y)$ is the transverse potential confining electrons to a wire of finite width. We assume that the wire of dimensions $L_x \times L_y$ is realized using the two-dimensional electron gas (2DEG), with ${\bf z}$ being the unit vector orthogonal to its plane. Within the 2DEG, carriers are subjected to the Rashba SO coupling of strength $\alpha$, which arises due to the structure inversion asymmetry [@rashba_review] (of the confining potential and differing band discontinuities at the heterostructure quantum well interface [@pfeffer]). This Hamiltonian generates a spin-dependent force operator which can be extracted [@schliemann_force; @shen_force] within the Heisenberg picture [@ballentine] as $$\begin{aligned} \label{eq:force} \hat{\bf F}_H & = & m^* \frac{d {\bf r}^2_H}{dt^2} = \frac{m^*}{\hbar^2} [\hat{H},[\hat{\bf r}_H,\hat{H}]] \\ & = & \frac{2 \alpha^2 m^* }{\hbar^3} (\hat{\bf p}_H \times {\bf z}) \otimes \hat{\sigma}^z_H - \frac{d V_{\rm conf}(\hat{y}_H)}{d \hat{y}_H} {\bf y} \nonumber.\end{aligned}$$ Here the Heisenberg picture operators carry the time dependence of quantum evolution, i.e., $\hat{\bf p}_H (t) = e^{i\hat{H}t/\hbar} \hat{\bf p} e^{-i\hat{H}t/\hbar}$, $\hat{\sigma}^z_H(t) = e^{i\hat{H}t/\hbar} \hat{\sigma}^z e^{-i\hat{H}t/\hbar}$, and $\hat{y}_H (t) = e^{i\hat{H}t/\hbar} \hat{y} e^{-i\hat{H}t/\hbar}$, where $\hat{\sigma}^z$, $\hat{\bf p}$, and $\hat{y}$ are in the Schr" odinger picture and, therefore, time-independent. Since the force operator [@shen_force] depends on spin through $\hat{\sigma}^z_H$, which is a genuine (internal) quantum degree of freedom, [@ballentine] it does not have any classical analog. Its physical meaning (i.e., measurable predictions) is contained in the quantum-mechanical expectation values, such as $\langle \hat{F}_y \rangle (t) = \langle \Psi(t=0) | \hat{F}_H^y (t) | \Psi(t=0) \rangle$ obtained by acting with the transverse component $\hat{F}_H^y$ of the vector of the force operator $(\hat{F}_H^x,\hat{F}_H^y)$ on the quantum state $|\Psi(t=0) \rangle$ of an electron. While such “force” can always be associated with a given quantum Hamiltonian, its usefulness in understanding the evolution of quantum systems is limited—the local nature of the force equation cannot be reconciled with inherent non-locality of quantum mechanics. For example, if the force “pushes” the volume of a wave function locally, one has to find a new [*global*]{} wave function in accord with the boundary conditions at [infinity]{} (the same problem remains well-hidden in the Heisenberg picture where time dependence is carried by the operators while wave functions are time-independent). 
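Keeping these caveats in mind, it is still useful to record explicitly how the transverse component of Eq. (\[eq:force\]) arises; the short check below, included here for clarity, uses only $[\hat{y},\hat{p}_y]=i\hbar$ and $[\hat{\sigma}^x,\hat{\sigma}^y]=2i\hat{\sigma}^z$ applied to the Schrödinger-picture operators (the same algebra holds in the Heisenberg picture): $$\begin{aligned}
\hat{v}_y & = & \frac{1}{i\hbar}[\hat{y},\hat{H}] = \frac{\hat{p}_y}{m^*} + \frac{\alpha}{\hbar}\hat{\sigma}^x, \nonumber \\
\hat{F}^y & = & \frac{m^*}{i\hbar}[\hat{v}_y,\hat{H}] = -\frac{2 \alpha^2 m^*}{\hbar^3}\,\hat{p}_x \hat{\sigma}^z - \frac{d V_{\rm conf}(\hat{y})}{d \hat{y}},\end{aligned}$$ which coincides with the $y$-component of Eq. (\[eq:force\]) since $(\hat{\bf p} \times {\bf z})\cdot {\bf y} = -\hat{p}_x$. The Rashba term contributes only through the spin commutator, which is the origin of both the $\hat{\sigma}^z$ dependence and the $\alpha^2$ scaling of the transverse SO “force”.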
Nevertheless, analyzing the dynamics of spin and probability densities in terms of the action of local forces can be insightful for particles described by wave packets (whose probability distribution is small compared to the typical length scale over which the force varies). [@wave_packet; @ballentine] Therefore, we examine in Fig. \[fig:force\] the transverse SO “force” $\langle \hat{F}_y \rangle$ in the spin wave packet state, which at $t=0$ resides in the left lead as fully spin-polarized (along the $z$-axis) and spatially localized wave function [@ohe; @serra] $$\label{eq:packet} \Psi(t=0) = C \sin{\left( \frac{\pi y} {(L_y+1)a} \right)} e^{i k_x x- \delta k_x^2 x^2/4} \otimes \chi_\sigma.$$ This is a pure and separable quantum state $|\Psi(t=0) \rangle = |\Phi\rangle \otimes |\sigma \rangle$ in the tensor product of the orbital and spin Hilbert spaces ${\mathcal H}_o \otimes {\mathcal H}_s$. The orbital factor state $\langle x,y |\Phi \rangle$ consists of the lowest subband of the hard wall transverse confining potential and a Gaussian wave packet along the $x$-axis whose parameters are chosen to be $k_xa=0.44$ and $\delta k_xa=0.1$ ($C$ is the normalization constant determined from $\langle \Phi | \Phi \rangle=1$). The spin factor state is an eigenstate of $\hat{\sigma}^z$, i.e., $\chi_\uparrow = \left(\begin{array}{c} 1 \\ 0 \end{array} \right)$ or $\chi_\downarrow = \left( \begin{array}{c} 0 \\ 1 \end{array} \right)$. Unlike the case [@schliemann_force] of an infinite 2DEG, the exact solutions of the Heisenberg equation of motion for $\hat{\sigma}^z_H(t)$, $\hat{y}_H(t)$ and $\hat{\bf p}_H(t)$ entering in Eq. (\[eq:force\]) are not available for quantum wires of finite width. Thus, we compute the expectation value $\langle \Psi(t) | \hat{F}_y | \Psi(t) \rangle$ in the Schr" odinger picture by applying the evolution operators $e^{-i\hat{H}t/\hbar}$ present in Eq. (\[eq:force\]) on the wave functions $|\Psi(t) \rangle = \sum_{n} e^{-iE_n t/\hbar} |E_n \rangle \langle E_n| \Psi(t=0) \rangle$. To obtain the exact eigenstates [@serra; @governale; @usaj] $|E_n \rangle$ and eigenvalues $E_n$, we employ the discretized version of the Hamiltonian Eq. (\[eq:rashba\]). That is, we represent the Hamiltonian of the Rashba spin-split quantum wire in the basis of states $|{\bf m} \rangle \otimes |\sigma \rangle$, where $|{\bf m} \rangle$ are $s$-orbitals $\langle {\bf r}|{\bf m}\rangle = \psi({\bf r}-{\bf m})$ located at sites ${\bf m}=(m_x,m_y)$ of the $L_x \times L_y$ lattice with the lattice spacing $a$ (typically [@purity] $a \simeq 3$ nm). This representation extracts the two energy scales from the Rashba Hamiltonian Eq. (\[eq:rashba\]): $t_{\rm o}=\hbar^2/(2m^*a^2)$ characterizing hopping between the nearest-neighbor sites without spin-flip; and $t_{\rm SO}=\alpha/ 2 a$ for the same hopping process when it involves spin flip. [@accumulation; @purity] The wave vector of the Gaussian packet $k_xa=0.44$ is chosen [@accumulation] to correspond to the Fermi energy $E_F = -3.8 t_{\rm o}$ close to the bottom of the band where tight-binding dispersion relation reduces to the parabolic one of the Hamiltonian Eq. (\[eq:rashba\]). In this representation one can directly compute the commutators in the definition of the force operator Eq. (\[eq:force\]), thereby bypassing subtleties which arise when evaluating the transverse component of the force operator $-dV_{\rm conf}(\hat{y}_H){\bf y}/d\hat{y}_H$ stemming from the hard wall boundary conditions. 
[@rokhsar] ![](spin_force.eps "fig:") ![](y_motion.eps "fig:") ![(Color online) The expectation value of the transverse component of the SO force operator (upper panel) in the quantum state of the propagating spin wave packet along the two-probe nanowire. The middle panel shows the corresponding transverse position of the center of the wave packet as a function of its longitudinal coordinate. The initial state in the left lead is the fully spin-polarized wave packet Eq. (\[eq:packet\]), which is injected into the SO region of the size $L_x \times L_y \equiv 100a \times 31a$ ($a \simeq 3$ nm) with strong Rashba coupling $t_{\rm SO}=\alpha/2a=0.1 t_{\rm o}$ and the corresponding spin precession length $L_{\rm SO} = \pi t_{\rm o} a/2t_{\rm SO} \approx 15.7a<L_x$ (middle panel) or weak SO coupling $t_{\rm SO} = 0.01 t_{\rm o}$ and $L_{\rm SO} \approx 157a>L_x$ (lower panel).[]{data-label="fig:force"}](y_motion_001.eps "fig:")

Figure \[fig:force\] shows that as soon as the front of the spin-polarized wave packet enters the strongly SO coupled region, its center $\langle \hat{y} \rangle (t) = \langle \Psi(t) | \hat{y} | \Psi(t) \rangle$ will be deflected along the $y$-axis in the same direction as the transverse SO “force”. However, due to its inertia the packet does not follow the fast oscillations of the SO “force” occurring on the scale of the spin precession length [@accumulation; @purity] $L_{\rm SO} = \pi t_{\rm o}a/2t_{\rm SO}$ on which the spin precesses by an angle $\pi$ (note that the spin splitting generates a finite difference of the Fermi momenta, which is the same for all subbands of the quantum wire in the case of parabolic energy-momentum dispersion, so that $L_{\rm SO}$ is equal for all channels [@governale]).
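A minimal numerical sketch of this dynamics is given below. It uses $\hbar = a = t_{\rm o} = 1$ units, a much smaller lattice than in Fig. \[fig:force\], one common sign convention for the Rashba hoppings, hard-wall boundaries on the whole lattice (the "leads" are modeled only by switching $t_{\rm SO}$ off outside the central wire region), and illustrative packet parameters; it only shows how the discretized Hamiltonian is assembled and how $\langle\hat{y}\rangle(t)$ and the spin polarization are tracked, and is not meant to reproduce the published curves.

```python
import numpy as np

# --- lattice and model parameters (illustrative; hbar = a = t_o = 1) ---
Lx, Ly = 80, 11                # lattice sites
t_o, t_so = 1.0, 0.1           # spin-conserving and Rashba (spin-flip) hoppings, t_so = alpha/(2a)
wire = (30, 70)                # Rashba coupling is nonzero only for 30 <= x < 70
x0, kx, dk = 15.0, 0.44, 0.1   # packet center, wave vector and momentum spread

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def idx(x, y):                 # flatten (site, spin): first index of the 2x2 spin block
    return 2 * (x * Ly + y)

# --- discretized Rashba Hamiltonian (hard walls; one possible sign convention) ---
N = 2 * Lx * Ly
H = np.zeros((N, N), complex)
for x in range(Lx):
    for y in range(Ly):
        i = idx(x, y)
        a_so = t_so if wire[0] <= x < wire[1] else 0.0
        if x + 1 < Lx:         # <x,y|H|x+1,y> = -t_o I + i t_so sigma_y
            j = idx(x + 1, y)
            hop = -t_o * I2 + 1j * a_so * sy
            H[i:i+2, j:j+2] = hop
            H[j:j+2, i:i+2] = hop.conj().T
        if y + 1 < Ly:         # <x,y|H|x,y+1> = -t_o I - i t_so sigma_x
            j = idx(x, y + 1)
            hop = -t_o * I2 - 1j * a_so * sx
            H[i:i+2, j:j+2] = hop
            H[j:j+2, i:i+2] = hop.conj().T

# --- initial state: lowest transverse subband times a Gaussian along x, spin-up ---
psi = np.zeros(N, complex)
for x in range(Lx):
    envelope = np.exp(1j * kx * x - dk**2 * (x - x0)**2 / 4.0)
    for y in range(Ly):
        psi[idx(x, y)] = np.sin(np.pi * (y + 1) / (Ly + 1)) * envelope
psi /= np.linalg.norm(psi)

# --- exact evolution through the eigenbasis; track <y>(t) and the spin polarization ---
evals, evecs = np.linalg.eigh(H)
coef = evecs.conj().T @ psi
ygrid = np.tile(np.arange(Ly), Lx)
for t in np.linspace(0.0, 60.0, 7):
    psit = evecs @ (np.exp(-1j * evals * t) * coef)
    c = psit.reshape(Lx * Ly, 2)                 # amplitudes c_{m,sigma}
    dens = (np.abs(c) ** 2).sum(axis=1)
    y_mean = float((dens * ygrid).sum())         # transverse position of the packet center
    rho_s = np.einsum('ms,mt->st', c, c.conj())  # reduced spin density matrix (trace over sites)
    P_z = float(np.trace(rho_s @ sz).real)       # z-component of the spin polarization vector
    print(f"t = {t:5.1f}   <y> = {y_mean:6.3f}   P_z = {P_z:+.3f}")
```

Running the sketch, one can watch $\langle \hat{y}\rangle$ deviate from the middle of the wire and $P_z$ oscillate with a shrinking envelope once the packet enters the Rashba region, in qualitative agreement with Figs. \[fig:force\] and \[fig:decoherence\].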
In contrast to an infinite 2DEG of the intrinsic spin Hall effect, [@sinova; @rashba_eq; @zhang] in quantum wires electron motion is confined in the transverse direction and the effective momentum-dependent Rashba magnetic field ${\bf B}_R({\bf k})$ is, therefore, nearly parallel to this direction. [@governale; @purity] Thus, the change of the direction of the transverse SO “force” is due to the fact that the $z$-axis polarized spin will start precessing within the SO region since it is not an eigenstate of the Zeeman term $\hat{\bm \sigma} \cdot {\bf B}_R({\bf k})$ \[i.e., of the Rashba term in Eq. (\[eq:rashba\])\]. The transverse SO “force” and the motion of the center of the wave packets in Fig. \[fig:force\] suggests that when [*two*]{} electrons with opposite spin-polarizations are injected [*simultaneously*]{} into the SO coupled quantum wire with perfectly homogeneous [@ohe] Rashba coupling, the initially unpolarized mixed spin state will evolve during propagation through the wire to develop a non-zero spin density at its lateral edges. This intuitive picture is confirmed by plotting in Fig. \[fig:spin\_density\] the spin density, $$\begin{aligned} \label{eq:spin_density} {\bf S}_{\bf m}(t) & = & \frac{\hbar}{2} \langle \Psi(t) | \hat{\bm \sigma} \otimes |{\bf m} \rangle \langle {\bf m}| \Psi(t) \rangle \nonumber \\ & = & \frac{\hbar}{2} \sum_{\sigma,\sigma^\prime} c_{{\bf m},\sigma^\prime}^*(t) c_{{\bf m},\sigma}(t) \langle \sigma^\prime | \hat{\bm \sigma}| \sigma \rangle,\end{aligned}$$ corresponding to the coherent evolution of two spin wave packets, $|\Psi (t=0) \rangle = |\Phi \rangle \otimes |\!\! \uparrow \rangle$ and $|\Psi (t=0) \rangle = |\Phi \rangle \otimes |\!\! \downarrow \rangle$, across the wire. The mechanism underlying the decay of the transverse SO “force” intensity is explained in Fig. \[fig:decoherence\], where we demonstrate that (initially coherent) spin precession is also accompanied by [*spin decoherence*]{}. [@purity; @galindo] These two processes are encoded in the rotation of the spin polarization vector ${\bf P}$ and the reduction of its magnitude ($|{\bf P}| = 1$ for fully coherent pure states $\hat{\rho}_s^2 = \hat{\rho}_s$), respectively. The spin polarization vector is extracted from the density matrix $\hat{\rho}_s = (1+{\bf P} \cdot \hat{\bm \sigma})/2$ of the spin subsystem. [@ballentine] The spin density matrix $\hat{\rho}_s$ is obtained as the exact reduced density matrix at each instant of time by tracing the pure state density matrix $\hat{\rho}(t) = |\Psi (t) \rangle \langle \Psi(t)|$ over the orbital degrees of freedom, $$\begin{aligned} \label{eq:rho} \hat{\rho}_s (t) & = & {\rm Tr}_o |\Psi (t) \rangle \langle \Psi(t)| = \sum_{\bf m} \langle {\bf m} |\Psi (t) \rangle \langle \Psi(t)|{\bf m}\rangle \nonumber \\ & = & \sum_{\bf m,\sigma,\sigma^\prime} c_{{\bf m},\sigma}(t) |\sigma \rangle \langle \sigma^\prime|c_{{\bf m},\sigma^\prime}^*(t).\end{aligned}$$ The dynamics of the spin polarization vector and the spin density shown in Fig. \[fig:decoherence\] are in one-to-one correspondence $$\frac{\hbar}{2}{\bf P}(t) = \frac{\hbar}{2} {\rm Tr}_s \,\left[ \hat{\rho}_s(t) \hat{\bm{\sigma}} \right] = \sum_{\bf m} {\bf S}_{\bf m}(t).$$ The incoming quantum state from the left lead in Fig. \[fig:decoherence\] is separable $|\Psi (t=0) \rangle = \sum_{{\bf m},\sigma} c_{{\bf m},\sigma}(t=0) |{\bf m} \rangle \otimes |\sigma \rangle = |\Phi \rangle \otimes |\!\! \uparrow \rangle$, and therefore fully spin coherent $|{\bf P}|=1$. 
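This correspondence is easy to check directly from the amplitudes $c_{{\bf m},\sigma}$; the helper below (complementing the sketch above, and assuming a normalized array of shape "number of sites $\times$ 2") builds $\hat{\rho}_s$, the polarization vector, and the von Neumann entropy used as the entanglement measure in Eq. (\[eq:entropy\]) below, and confirms that a separable state indeed gives $|{\bf P}|=1$ and zero entropy.

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.diag([1.0, -1.0]).astype(complex)]

def spin_reduction(c):
    """c[m, sigma]: normalized amplitudes; returns (polarization vector P, entropy in bits)."""
    rho_s = np.einsum('ms,mt->st', c, c.conj())            # trace over orbital sites
    P = np.array([np.trace(rho_s @ s).real for s in PAULI])
    lam = np.clip(np.linalg.eigvalsh(rho_s), 1e-15, 1.0)   # eigenvalues are (1 +/- |P|)/2
    return P, float(-(lam * np.log2(lam)).sum())

# separable |Phi> x |up> state: fully spin coherent, |P| = 1 and zero entropy
phi = np.random.randn(50) + 1j * np.random.randn(50)
phi /= np.linalg.norm(phi)
c_sep = np.stack([phi, np.zeros_like(phi)], axis=1)
print(spin_reduction(c_sep))   # -> P close to (0, 0, 1), entropy close to 0
```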
However, in the course of propagation through SO coupled quantum wires it will coherently evolve into a [*non-separable*]{} [@ballentine] state where spin and orbital subsystems of the same electron appear to be [*entangled*]{}. [@purity; @peres] Note that Fig. \[fig:decoherence\] also shows that at the instant when the center of the wave packet enters the wire region, its quantum state is already highly entangled as quantified by the non-zero von Neumann entropy (associated with the reduced density matrix of either the spin $\hat{\rho}_s$ or the orbital subsystem $\hat{\rho}_o$) $$\begin{aligned} \label{eq:entropy} S(\hat{\rho}_s) = S(\hat{\rho}_o) & = & -\frac{1+|{\bf P}|}{2} \log_2 \left( \frac{ 1+|{\bf P}| }{2} \right) \nonumber \\ && - \frac{1-|{\bf P}|}{2} \log_2 \left( \frac{1-|{\bf P}|}{2} \right),\end{aligned}$$ which is a unique measure [@galindo] of the degree of entanglement for pure bipartite states (such as the full state $|\Psi (t) \rangle$ which remains pure due to the absence of inelastic processes along the quantum wire). While this loss of spin coherence (or polarization) is analogous to the well-known DP spin relaxation in diffusive SO coupled systems, [@spintronics; @dp] here the decay of the spin polarization vector takes place without any scattering off impurities (or averaging over an ensemble of electrons propagating through ballistic SO coupled quantum dot structures [@chao]). Instead, it arises due to wave packet spreading (cf. lower panel of Fig. \[fig:decoherence\]), as well as due to the presence of interfaces [@purity] (the wave packet is partially reflected at the lead/SO-region interface for strong Rashba coupling) and boundaries [@purity; @chao] of the confined structure. Thus, the decoherence mechanism revealed by Fig. \[fig:decoherence\] is also highly relevant for the interpretation of experiments on the transport of spin coherence in high-mobility semiconductor [@awschalom] and molecular spintronic devices. [@cnt] The interplay of the oscillating and decaying (induced by spin precession and spin decoherence, respectively) transverse SO “force” and wave packet inertia leads to spin-$\uparrow$ electron exiting the wire with its center deflected toward the left lateral edge and the spin-$\downarrow$ density appearing on the right edge [@accumulation] for strong SO coupling $t_{\rm SO}=0.1t_{\rm o}$ in Figs. \[fig:force\] and  \[fig:spin\_density\]. This picture is only apparently counterintuitive to the naïve conclusion drawn from the form of the force operator itself Eq. (\[eq:force\]), which would suggest that spin-$\uparrow$ electron is always deflected to the right while moving along the Rashba SO region. While such situation appears in wires shorter than $L_{\rm SO}$ (as shown in the lower panel of Fig. \[fig:force\]), in general, one has to take into account the ratio $L_x/L_{\rm SO}$, as well as the strength of the SO force $\propto \alpha^2$, to decipher the sign of the spin accumulation on the lateral edges and the sign of the corresponding spin currents that will be pushed into the transverse leads attached at those edges. [@meso_hall] When we inject pairs of spin-$\uparrow$ and spin-$\downarrow$ polarized wave packets one after another, thereby simulating the flow of unpolarized ballistic current through the lead–wire–lead structure (where electron does not feel any electric field within the clean quantum wire region), [@accumulation] we find in Fig. 
\[fig:she\_accumulation\] that the deflection of the spin densities of individual electrons in the transverse direction will generate non-zero spin accumulation components $S_z({\bf r})$ and $S_x({\bf r})$ of the opposite sign on the lateral edges of the wire. While recent experiments find $S_z({\bf r})$ with such properties to be the strong signature of the spin Hall effect, [@kato; @wunderlich] here we confirm the conjecture of Ref.  that $S_x({\bf r})$ can also emerge as a distinctive feature of the mesoscopic spin Hall effect in confined Rashba spin-split structures—it arises due to the precession (Fig. \[fig:decoherence\]) of transversally deflected spins. Note that $S_x({\bf r}) \neq 0$ accumulations cannot be explained by arguments based on the texturelike structure [@governale] of the spin density of the eigenstates in infinite Rashba quantum wires where [@governale; @usaj] $S_x({\bf r}) \equiv 0$. In conclusion, the spin-dependent force operator, defined by the SO coupling terms of the Hamiltonian of a ballistic spin-split semiconductor quantum wire, will act on the injected spin-polarized wave packets to deflect spin-$\uparrow$ and spin-$\downarrow$ electrons in the opposite transverse directions. This effect, combined with precession and decoherence of the deflected spin, will lead to non-zero $z$- and $x$-components of the spin density with opposite signs on the lateral edges of the wire, which represents an example of the spin Hall effect phenomenology [@extrinsic; @accumulation] that has been observed in recent experiments. [@kato; @wunderlich] The intuitively appealing picture of the transverse SO quantum-mechanical force operator (as a counterpart of the classical Lorentz force), which depends on spin through $\hat{\sigma}^z$, the strength of the Rashba SO coupling through $\alpha^2$, and the momentum operator through the cross product $\hat{\bf p} \times {\bf z}$, allows one to differentiate symmetry properties of the two spin Hall accumulation components upon changing the Rashba electric field (i.e., the sign of $\alpha$) or the direction of the packet propagation: $S_z({\bf r})_{-\alpha} = S_z({\bf r})_{\alpha}$ and $S_z({\bf r})_{- {\bf p}} = - S_z({\bf r})_{\bf p}$ vs. $S_x({\bf r})_{-\alpha} = - S_x({\bf r})_{\alpha}$ (due to opposite spin precession for $-\alpha$) and $S_x({\bf r})_{-{\bf p}} = S_x({\bf r})_{\bf p}$. These features are in full accord with experimentally observed behavior of the $S_z({\bf r})$ spin Hall accumulation under the inversion of the bias voltage, [@wunderlich] as well as with the formal quantitative quantum transport analysis [@accumulation] of the [*nonequilibrium*]{} spin accumulation induced by the flow of unpolarized charge current through ballistic SO coupled two-probe nanostructures. Finally, we note that $\alpha^2$ dependence of the transverse SO “force” is incompatible with the $\alpha$-independent (i.e., “universal”) intrinsic spin Hall conductivity $\sigma_{sH}=e/8\pi$ (describing the pure transverse spin Hall current $j_y^z = \sigma_{sH} E_x$ of the $z$-axis polarized spin in response to the longitudinally applied electric field $E_x$) of an infinite homogeneous Rashba spin-split 2DEG in the clean limit, which has been obtained within various bulk transport approaches. 
[@sinova; @rashba_eq; @zhang; @wave_packet; @niu] On the other hand, it supports the picture of the SO coupling dependent spin Hall accumulations [@accumulation] $S_z({\bf r})$, $S_x({\bf r})$ and the corresponding spin Hall conductances [@meso_hall] (describing the $z-$ and the $x$-component of the nonequilibrium spin Hall current in the transverse leads attached at the lateral edges of the Rashba wire) of the [*mesoscopic*]{} spin Hall effect in confined structures. [@ring_hall; @meso_hall; @meso_hall_1] By the same token, the sign of the spin accumulation on the edges (i.e., whether the spin current flows to the right or to the left in the transverse direction [@meso_hall]) cannot be determined from the properties [@niu] of $\sigma_{sH}$. Instead one has to take into account the strength of the SO coupling $\alpha$ and the size of the device in the units of the characteristic mesoscale $L_{\rm SO}$, as demonstrated by Figs. \[fig:force\] and  \[fig:she\_accumulation\]. This requirement stems from the oscillatory character of the transverse SO “force” brought about by the spin precession of the deflected spins in the effective magnetic field of the Rashba SO coupled wires of finite width. We are grateful to S. Souma, S. Murakami, Q. Niu, and J. Sinova for insightful discussions and E. I. Rashba for enlightening criticism. Acknowledgment is made to the donors of the American Chemical Society Petroleum Research Fund for partial support of this research. [1]{} , edited by C. L. Chien and C. W. Westgate (Plenum, New York, 1980). Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Science [**306**]{}, 1910 (2004). J. Wunderlich, B. Kaestner, J. Sinova, and T. Jungwirth, Phys. Rev. Lett. [**94**]{}, 047204 (2005). E. I. Rashba, Physica E [**20**]{}, 189 (2004). I. Žuti' c, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. [**76**]{}, 323 (2004). M. I. D’yakonov and V. I. Perel’, Pis’ma Zh. Eksp. Theor. Fiz. [**13**]{}, 657 (1971) \[JETP Lett. [**13**]{}, 467 (1971)\]; J. E. Hirsch, Phys. Rev. Lett. [**83**]{}, 1834 (1999); S. Zhang, Phys. Rev. Lett. [**85**]{}, 393 (2000). B. A. Bernevig and S.-C. Zhang, cond-mat/0412550. H.-A. Engel, B. I. Halperin, and E. I. Rashba, cond-mat/0505535. S. Murakami, N. Nagaosa, and S.-C. Zhang, Science [**301**]{}, 1348 (2003); Phys. Rev. B [**69**]{}, 235206 (2004). J. Sinova, D. Culcer, Q. Niu, N. A. Sinitsyn, T. Jungwirth, and A. H. MacDonald, Phys. Rev. Lett. [**92**]{}, 126603 (2004). D. Culcer, J. Sinova, N. A. Sinitsyn, T. Jungwirth, A. H. MacDonald, and Q. Niu, Phys. Rev. Lett. [**93**]{}, 046602 (2004); G. Sundaram and Q. Niu, Phys. Rev. B [**59**]{}, 14915 (1999). E. I. Rashba, Phys. Rev. B [**70**]{}, 161201(R) (2004); [**68**]{}, 241315(R) (2003). S. Zhang and Z. Yang, Phys. Rev. Lett. [**94**]{}, 066602 (2005). S. Souma and B. K. Nikoli' c, Phys. Rev. Lett. [**94**]{}, 106602 (2005). B. K. Nikoli' c, L. P. Z\^ arbo, and S. Souma, cond-mat/0408693 (to appear in Phys. Rev. B). L. Sheng, D. N. Sheng, and C. S. Ting, Phys. Rev. Lett. [**94**]{}, 016602 (2005); E. M. Hankiewicz, L. W. Molenkamp, T. Jungwirth, and J. Sinova, Phys. Rev. B [**70**]{}, 241301(R) (2004). B. K. Nikoli' c, S. Souma, L. P. Z\^ arbo, and J. Sinova, cond-mat/0412595. C.-H. Chang, A. G. Mal’shukov, and K. A. Chao, Phys. Rev. B [**70**]{}, 245309 (2004); O. Zaitsev, D. Frustaglia, and K. Richter, Phys. Rev. Lett. [**94**]{}, 026809 (2005). B. K. Nikoli' c and S. Souma, Phys. Rev. B [**71**]{}, 195328 (2005). P. Pfeffer, Phys. Rev. B [**59**]{}, 15902 (1998). J. Schliemann, D. 
Loss, and R. M. Westervelt, Phys. Rev. Lett. [**94**]{}, 206801 (2005). J. Li, L. Hu, and S.-Q. Shen, Phys. Rev. B [**71**]{}, 241305(R) (2005). L. E. Ballentine, [*Quantum Mechanics: A Modern Development*]{} (World Scientific, Singapore, 1998). M. Val' in-Rodr' iguez, A. Puente, and L. Serra, Eur. Phys. J. B [**34**]{}, 359 (2003). J. I. Ohe, M. Yamamoto, T. Ohtsuki, and J. Nitta, cond-mat/0409161. M. Governale and U. Z" ulicke, Solid State Comm. [**131**]{}, 581 (2004). G. Usaj and C. A. Balseiro, cond-mat/0405065. D. S. Rokhsar, Am. J. Phys. [**64**]{}, 1416 (1996). M. I. D’yakonov and V. I. Perel’, Fiz. Tverd Tela [**13**]{}, 3581 (1971) \[Sov. Phys. Solid Stat [**13**]{}, 3023 (1972)\]; Zh. ' Eksp. Teor. Fiz. [**60**]{}, 1954 (1971) \[Sov. Phys. JETP [**33**]{}, 1053 (1971)\]. A. Galindo and A. Martin-Delgado, Rev. Mod. Phys. [**74**]{}, 347 (2002). A. Peres and D. R. Terno, Rev. Mod. Phys. [**76**]{}, 93 (2004). J. M. Kikkawa and D. D. Awschalom, Nature (London) [**397**]{}, 139 (1999). B. W. Alphenaar, K. Tsukagoshi, and M. Wagner, J. of Appl. Phys. [**89**]{}, 6863 (2001). P. Zhang, J. Shi, D. Xiao, and Q. Niu, cond-mat/0503505.
--- abstract: 'We analytically and numerically investigate the properties of $s$-wave holographic superconductors by considering the effects of scalar and gauge fields on the background geometry in five dimensional Einstein-Gauss-Bonnet gravity. We assume the gauge field to be in the form of the Power-Maxwell nonlinear electrodynamics. We employ the Sturm-Liouville eigenvalue problem for analytical calculation of the critical temperature and the shooting method for the numerical investigation. Our numerical and analytical results indicate that higher curvature corrections affect condensation of the holographic superconductors with backreaction. We observe that the backreaction can decrease the critical temperature of the holographic superconductors, while the Power-Maxwell electrodynamics and Gauss-Bonnet coefficient term may increase the critical temperature of the holographic superconductors. We find that the critical exponent has the mean-field value $\beta=1/2$, regardless of the values of the Gauss-Bonnet coefficient, backreaction and Power-Maxwell parameters.' address: | $^1$ Physics Department and Biruni Observatory, College of Sciences, Shiraz University, Shiraz 71454, Iran\ $^2$ Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), P.O. Box 55134-441, Maragha, Iran author: - 'Hamid Reza Salahi$^{1}$[^1], Ahmad Sheykhi$^{1,2}$[^2] and Afshin Montakhab $^{1}$ [^3]' title: 'Effects of Backreaction on Power-Maxwell Holographic Superconductors in Gauss-Bonnet Gravity' ---

Introduction
============

In $2008$, Hartnoll et al. put forward a new approach to the application of the gauge/gravity duality in condensed-matter physics [@Har; @Har2]. They claimed that some properties of strongly coupled superconductors can potentially be described by classical general relativity living in one higher dimension. This novel idea is usually called *holographic superconductors*. The motivation is to shed light on the mechanism governing high-temperature superconductors in condensed-matter physics. The holographic $s$-wave superconductor model, known as the Abelian-Higgs model, was first established in [@Har; @Har2]. The well-known duality between anti-de Sitter (AdS) spacetime and conformal field theories (CFT) [@Mal] implies that there is a correspondence between gravity in the $d$-dimensional spacetime and the gauge field theory living on its $(d-1)$-dimensional boundary. According to the idea of holographic superconductors given in [@Har], on the gravity side, a Maxwell field and a charged scalar field are introduced to describe the $U(1)$ symmetry and the scalar operator in the dual field theory, respectively. This holographic model undergoes a phase transition from a black hole with no hair (normal/conductor phase) to one with scalar hair at low temperatures (superconducting phase) [@Gub]. Following [@Har; @Har2], an overwhelming number of papers have appeared which investigate various properties of holographic superconductors from different perspectives [@Hor; @Mus; @RGC1; @P.GWWY; @P.BGRL; @P.MRM; @P.CW; @P.ZGJZ; @RGC2]. The studies were also generalized to other gravity theories. In the context of Gauss-Bonnet gravity, the phase transition of holographic superconductors was explored in [@Wang1; @Wang2; @RGC3; @Ruth; @GBHSC]. The motivation is to study the effects of higher order gravity corrections on the critical temperature of the holographic superconductors.
Considering the holographic $p$-wave and $s$-wave superconductors in $(3+1)$-dimensional boundary field theories, it was shown that when the Gauss-Bonnet coefficient becomes larger, the operators on the boundary field theory become harder to condense [@RGC3]. Taking the backreaction of the gauge and scalar fields on the background geometry into account, numerical as well as analytical studies of the holographic superconductors in five dimensional Einstein-Gauss-Bonnet gravity were carried out in [@Ruth]. It was observed that the temperature of the superconductor decreases with increasing backreaction, although the effect of the Gauss-Bonnet coupling is more subtle: the critical temperature first decreases then increases as the coupling tends towards the Chern-Simons value in a backreaction dependent fashion [@Ruth]. In addition to the correction on the gravity side of the action, it is also interesting to consider corrections to the gauge field on the matter side of the action. In particular, it is interesting to investigate the effects of nonlinear corrections to the gauge field on the condensation and critical temperature of the holographic superconductors. It was argued that in the Schwarzschild AdS black hole background, the higher nonlinear electrodynamics corrections make the condensation harder [@Zi; @shey1]. When the gauge field is in the form of Born-Infeld nonlinear electrodynamics, analytical studies, based on the Sturm-Liouville eigenvalue problem, of holographic superconductors in Einstein [@AnalyBI] and Gauss-Bonnet gravity [@Gan1; @Lala] have been carried out. In the background of the $d$-dimensional Schwarzschild AdS black hole, the properties of Power-Maxwell holographic superconductors have been explored in the probe limit [@PM] and away from the probe limit [@PMb]. In our recent paper [@SSM], we analytically as well as numerically studied the holographic $s$-wave superconductors in Gauss-Bonnet gravity with Power-Maxwell electrodynamics. However, in that work, we did not investigate the effects of backreaction and limited our study to the case where the scalar and gauge fields do not affect the background metric. Our purpose in the present work is to disclose the effects of the backreaction on the phase transition and critical temperature of the Power-Maxwell holographic superconductors in Gauss-Bonnet gravity. The organization of this paper is as follows. In the next section, we provide the basic field equations of Power-Maxwell holographic superconductors in the background of Gauss-Bonnet-AdS black holes by taking into account the backreaction. In section \[M\], based on the Sturm-Liouville eigenvalue problem, we find a relation between the critical temperature and charge density of the backreacting holographic superconductor with Maxwell field in Gauss-Bonnet gravity. In section \[PM\], we extend the study to the case of Power-Maxwell nonlinear electrodynamics. By applying the shooting method, we also compare our analytical calculations with numerical results in this section. In section \[CriE\], we calculate the critical exponent and the condensation values of the Power-Maxwell holographic superconductor with backreaction. We finish with conclusions and discussion in section \[Con\].
Backreacting Gauss-Bonnet Holographic Superconductors {#Int} ====================================================== To study a $(3+1)$-dimensional holographic superconductor, we begin with a $(4+1)$-dimensional action of Einstein-Gauss-Bonnet-AdS gravity which is coupled to a Power-Maxwell field and a charged scalar field, $$\begin{aligned} \label{Act} S=&\int d^{5}x \sqrt{-g}\frac{1}{2\kappa^2} \left[( R-2 \Lambda) +\frac{\alpha}{2} \left(R^{2}-4 R^{\mu\nu} R_{\mu\nu}+R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}\right)\right] \nonumber \\&+ \int d^{5}x \sqrt{-g}\left[-b(F_{\mu\nu}F^{\mu\nu})^q- |\nabla\psi- ieA \psi|^2 - m^2 |\psi|^2 \right],\end{aligned}$$ where $\kappa^2=8\pi G_5$, with $G_5$ the $5$-dimensional gravitational constant, $\Lambda =-{6}/{l^2}$ is the negative cosmological constant, $l$ is the AdS radius of spacetime, and $\alpha$ is the Gauss-Bonnet coefficient. Here, $R$, $R_{\mu\nu}$ and $R_{\mu\nu\sigma\rho}$ are, respectively, the Ricci scalar, the Ricci tensor and the Riemann curvature tensor. $F^{\mu\nu}$ is the electromagnetic field tensor and $q$ is the power parameter of the Power-Maxwell field. $\psi$ is a complex scalar field with charge $e$ and mass $m$, and $A$ is the gauge field. Also, $b$ is a coupling constant whose sign, $(-1)^{q+1}$, is fixed by the positivity of the energy density [@HM; @HM2]. For later convenience we shall take $b={(-1/2)^{q+1}}$. With this choice, the Power-Maxwell Lagrangian will reduce to the Maxwell Lagrangian in the limit $q=1$. It is easy to check that by re-scaling $\psi \rightarrow \tilde{\psi}/e$, $\phi \rightarrow \tilde{\phi}/e$ and $b \rightarrow \tilde{b} e^{2q-2}$, a factor $1/e^2$ will appear in front of the matter part of the action (\[Act\]). Thus, the probe limit can be deduced when $\kappa^2/e^2 \rightarrow 0$. In order to take the backreaction into account, in this paper, we keep $\kappa^2/e^2$ finite and for simplicity we set $e$ equal to unity. Taking the backreaction into account, the plane-symmetric black hole solution with an asymptotically AdS behavior in $5$-dimensional spacetime may be written as $$\begin{aligned} \label{metric} ds^{2}&=-e^{-\chi(r)} f(r) dt^{2}+\frac{dr^2}{f(r)}+\frac{r^{2}}{l_{\rm eff}^2}\left(dx^2+dy^2+dz^2\right),\end{aligned}$$ where $$\begin{aligned} \label{le} l_{\rm eff}^2\equiv\frac{2 \alpha}{1-\sqrt{1-\frac{4\alpha}{l^2}}},\end{aligned}$$ is the effective AdS radius of the spacetime. The ratio $l_{\rm eff}/l$ is smaller than unity for $\alpha>0$, while for $\alpha<0$ it is larger than unity. The superconducting phase transition is dual to the formation of a charged matter field in the bulk; for this phase transition to occur, the charged matter field must be prevented from falling into the black hole. We therefore expect that a greater curvature of the bulk spacetime, corresponding to positive values of $\alpha$, makes condensation harder. Conversely, for $\alpha<0$ we shall see that the scalar field forms more easily, i.e., at higher temperature. The Hawking temperature of the black hole is given by $$\begin{aligned} \label{Temp} T= \frac{f'(r_+) e^{-\chi(r_+)/2}}{4\pi},\end{aligned}$$ where $r_+$ is the black hole horizon and the prime denotes the derivative with respect to $r$. We choose the electromagnetic gauge potential and scalar field as $$\begin{aligned} \label{phipsi} \psi=\psi(r), \ \ \ \ \ \ \ \ \ A=\phi(r)dt.\end{aligned}$$ Without loss of generality, we can take $\phi(r)$ and $\psi(r)$ real. 
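As a quick sanity check of the statement that $l_{\rm eff}/l<1$ for $\alpha>0$ and $l_{\rm eff}/l>1$ for $\alpha<0$, the ratio can be evaluated directly from Eq. (\[le\]). The following minimal Python sketch is our own illustration (not part of the original analysis), assuming $l=1$ and the Gauss-Bonnet values used later in the paper:

```python
import numpy as np

def l_eff_ratio(alpha, l=1.0):
    """Effective AdS radius ratio l_eff/l from l_eff^2 = 2*alpha/(1 - sqrt(1 - 4*alpha/l^2))."""
    l_eff_sq = 2.0 * alpha / (1.0 - np.sqrt(1.0 - 4.0 * alpha / l**2))
    return np.sqrt(l_eff_sq) / l

for alpha in (-0.19, -0.1, 0.1, 0.2):
    print(f"alpha = {alpha:+.2f}  ->  l_eff/l = {l_eff_ratio(alpha):.4f}")
# alpha = -0.19 -> ~1.079, alpha = -0.10 -> ~1.045, alpha = +0.10 -> ~0.942, alpha = +0.20 -> ~0.851
```

Consistent with the discussion above, positive $\alpha$ shrinks the effective AdS radius (condensation harder) while negative $\alpha$ enlarges it.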
The equation of motions can be obtained by varying action (\[Act\]) with respect to the metric and matter fields. We find: $$\begin{aligned} \label{eompsi} \psi ''+ \psi '\left(\frac{f'}{f}+\frac{3}{r}-\frac{\chi'}{2}\right) +\psi \left(\frac{\phi^2}{f^2}-\frac{m^2}{f}\right)=0,\end{aligned}$$ $$\begin{aligned} \label{eomphi} \phi''+\phi' \left(\frac{3}{(2q-1)r}+\frac{\chi'}{2} \right)- \frac{2e^{(1-q)\chi}\psi^2 \phi'^{2-2q}}{q(2q-1)f}\phi=0,\end{aligned}$$ $$\begin{aligned} \label{eomchi} \chi' \Big(1-\frac{2\alpha f}{r^2} \Big) + \frac{4r\kappa^2}{3} \Big(\psi'^2+ \frac{e^{\chi}\phi^2\psi^2}{f^2} \Big)=0,\end{aligned}$$ $$\begin{aligned} \label{eomf} f' \Big(1-\frac{2\alpha f}{r^2} \Big) +\frac{2f}{r}-\frac{4r}{l^2}+ \frac{2r\kappa^2}{3} \Big(m^2 \psi^2+f\psi'^2+ \frac{e^{\chi}\phi^2\psi^2}{f}+\frac{(2q-1)}{2} e^{q\chi}\phi'^{2q} \Big)=0.\end{aligned}$$ In order to solve the above field equations, we need appropriate boundary conditions both on the horizon $r_+$, which is defined by $f(r_+)$=0, and on the AdS boundary where $r\rightarrow \infty$. On the horizon, the regularity condition imposes $$\begin{aligned} \label{HBC1} \phi(r_+)=0, \ \ \ \ \ \ \ \ \ \ \psi'(r_+)=\frac{m^2 \psi(r_+)}{f'(r_+)},\end{aligned}$$ and thus from Eqs. (\[eomchi\]) and (\[eomf\]) we have $$\begin{aligned} \label{Hchi} \chi'(r_+)=-\frac{4\kappa^2 r_+}{3} \left( \psi'(r_+)^2+\frac{e^{\chi(r_+)}\phi'(r_+)^2 \psi(r_+)^2} {f'(r_+)^2} \right),\end{aligned}$$ $$\begin{aligned} \label{Hf} f'(r_+)= \frac{4r_+}{l^2} -\frac{2 \kappa^2 r_+}{3} \left(m^2 \psi(r_+)^2 + \frac{(2q-1)}{2} e^{q\chi(r_+)}\phi'(r_+)^{2q} \right).\end{aligned}$$ Since our solutions are asymptotically AdS, thus as $r\rightarrow \infty$, we have $$\begin{aligned} \label{BC} \chi(r) \rightarrow 0, \ \ \ \ f(r) \approx \frac{r^2}{l_{\rm eff}^2}, \ \ \ \ \phi(r) \approx \mu-\frac{\rho^{\frac{1}{2q-1}}}{r^{\frac{4-2q}{2q-1}}}, \ \ \ \ \psi \approx \frac{\psi_{-}}{r^{\Delta_{-}}} +\frac{\psi_{+}} {r^{\Delta_{+}}},\end{aligned}$$ where $\mu$ and $\rho$ are, respectively, chemical potential and charge density of the CFT boundary, and $\Delta_{\pm}$ is defined as $$\begin{aligned} \label{delta} \Delta_{\pm} = 2 \pm \sqrt {4 + m^2 l_{\rm eff}^2}.\end{aligned}$$ According to the AdS/CFT correspondence, $\psi_{\pm}=<\mathcal{O_{\pm}}>$, where $\mathcal{O_{\pm}}$ is the dual operator to the scalar field with the conformal dimension $\Delta_{\pm}$. We have the freedom to impose boundary conditions such that either $\psi_-$ or $\psi_+$ vanish. We prefer to keep fixed $\Delta_{\pm}$ while we vary $\alpha$, thus we set $\tilde{m}^2= m^2 l_{\rm eff}^2$. For example, for $\tilde{m}^2=-3$, we have $\Delta_+=3$ for all values of parameter $\alpha$. It is important to note that, unlike other known electrodynamics, the boundary condition for the gauge field $\phi(r)$ given in Eq. (\[BC\]), depends on the power parameter $q$. Using boundary condition (\[BC\]) and the fact that $\phi$ should be finite as $r \rightarrow \infty$, we require that $(4-2q)/{(2q-1)}>0$ which restricts $q$ to ranges as $1/2<q<2$. It is easier to work in the dimensionless variable, $z=r_+/r$, instead of variable $r$. 
Under this transformation, equations of motion (\[eompsi\])-(\[eomf\]) become $$\begin{aligned} \label{eompsiz} \psi'' + \left(\frac{f'}{f}-\frac{1}{z}-\frac{\chi'}{2}\right)\psi' + \frac{r_{+}^2}{z^4}\left(\frac{\phi^2 e^{\chi}}{f^2}- \frac{m^2}{f}\right)\psi=0,\end{aligned}$$ $$\begin{aligned} \label{eomphiz} \phi'' + \left(\frac{4q-5}{(2q-1)z}+\frac{\chi'}{2}\right)\phi' - \frac{2r_{+}^{2q}\psi^2 \phi'^{2-2q}}{(-1)^{2q} q (2q-1) z^{4q} f}\phi=0,\end{aligned}$$ $$\begin{aligned} \label{chiz} \chi' \left(1-\frac{2 \alpha z^2 f}{r_+^2}\right)-\frac{4 \kappa ^2 r_+^2}{3 z^3} \left(\frac{e^{\chi } \psi^2 \phi^2}{f^2}+\frac{z^4 \psi '^2}{r_+^2}\right)=0,\end{aligned}$$ $$\begin{aligned} \label{fz} && f' \left(1-\frac{2 \alpha z^2 f}{r_+^2}\right)-\frac{2 f}{z}+\frac{4 r_+^2}{ l^2z^3}-\frac{2\kappa ^2 r_+^2}{3 z^3} \Bigg[\frac{z^4 f \psi'^2}{r_+^2} \nonumber \\ && +\frac{e^{\chi} \psi^2 \phi^2}{f} +m^2 \psi^2 -\frac{1}{2} (1-2 q) e^{q \chi} (-1)^{2 q}\left(\frac{z^2 \phi'}{r_+}\right)^{2 q}\Bigg]=0.\end{aligned}$$ Here the prime indicates the derivative with respect to the new coordinate $z$ which ranges in the interval $[0,1]$, where $z=0$ and $z=1$ correspond to the boundary and horizon, respectively. Since near the critical point the expectation value of scalar operator ($<\mathcal{O_{\pm}}>$) is small, we can select it as an expansion parameter $$\begin{aligned} \label{so} \epsilon \equiv <\mathcal{O}_i>,\end{aligned}$$ where $i=\pm$. Using the fact that $\epsilon \ll$, we can expand $f$ and $\chi$ around the Gauss-Bonnet AdS spacetime as $$\begin{aligned} \label{fex} f=f_{0}+\epsilon^2 f_{2}+\epsilon^4f_{4}+...,\end{aligned}$$ $$\begin{aligned} \label{chiexpand} \chi=\epsilon^2\chi_{2}+\epsilon^4\chi_{4}+....\end{aligned}$$ Note that since we are interested in solution in which condensation is small, $\psi$ and $\phi$ can also be expanded as $$\begin{aligned} \psi=\epsilon \psi_{1}+\epsilon^3 \psi_{3}+\epsilon^5\psi_{5}+...,\end{aligned}$$ $$\begin{aligned} \label{phiex} \phi=\phi_{0}+\epsilon^2\phi_{2}+\epsilon^4\phi_{4}+...\end{aligned}$$ We further assume the chemical potential is expanded as [@CPH], $$\begin{aligned} \mu=\mu_{0}+\epsilon^2\delta \mu_{2}+...,\end{aligned}$$ where $\delta \mu_{2}>0$. Thus near the critical point for the order parameter as the function of chemical potential we have $$\begin{aligned} \epsilon\thickapprox\Bigg(\frac{\mu-\mu_{0}}{\delta \mu_{2}}\Bigg)^{1/2},\end{aligned}$$ It is obvious when $\mu \rightarrow \mu_{0}$, the order parameter approaches zero which indicate phase transition point. Thus phase transition occurs at the critical value $\mu_{c}=\mu_{0}$. Let us note that the order parameter grows with exponent $1/2$ which is the universal result from the Ginzburg-Landau mean field theory. In the next two sections we solve the field equations (\[eompsiz\])-(\[fz\]) by using expansions (\[fex\])-(\[phiex\]), for the linear Maxwell field as well as the nonlinear Power-Maxwell electrodynamics. Critical temperature of GB holographic superconductors with Maxwell field {#M} ========================================================================= In this section, by using the Sturm-Liouville eigenvalue problem, we obtain the relation between the critical temperature and charge density of the $s$-wave holographic superconductor with backreaction in Gauss-Bonnet-AdS black holes. The Maxwell theory corresponds to $q=1$. 
Employing the matching method, the holographic superconductors in Gauss-Bonnet gravity with backreaction have been studied for the Maxwell [@SK] and the nonlinear Born-Infeld electrodynamics [@Gan1]. However, it was shown that the matching method is less accurate than the Sturm-Liouville method, and the results obtained from the Sturm-Liouville method are in better agreement with the numerical results. At zeroth order in the expansion parameter, Eq. (\[eomphiz\]) may be written as $$\begin{aligned} \phi_0''(z) - \frac{\phi_0'(z)}{z} =0,\end{aligned}$$ which is the equation of motion of the electromagnetic field in the Maxwell theory and has solution $\phi_0(z)=\mu_0 \left(1-z^2 \right)$ with $\mu_0 = \rho/r_+^2$. At the critical point, we have $\mu_0=\mu_c= \rho/r_{+c}^2$, where $r_{+c}$ is the radius of the horizon at the phase transition point. Therefore, the solution for $\phi_0 (z)$ at the critical point may be written as $$\begin{aligned} \label{phi0} \phi_0=r_{+c} \zeta \left(1-z^2 \right), \ \ \ \ \ \ \ \ \zeta \equiv \rho/r_{+c}^3.\end{aligned}$$ Inserting this solution back into Eq. (\[fz\]), we find the metric function at the zeroth order: $$\begin{aligned} \label{f0} f_0(z)=r_+^2 g(z)=\frac{r_+^2}{2 \alpha z^2} \left(1-\sqrt{1-\frac{4 \alpha}{l^2} \left(1-z^4\right) +\frac{8 \alpha}{3} \zeta ^2 \kappa ^2 z^4 \left(1-z^2\right)}\right),\end{aligned}$$ where we have used the fact that on the horizon $f_{0}(1)=0$, and we have defined a new function $g(z)$ for convenience. We note that $f_{0}(z)$ reduces to the metric function of Gauss-Bonnet-AdS gravity in the probe limit as $\kappa\rightarrow0$. At the first order approximation, the asymptotic AdS boundary conditions for $\psi$ can be expressed as $$\begin{aligned} \psi_{1} \approx \frac{\psi_{-}}{r_{+}^{\Delta_{-}}} z^{\Delta_{-}}+ \frac{\psi_{+}}{r_{+}^{\Delta_{+}}}z^{\Delta_{+}}.\end{aligned}$$ Near the boundary $z=0$, we introduce a trial function $F(z)$ through $$\begin{aligned} \label{psiF} \psi_{1}(z)= \frac{<\mathcal{O}_i>}{r_{+}^{\Delta_{i}}} z^ {\Delta_{i}}F(z),\end{aligned}$$ with boundary conditions $F(0)=1$ and $F'(0)=0$. Substituting Eq. (\[psiF\]) into (\[eompsiz\]) we arrive at $$\begin{aligned} \label{F} &&F''(z)+F'(z) \left(\frac{g'(z)}{g(z)}+\frac{2 \Delta_i -1}{z} \right)\nonumber \\ &&+F(z)\Bigg[ \frac{\Delta_i}{z} \left( \frac{g'(z)}{g(z)} +\frac{\Delta_i -2}{z}\right) -\frac{m^2}{z^4 g(z)}+\frac{\zeta ^2 (z^2-1)^2}{g(z)^2 z^4}\Bigg]=0.\end{aligned}$$ We can convert Eq. (\[F\]) into the standard Sturm-Liouville form, namely $$\begin{aligned} \label{SL} [T(z)F'(z)]' - Q(z) F(z)+\zeta^2 P(z) F(z)=0,\end{aligned}$$ where $$\begin{aligned} \label{QP} Q(z)&=&-T(z)\Bigg[ \frac{\Delta_i}{z} \left( \frac{g'(z)}{g(z)}+\frac{\Delta_i -2}{z}\right) -\frac{m^2}{z^4 g(z)}\Bigg], \nonumber \\ P(z)&=& T(z)\frac{ (z^2-1)^2}{g(z)^2 z^4}.\end{aligned}$$ According to the Sturm-Liouville eigenvalue problem, $\zeta^2$ can be obtained via $$\begin{aligned} \label{zeta2} \zeta^2=\frac{\int_{0}^{1}[T(z)[F'(z)]^2+Q(z)F^2(z)]dz}{\int_{0}^{1}P(z)F^2(z)dz}.\end{aligned}$$ In order to determine $T(z)$ we need to solve the equation $$\begin{aligned} \label{Tz} T(z) p(z)=T'(z),\end{aligned}$$ where $p(z)$ is $$\begin{aligned} \label{p} p(z)=\left(\frac{g'(z)}{g(z)}+\frac{2 \Delta_i -1}{z} \right).\end{aligned}$$ Since $\alpha$ is small, we can expand the above expression for $p(z)$ and keep terms up to $\mathcal{O}(\alpha^2)$. Then we insert the result into Eq. 
(\[Tz\]) and obtain the following solution for $T(z)$ $$\begin{aligned} T(z)&=&z^{2 \Delta_i +1} \Bigg(3 \left(z^{-4}-1 \right)+2 \zeta ^2 \kappa ^2 \left(z^2-1\right)\Bigg)\nonumber \\ && \times \exp\Bigg\{\Bigg(2+\alpha \left[2 \zeta ^2 \kappa ^2 z^4(z^2-1)-3z^4+6\right]\Bigg) \frac{\alpha z^4}{6} \Bigg(2 \zeta ^2 \kappa ^2 \left(z^2-1\right)-3\Bigg) \Bigg\}.\end{aligned}$$ For small backreaction parameter, $\kappa$, the explicit expressions for $T(z)$, $Q(z)$ and $P(z)$ up to second order terms of $\alpha$ and $\kappa$, are given by $$\begin{aligned} T(z) &\approx & z^{2 \Delta_i +1} \Bigg \{ 3 (z^{-4} -1)+\Bigg[ 2 \zeta ^2 \kappa ^2 \Big(z^2-1\Big) \Bigg(1+\alpha \Big(1+3 \alpha-2 (5 \alpha +1) z^4 +6 \alpha z^8\Big) \nonumber \\ &&- \alpha \Big(z^2+1\Big) \Big(\alpha \left(2 z^4-3\right)-1\Big) \Bigg) \Bigg] \Bigg\} +\mathcal{O}(\alpha^3)+\mathcal{O}(\kappa^4),\end{aligned}$$ $$\begin{aligned} && Q(z) \approx z^{2 \Delta_i -5} \Bigg\{ 3 \Delta_i \Bigg(4+s \Delta_i z^4-\Delta_i +2 \alpha ^2 (\Delta_i +8) z^{12}-\alpha (5 \alpha +1) (\Delta_i +4) z^8\Bigg)\nonumber \\ && +2 \Delta_i \zeta ^2 \kappa ^2 z^4 \Bigg(6 \alpha ^2 z^8 \Big[\Delta_i +8-(\Delta_i +10) z^2\Big] -2 \alpha (5 \alpha +1) z^4 \Big[\Delta_i +4-(\Delta_i +6) z^2\Big] \nonumber \\ &&+s \Delta_i -s (\Delta_i +2) z^2+\Bigg)+3 \tilde{m}^2 \Bigg\}+\mathcal{O}(\alpha^3)+\mathcal{O}(\kappa^4),\end{aligned}$$ $$\begin{aligned} P(z) &&\approx \frac{1}{(z^2+1)^2} z^{2 \Delta_i -3} (z^2-1) \Bigg\{3 \left(\alpha ^2+2 \alpha -1\right) \left(z^2+1\right)-z^4 \left[2 (1-\alpha ) \zeta ^2 \kappa ^2+3 \alpha (\alpha +1)\right]\nonumber \\ && -3 \alpha (\alpha +1) z^6+\alpha ^2 z^8 \left(4 \zeta ^2 \kappa ^2+3\right)+3 \alpha ^2 z^{10} -2 \alpha ^2 \zeta ^2 \kappa ^2 z^{12} \Bigg \}+\mathcal{O}(\alpha^3)+\mathcal{O}(\kappa^4),\end{aligned}$$ where $s=3 \alpha^2+\alpha+1$ and hereafter we set $l=1$ for simplicity. In order to use Sturm-Liouville eigenvalue problem, we will use iteration method in the rest of this section. We take $\kappa=\kappa_n \Delta \kappa$ where $\Delta \kappa=\kappa_{n+1}-\kappa_n$ is step size of iterative procedure and we choose $\Delta \kappa=0.05$. Using the fact that $$\begin{aligned} \zeta^2 \kappa^2 = \zeta^2 \kappa_n^2=\Big(\zeta^2|_{\kappa_{n-1}}\Big)\kappa_n^2+ \mathcal{O}(\Delta \kappa)^4, \end{aligned}$$ and taking $\kappa_{-1}=\zeta|_{\kappa_{-1}}=0$, we obtain the minimum eigenvalue of Eq. (\[SL\]). We also take the trial function $F(z)=1-a z^2$. For example for $\tilde{m}^2=-3$, $\alpha=0.05$ and $\kappa=0$, we have $$\begin{aligned} \zeta^2_{\kappa_0}=\frac{-566.794 a^2+1096.44 a-737.301}{-7.02708 a^2+24.3982 a-26.0408},\end{aligned}$$ which attains its minimum $\zeta^2_{\rm min}=19.9456$ for $a=0.7147$. In the second iteration, we take $\kappa=0.05$ and $\zeta^2|_{\kappa_0}=19.9456$ in calculation of integrals in Eq. (\[zeta2\]), and therefore for $\zeta^2_{\kappa_1}$, we get $$\begin{aligned} \zeta^2_{\kappa_1}=\frac{-559.863 a^2+1083.88 a-730.968}{-7.09007 a^2+24.5832 a-26.189}, \end{aligned}$$ which has the minimum value $\zeta^2_{\rm min}=19.7936$ at $a=0.7119$. In the Table \[tab1\] we summarize our results for $\zeta_{\rm min}$ and $a$ with different values of Gauss-Bonnet coupling parameter $\alpha$, backreaction parameter $\kappa$ and reduced mass of scalar field $\tilde{m}^2$. 
  ---------- --------- --------------------- -------- --------------------- -------- --------------------- -------- ---------------------
   $\alpha$    $a$      $\zeta_{\rm min}^2$    $a$     $\zeta_{\rm min}^2$    $a$     $\zeta_{\rm min}^2$    $a$     $\zeta_{\rm min}^2$
   $-0.19$    0.7344    14.0472               0.7330   13.9836               0.7287   13.7949               0.7213   13.4909
   $-0.1$     0.7307    15.693                0.7290   15.6105               0.7238   15.3662               0.7146   14.9745
   $0$        0.7218    18.2300               0.7195   18.1097               0.7123   17.7546               0.6996   17.1902
   $0.1$      0.7050    22.1278               0.7015   21.9279               0.6904   21.3407               0.6705   20.4209
   $0.2$      0.67304   28.9837               0.6667   28.5719               0.6462   27.3751               0.6081   25.5561
  ---------- --------- --------------------- -------- --------------------- -------- --------------------- -------- ---------------------

  : Analytical results of $\zeta^2_{\rm min}$ and $a$ for the Maxwell case with different values of the backreaction $\kappa$ (columns from left to right: $\kappa=0,\ 0.05,\ 0.10,\ 0.15$) and GB parameter $\alpha$ for $\Delta_+$. Here we have taken $\tilde{m}^2=-3$.[]{data-label="tab1"}

  ---------- --------------------- --------------------- --------------------- --------------------- --------------------- ---------------------
   $\alpha$       Analytical            Numerical             Analytical            Numerical             Analytical            Numerical
   $-0.19$   0.2027 $\rho^{1/3}$   0.2050 $\rho^{1/3}$   0.1961 $\rho^{1/3}$   0.1986 $\rho^{1/3}$   0.1854 $\rho^{1/3}$   0.1882 $\rho^{1/3}$
   $-0.1$    0.1987 $\rho^{1/3}$   0.2008 $\rho^{1/3}$   0.1915 $\rho^{1/3}$   0.1938 $\rho^{1/3}$   0.1800 $\rho^{1/3}$   0.1825 $\rho^{1/3}$
   $0$       0.1935 $\rho^{1/3}$   0.1953 $\rho^{1/3}$   0.1854 $\rho^{1/3}$   0.1874 $\rho^{1/3}$   0.1726 $\rho^{1/3}$   0.1764 $\rho^{1/3}$
   $0.1$     0.1868 $\rho^{1/3}$   0.1882 $\rho^{1/3}$   0.1775 $\rho^{1/3}$   0.1791 $\rho^{1/3}$   0.1630 $\rho^{1/3}$   0.1646 $\rho^{1/3}$
   $0.2$     0.1771 $\rho^{1/3}$   0.1779 $\rho^{1/3}$   0.1666 $\rho^{1/3}$   0.1668 $\rho^{1/3}$   0.1499 $\rho^{1/3}$   0.1500 $\rho^{1/3}$
  ---------- --------------------- --------------------- --------------------- --------------------- --------------------- ---------------------

  : Comparison of analytical and numerical values of the critical temperature for the Maxwell case with $\tilde{m}^2=-3$ (column pairs from left to right: $\kappa=0.05,\ 0.10,\ 0.15$).[]{data-label="tab2"}

Combining Eqs. (\[Temp\]), (\[Hf\]), (\[phi0\]) and using the definition of $\zeta$, we obtain the following expression for the critical temperature $$\begin{aligned} \label{Tc} T_c= \frac{1}{ \pi} \left(1-\frac{ \kappa^2 \zeta^2_{\rm min}}{3}\right) \left[\frac{\rho}{\zeta_{\rm min}}\right]^{1/3}.\end{aligned}$$ We apply the iterative procedure to obtain the critical temperature for different values of $\alpha$, $\kappa$ and $\tilde{m}^2$. In table \[tab2\] we summarize the critical temperature of the phase transition of the holographic superconductor in Maxwell electrodynamics for $\Delta_+$, obtained analytically from the Sturm-Liouville method. For comparison, we also provide numerical results which we obtain by using the shooting method. In this method we solve Eq. (\[eompsiz\]) with $\phi(z)$ and $f(z)$ given in Eqs. (\[phi0\]) and (\[f0\]), and search for the charge densities $\rho$ for which the boundary condition $\psi_-=0$ is satisfied as $z \rightarrow 0$. This yields a discrete set of values of $\rho$; due to the stability condition [@SSGubser], we choose the lowest one, $\rho_c$, and, using the dimensionless quantity $T^3/\rho$, we calculate the critical temperature of the phase transition for different values of the Gauss-Bonnet and backreaction parameters. 
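The variational step and the resulting critical temperature are easy to reproduce. The short sketch below is our own cross-check (not the authors' code): it minimises the quoted rational expression for $\zeta^2_{\kappa_0}(a)$ over the trial-function parameter $a$, and then evaluates Eq. (\[Tc\]) with the $\alpha=0$ entries of Table \[tab1\], recovering the corresponding analytical coefficients of Table \[tab2\] (assuming the $\kappa$ column ordering stated in the captions above).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def zeta2_kappa0(a):
    """zeta^2(a) quoted in the text for m~^2 = -3, alpha = 0.05, kappa = 0,
    with trial function F(z) = 1 - a z^2."""
    num = -566.794 * a**2 + 1096.44 * a - 737.301
    den = -7.02708 * a**2 + 24.3982 * a - 26.0408
    return num / den

res = minimize_scalar(zeta2_kappa0, bounds=(0.0, 1.0), method="bounded")
print(f"a = {res.x:.4f}, zeta^2_min = {res.fun:.4f}")   # ~0.7147 and ~19.9456, as in the text

def Tc_coefficient(zeta2_min, kappa):
    """Coefficient c in T_c = c * rho^(1/3) for the Maxwell (q = 1) case, Eq. (Tc)."""
    zeta = np.sqrt(zeta2_min)
    return (1.0 - kappa**2 * zeta2_min / 3.0) / (np.pi * zeta**(1.0 / 3.0))

# alpha = 0 row of Table tab1 at kappa = 0.05, 0.10, 0.15:
for kappa, zeta2 in [(0.05, 18.1097), (0.10, 17.7546), (0.15, 17.1902)]:
    print(f"kappa = {kappa:.2f}:  T_c ~ {Tc_coefficient(zeta2, kappa):.4f} rho^(1/3)")
# prints ~0.1935, 0.1854, 0.1726, matching the analytical column of Table tab2
```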
Critical temperature of GB holographic superconductor with Power-Maxwell field {#PM} ============================================================================== In this section we investigate the behavior of holographic superconductor for the general case $q \neq 1$ away from probe limit in the Gauss-Bonnet gravity. Just like previous section, we need solution of Eqs. (\[eomphiz\]), (\[chiz\]) and (\[fz\]) in order to solve (\[eompsiz\]). Using expansion (\[fex\])-(\[phiex\]) and at the zeroth order of small parameter $\epsilon$, one can easily check that $\phi_0$ and $g$ have the following solution $$\begin{aligned} \label{phiPM} \phi_0(z)=\zeta r_{+_c} \left(1-z^{\frac{2 (2-q)}{2 q-1}}\right), \ \ \ \ \ \zeta=\frac{\rho^{\frac{1}{2q-1}}}{r_{+_c}^{\frac{3}{2q-1}}},\end{aligned}$$ $$\begin{aligned} \label{gz} g(z)=\frac{1}{2 \alpha z^2} \left(1-\sqrt{1-4 \alpha \left(1-z^4\right)-\frac{4^q (2-q)^{2 q-1}}{3 (2 q-1)^{2 q-2}} 2\alpha \kappa ^2 \zeta ^{2 q} \left[\left(z^{\frac{6 q}{2 q-1}}\right)-z^4\right]}\right),\nonumber \\\end{aligned}$$ where $g(z)=f_0(z)/r_+^2$. Expanding the above expression for $g(z)$ up to $\mathcal{O} (\kappa^4)$ and $\mathcal{O} (\alpha^2)$, one gets $$\begin{aligned} \label{gz} g(z) && \approx \frac{1}{z^2}-z^2+\frac{\left(z^4-1\right)^2}{z^2} \left[\alpha -2 \alpha ^2 \left(z^4-1\right)\right]+\frac{ (2 q-1)^{2-2 q} (4-2 q)^{2 q-1} \left(z^{\frac{6 q}{2 q-1}}-z^4\right)}{3 z^2} \nonumber \\ && \times \Big\{[6\alpha ^2 \left(z^4-1\right)^2 -2 \alpha \left(z^4-1\right)+1\Big\}\kappa ^2 \zeta ^{2 q}-\frac{(2 q-1)^{4-4 q} (4-2 q)^{4 q-2} \left(z^4-z^{\frac{6 q}{2 q-1}}\right)^2}{9 z^2} \nonumber \\ &&\times \left[ 6 \alpha ^2 \left(z^4-1\right)-\alpha\right] \kappa ^4 \zeta ^{4 q}+\mathcal{O}(\alpha^3)+\mathcal{O}(\kappa^6).\end{aligned}$$ One may substitute Eq. (\[psiF\]) and Eq. (\[phiPM\]) into Eq. (\[eompsiz\]) and get an expression for $F(z)$, and then converting it to the Sturm-Liouville equation form (\[SL\]), resulting in: $$\begin{aligned} \label{TTz} && T(z) \approx z^{2 \Delta -3} \Bigg\{1-z^4 \Big[ \alpha (z^4-1) \Big(\alpha \left(2 z^4-3\right)-1\Big)\Big]+\frac{ (4-2 q)^{2 q} (2 q-1)^{2-2 q}}{6(q-2)} \nonumber \\ &&\times \Bigg[1-z^{\frac{6 q}{2 q-1}} \Big(1+\alpha +\alpha ^2 \left(6 z^8+3\right)\Big) -\alpha z^8 \left(1+5 \alpha -4 \alpha z^4-2 (5 \alpha +1) z^{\frac{4-2q}{2 q-1}}\right)\Bigg]\kappa ^2\zeta ^{2 q}\nonumber \\ && + \Bigg[1+\alpha (5 \alpha +1) z^{\frac{12 q}{2 q-1}}+z^{\frac{6 q}{2 q-1}} \Big[\alpha \Big(\alpha \left(6 z^8-3\right)-1\Big)-1\Big]-2 \alpha ^2 z^{10} \left(3 z^{\frac{6}{2 q-1}}+z^2\right)\Bigg] \kappa ^4\zeta ^{4 q} \nonumber \\ && \times\frac{2^{4 q} (2-q)^{4 q-2} (1-2 q)^{4-4 q}}{36}\Bigg\}, \\ &&P(z)\approx T(z) \frac{\left(1-z^{\frac{2 (2-q)}{2 q-1}}\right)^2}{z^4 g(z)^2}+\mathcal{O}(\alpha^4) +\mathcal{O}(\kappa^6),\\ &&Q(z)\approx-T(z)\Bigg\{ \frac{\Delta_i}{z} \left( \frac{g'(z)}{g(z)}+\frac{\Delta_i -2}{z}\right) \Delta_i z^2 g(z)^2 -\frac{m^2}{z^4 g(z)}\Bigg\}+\mathcal{O}(\alpha^4) +\mathcal{O}(\kappa^6).\end{aligned}$$ Again, using Eq. (\[SL\]), with trial function $F(z)=1-az^2$, we obtain the minimum eigenvalue $\zeta^2_{\rm min}$ for the Power-Maxwell electrodynamic case. 
For example, with $q=3/4$, $\alpha=0.1$, $\kappa=0.05$ and $\tilde{m}^2=-3$, and using the iterative procedure, we get $$\begin{aligned} \label{zetamin} \zeta_{\kappa_1}^2=\frac{30 \left(1.0333 a^2-2.0068 a+1.3652\right)}{1.3539 a^2-4.1680 a+3.6937}.\end{aligned}$$ Varying $\zeta_{\kappa_1}^2$ with respect to $a$ to find the minimum value of $\zeta^2$, we obtain $\zeta_{\rm min}^2=9.50679$ at $a=0.5675$. Also for the case $q=5/4$, $\alpha=-0.19$, $\kappa=0.1$ and $\tilde{m}^2 =0$ we obtain $$\begin{aligned} \label{zetamin2} \zeta_{\kappa_2}^2=\frac{461.3339 a^2-968.8766 a+593.0286}{a^2-3.2657 a+3.0859},\end{aligned}$$ which attains its minimum $\zeta^2_{\rm min}=98.9682$ at $a=0.8909$. Then, we find the critical temperature from Eqs. (\[Temp\]), (\[Hf\]) and (\[phiPM\]) as $$\begin{aligned} \label{TPM} T_c= \frac{1}{4\pi} \Bigg[4-\frac{(4-2q)^{2q}}{3 (2q-1)^{2q-1}}\kappa^2 \zeta_{\rm min}^{2q}\Bigg]\Big(\frac{\rho}{\zeta_{\rm min}^{2q-1}}\Big)^{\frac{1}{3}}.\end{aligned}$$ Clearly, $T_{c}$ depends on the Power-Maxwell parameter $q$, the Gauss-Bonnet parameter $\alpha$ and the backreaction parameter $\kappa$. In Fig. \[fig1\], we present the reduced critical temperature of the phase transition for a $(3+1)$-dimensional holographic superconductor as a function of $q$ with different values of $\kappa$ and $\alpha$. For simplicity, we focus on the boundary condition for which $\psi_-=0$ and, as an example, we take $\tilde{m}^2=-3$ in these figures. In Fig. \[fig1\](a) we fix the backreaction parameter to $\kappa=0.05$ in order to investigate the behavior of the critical temperature as a function of the power parameter $q$ for three allowed values of the Gauss-Bonnet parameter. It clearly indicates that for any value of $\alpha$, decreasing $q$ makes the superconductor phase more accessible. Also, we find that in the presence of the backreaction of the matter fields on the metric, increasing the Gauss-Bonnet parameter $\alpha$ makes condensation harder and thus the critical temperature of the phase transition decreases. It is interesting that decreasing $\alpha$ from zero to negative values in the allowed range makes the phase transition to the superconductor phase easier for any value of the power parameter $q$. We also provide Fig. \[fig1\](b), fixing the Gauss-Bonnet parameter to $\alpha=0.1$, to study the behavior of the reduced critical temperature in terms of the power parameter $q$ for different values of the backreaction parameter $\kappa$. From this figure we see that for any value of $q$, increasing the backreaction of the matter fields on the background geometry, which corresponds to decreasing the charge of the scalar field, makes the phase transition harder in Einstein-Gauss-Bonnet gravity. We mention that in the allowed range of the power parameter there exist some unphysical regimes in which the critical temperature becomes negative. For example, by increasing the backreaction parameter we may obtain a negative $T_c$, which means that for some values of the power parameter there is no phase transition if the charge of the complex field is below some critical value. Here we disregard these regimes and work in regimes with positive temperatures. Finally, we present table \[tab3\] to compare the critical temperatures from the analytical Sturm-Liouville method, using the iterative procedure, with the numerical values obtained from the shooting method as explained in the previous section. We take different values of $\alpha$ and $\kappa$ in this table for three values of $q$ as examples (a short numerical cross-check of one of these entries is sketched below, just before the table). 
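The cross-check, written by us for illustration only, uses the worked example quoted above ($q=3/4$, $\alpha=0.1$, $\kappa=0.05$, $\tilde m^2=-3$): Eq. (\[zetamin\]) is minimised over $a$ and the result is inserted into Eq. (\[TPM\]), which reproduces $\zeta^2_{\rm min}\simeq 9.507$ and $T_c\simeq 0.2622\,\rho^{1/3}$, the $q=3/4$ analytical entry of Table \[tab3\].

```python
import numpy as np
from scipy.optimize import minimize_scalar

def zeta2_q34(a):
    # Eq. (zetamin): q = 3/4, alpha = 0.1, kappa = 0.05, m~^2 = -3, trial F(z) = 1 - a z^2
    num = 30.0 * (1.0333 * a**2 - 2.0068 * a + 1.3652)
    den = 1.3539 * a**2 - 4.1680 * a + 3.6937
    return num / den

def Tc_coefficient_PM(zeta2_min, kappa, q):
    """Coefficient c in T_c = c * rho^(1/3) from Eq. (TPM) for the Power-Maxwell case."""
    zeta = np.sqrt(zeta2_min)
    bracket = 4.0 - (4.0 - 2.0 * q)**(2.0 * q) / (3.0 * (2.0 * q - 1.0)**(2.0 * q - 1.0)) \
              * kappa**2 * zeta**(2.0 * q)
    return bracket / (4.0 * np.pi * zeta**((2.0 * q - 1.0) / 3.0))

res = minimize_scalar(zeta2_q34, bounds=(0.0, 1.0), method="bounded")
print(f"zeta^2_min = {res.fun:.4f} near a = {res.x:.3f}")           # ~9.5068 (text: 9.50679)
print(f"T_c ~ {Tc_coefficient_PM(res.fun, 0.05, 0.75):.4f} rho^(1/3)")  # ~0.2622
```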
------- --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- -- -- $q$ Analytical Numerical Analytical Numerical Analytical Numerical $3/4$ 0.2622 $\rho^{1/3}$ 0.2623 $\rho^{1/3}$ 0.2666 $\rho^{1/3}$ 0.2667 $\rho^{1/3}$ 0.2639 $\rho^{1/3}$ 0.2642 $\rho^{1/3}$ $1$ 0.1868 $\rho^{1/3}$ 0.1882 $\rho^{1/3}$ 0.1940$\rho^{1/3}$ 0.1959 $\rho^{1/3}$ 0.1879 $\rho^{1/3}$ 0.1908 $\rho^{1/3}$ $5/4$ 0.1134 $\rho^{1/3}$ 0.1168 $\rho^{1/3}$ 0.1208 $\rho^{1/3}$ 0.1250 $\rho^{1/3}$ 0.1124 $\rho^{1/3}$ 0.1177 $\rho^{1/3}$ ------- --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- -- -- : Comparison of analytical and numerical values of critical temperature for $\tilde{m}^2=-3$ for certain values of $\kappa$ and $\alpha$.[]{data-label="tab3"} Critical exponent {#CriE} ================= In this section, we propose to analytically calculate the critical exponent of the Gauss-Bonnet holographic superconductor with backreaction in the general Power-Maxwell electrodynamics case for all allowed values of $q$. While we are near the critical point, $<\mathcal{O}_i>$ is small enough, thus we substitute Eq. (\[psiF\]) into the Eq. (\[eomphiz\]) and by using the fact that in the expansion of $\chi$ Eq.  (\[chiexpand\]) the first term is proportional to $<\mathcal{O}_i>^2$, while we are near the critical temperature we neglect $\chi'(z)$ and arrive at $$\begin{aligned} \label{critphi} \phi'' -\left(\frac{5-4q}{2q-1}\right) \frac{1}{z} \phi' -\frac{2r_{+}^{2q-2\Delta_{i}-2} z^{2\Delta_{i}-4q}F^2 \phi'^{2-2q} \phi <\mathcal{O}_i>^2}{(-1)^{2q} q (2q-1) g(z)}=0,\end{aligned}$$ where $g(z)$ is defined as in Eq. (\[gz\]). Near the critical point, $T_c \approx T_{0}$, and inspired by Eq. (\[phiPM\]), we assume that Eq. (\[critphi\]) has the following solution $$\begin{aligned} \label{phisol} \phi(z)= AT_{c}(1-z^{\frac{4-2q}{2q-1}})-(A T_{c})^{3-2q}\left(\frac {r_{+}^{2q-2\Delta_{i}-2}<\mathcal{O}_{i}>^2}{(-1)^{2q} q(2q-1)} \right)\Xi (z),\end{aligned}$$ where $$\begin{aligned} A=\frac{4\pi\zeta_{\rm min}}{4-\frac{(4-2q)^{2q}}{3 (2q-1)^{2q-1}}\kappa^2 \zeta_{\rm min}^{2q}}.\end{aligned}$$ Substituting Eq. (\[phisol\]) into (\[critphi\]) and keeping terms up to $<\mathcal{O}_i>^2$, we reach $$\begin{aligned} \label{chi} \Xi''-\left(\frac{5-4q}{2q-1}\right)\frac{\Xi'}{z}- \frac{(\frac{2q-4}{2q-1})^{2-2q}z^{\eta}(1-z^{\frac{4-2q}{2q-1}})F(z)^2}{g(z) }=0,\end{aligned}$$ where $$\begin{aligned} \eta=2\Delta_i-4q+\left(\frac{5-4q}{2q-1}\right)(2-2q).\end{aligned}$$ This is a differential equation for $\Xi(z)$ independent of $r_+$, $r_{+_c}$ and $<\mathcal{O}_i>$. Therefore $\Xi(z)$ in any $z$ has a value independent of $T$, $T_{c}$ and order parameter $<\mathcal{O}_i>$. The boundary condition for $\phi$ given by Eq. (\[BC\]), in the $z$ coordinate, can be rewritten as $$\begin{aligned} \label{phiz22} \phi(z) = \mu\left(1 - \frac{\rho^{\frac{1}{2q-1}}}{\mu r_+^{\frac{4-2q}{2q-1} }}z^{\frac{4-2q}{2q-1} }\right),\end{aligned}$$ It is reliable while $z \approx 0$, independent of temperature and order parameter. Also near the critical temperature where $\psi$ is small, Eq. (\[phiPM\]) may be expressed as $$\begin{aligned} \phi_0(z)=\frac{\rho^{\frac{1}{2q-1}}}{r_+^{\frac{3}{2q-1}}} r_{+} \left(1-z^{\frac{4-2q}{2 q-1}}\right),\end{aligned}$$ Since it is valid for all values of $z$, we can equate the above expression with Eq. 
(\[phiz22\]) for $z\rightarrow 0$ to find $$\begin{aligned} \label{mu} \mu = \frac{\rho^{\frac{1}{2q-1}}}{r_+^{\frac{3}{2q-1}-1}},\end{aligned}$$ Since Eq. (\[phiz22\]) implies that at infinite boundary $z=0$, the gauge field is equal to chemical potential, i.e., $\phi(z=0)=\mu$. From Eqs. (\[Temp\]) and (\[Hf\]), we realize that $r_+ \propto T$ and it is obvious from Eq. (\[TPM\]) that $\rho \propto T_c^3$. Thus by using Eq. (\[mu\]), one can find $$\begin{aligned} \label{phizero} \phi(z=0)=\mu=A \frac{T_{c}^{\frac{3}{2q-1}}}{T^{\frac{4-2q}{2q-1}}}.\end{aligned}$$ Eq. (\[phisol\]) at $z=0$ is equal to it’s infinite boundary value given in Eq. (\[phizero\]). Equating Eqs. (\[phisol\]) and (\[phizero\]), we find $$\begin{aligned} \label{eqtc} AT_{c}-A \frac{T_{c}^{\frac{3}{2q-1}}}{T^{\frac{4-2q}{2q-1}}}= (A T_{c})^{3-2q}\left(\frac {r_{+}^{2q-2\Delta_i-2}<\mathcal{O}_i>^2}{(-1)^{2q} q(2q-1)} \right)\Xi (0),\end{aligned}$$ where $\Xi(0)$ is just a constant which can be calculated numerically from Eq. (\[chi\]) with boundary conditions $\Xi(1)=\Xi'(1)=1$. Using Eqs. (\[Temp\]) and (\[Hf\]) for replacing $r_+$ with $T$ in Eq. (\[eqtc\]) and then solving the resulting equation for $<\mathcal{O}_i>$, we get $$\begin{aligned} <\mathcal{O}_i>=\gamma T_c^{\Delta_i}\left(\frac{T}{T_c} \right)^{\Delta_i-q+1}\sqrt{\left(\frac{T_c}{T}\right)^ {\frac{4-2q}{2q-1}} \left[1-\left(\frac{T}{T_c}\right)^{\frac{4-2q}{2q-1}}\right]},\end{aligned}$$ where $\gamma$ is a constant independent of $T$ and $T_c$. Using the fact that $T \approx T_c$, we can rewrite $<\mathcal{O}_i>$ as $$\begin{aligned} \label{critexp} <\mathcal{O}_i> \approx \gamma T_c^{\Delta_i} \sqrt{1-\left(\frac{T}{T_{c}}\right)^{\frac{4-2q}{2q-1}}} \approx \gamma T_c^{\Delta_i} \sqrt{1-\left[1-\left(\frac{4-2q}{2q-1}\right)t\right]}\approx \gamma T_c^{\Delta_i}\sqrt{\left(\frac{4-2q}{2q-1}\right)t} , \nonumber \\\end{aligned}$$ where $t={(T_c-T)}/{T_c}$. Eq. (\[critexp\]) indicates that the critical exponent $\beta$ of the order parameter is $1/2$ and this result is valid both for $<\mathcal{O}_{-}>$ and $<\mathcal{O}_{+}>$. It is obvious that in the presence of backreaction this exponent for Gauss-Bonnet gravity with Power-Maxwell field remains unchanged which seems to be a universal exponent. Let us note that for $q=2$, the expectation value of the condensation operator vanishes, which means there is no phase transition in upper bound of $q$. Conclusion and discussion {#Con} ========================= Analytically and based on Sturm-Liouville eigenvalue problem, we have investigated the properties of $(3+1)$-dimensional $s$-wave holographic superconductors in the background of five dimensional Gauss-Bonnet-AdS black holes with Power-Maxwell electrodynamics. We have considered the case in which the gauge and scalar fields back react on the background geometry. We find out the relation between critical temperature of phase transition and charge density is still $T_c \propto \rho^{1/3}$. Using the analytical Sturm-Liouvill method, we have calculated the proportional constant between the critical temperature and the charge density for all allowed values of the power parameter $q$, different values of the Gauss-Bonnet coupling constant $\alpha$, and backreaction parameter $\kappa$. We realized that decreasing $q$ from Maxwell case ($q=1$) to it’s lower bound $(q=1/2)$ increases the critical temperature, regardless of the values of $\alpha$ and $\kappa$. 
Besides, for a fixed values of $q$ and $\kappa$, critical temperature increases with decreasing the Gauss-Bonnet coefficient $\alpha$. This means that, increasing $q$ and $\alpha$ will decrease the critical condensation of the scalar field and make it harder to form. Also, we observed that taking backreaction into account, decreases the critical temperature regardless of the values of the other parameters. We have confirmed these analytical results by providing the numerical calculations based on the shooting method. Finally, our investigation of critical exponent indicates that the critical exponent $\beta$ of the superconducting phase transition for the five dimensional Power-Maxwell holographic superconductor with backreaction has the mean field value $1/2$ which seems to be a universal constant. [99]{} S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, *Building a Holographic Superconductor*, Phys. Rev. Lett. [**101**]{}, 031601 (2008), \[arXiv:0803.3295\]. S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, *Holographic Superconductors*, JHEP [**12**]{}, 015 (2008), \[arXiv:0810.1563\]. J. Maldacena, *The Large N Limit of Superconformal Field Theories and Supergravity*, Adv. Theor. Math. Phys. **2**, 231 (1998), \[arXiv:9711200\];\ S.S. Gubser, I.R. Klebanov, and A.M. Polyakov, *Gauge Theory Correlators from Non-Critical String Theory*, Phys. Lett. B **428**, 105 (1998), \[arXiv:9802109\];\ E. Witten, *Anti De Sitter Space And Holography*, Adv. Theor. Math. Phys. **2**, 253 (1998), \[arXiv:9802150\]. S.S. Gubser, *Breaking an Abelian gauge symmetry near a black hole horizon*, Phys. Rev. D **78**, 065034 (2008), \[arXiv:0801.2977\]. G. T. Horowitz, *Introduction to Holographic Superconductors,* Lect. Notes Phys. [**828**]{}, 313 (2011). D. Musso, *Introductory notes on holographic superconductors*, \[arXiv:1401.1504\]. R. G. Cai, Li Li, Li-Fang Li, Run-Qiu Yang, *Introduction to Holographic Superconductor Models*, Sci China Phys. Mech. Astron. [**58**]{}, 060401 (2015), \[arXiv:1502.00437\]. X. H. Ge, B. Wang, S. F. Wu, and G. H. Yang, *Analytical study on holographic superconductors in external magnetic field*, JHEP [**08**]{}, 108 (2010), \[arXiv:1002.4901\]. R. Banerjee, S. Gangopadhyay, D. Roychowdhury, and A. Lala, *Holographic s-wave condensate with non-linear electrodynamics: A nontrivial boundary value problem*, Phys. Rev. D [**87**]{}, 104001 (2013), \[arXiv:1208.5902v3\]. D. Momeni, M. Raza, and R. Myrzakulov, *More on Superconductors via Gauge/Gravity Duality with Nonlinear Maxwell Field*, Journal of Gravity, [**2013**]{}, Article ID 782512. C. M. Chen and M. F. Wu, *An analytic analysis of phase transitions in holographic superconductors*, Prog. Theor. Phys. [**126**]{}, 387 (2011), \[arXiv:1103.5130\]. H. B. Zeng, X. Gao, Y. Jiang, and H. S. Zong, *Analytical computation of critical exponents in several holographic superconductors*, JHEP [**1105**]{}, 002 (2011), \[arXiv:1012.5564\]. R. G. Cai, H. F Li, H.Q. Zhang, *Analytical studies on holographic insulator/superconductor phase transitions*, Phys. Rev. D [**83**]{}, 126007 (2011), \[arXiv:1103.5568\];\ R. G. Cai, L. Li, L. F. Li, *A holographic p-wave superconductor model*, JHEP [**1401**]{}, 032 (2014), \[ arXiv:1309.4877\]. Q. Pan, B. Wang, E. Papantonopoulos, J. Oliveira, A. B. Pavan, *Holographic Superconductors with various condensates in Einstein-Gauss-Bonnet gravity*, Phys. Rev. D [**81**]{}, 106007 (2010), \[arXiv:0912.2475\]. 
Q Pan, B Wang, *General holographic superconductor models with Gauss-Bonnet corrections*, Phys. Lett. B [**693**]{} 159 (2010), \[arXiv:1005.4743\]. H. F. Li, R. G. Cai, H. Q. Zhang, *Analytical studies on holographic superconductors in Gauss-Bonnet gravity*, JHEP [**04**]{}, 028 (2011), \[arXiv:1103.2833\]. L. Barclay, R. Gregory, S. Kanno, P. Sutcliffe, *Gauss-Bonnet holographic superconductors*, JHEP [**1012**]{}, 029 (2010), \[arXiv:1009.1991\]. R. G. Cai, Z.Y. Nie, H.Q. Zhang, *Holographic p-wave superconductors from Gauss-Bonnet gravity*, Phys. Rev. D [**82**]{}, 066007 (2010), \[arXiv:1007.3321\];\ R. G. Cai, Z. Y. Nie, H.Q. Zhang, *Holographic Phase Transitions of P-wave Superconductors in Gauss-Bonnet Gravity with Back-reaction*, Phys. Rev. D [**83**]{}, 066013 (2011) \[ arXiv:1012.5559\];\ Q. Pan, J. Jing, B. Wang, *Analytical investigation of the phase transition between holographic insulator and superconductor in Gauss-Bonnet gravity*, JHEP [**11**]{}, 088 (2011), \[arXiv:1105.6153\]. Z Zhao, QPan, S Chen, J Jing, *Notes on holographic superconductor models with the nonlinear electrodynamics*, Nucl. Phys. B [**871**]{} 98 (2013), \[arXiv:1212.6693\];\ J Jing, Q Pan, S Chen, *Holographic superconductor/insulator transition with logarithmic electromagnetic field in Gauss-Bonnet gravity*, Phys. Lett. B [**716**]{}, 385 (2012), \[arXiv:1209.0893\]. A. Sheykhi, F. Shaker, *Effects of backreaction and exponential nonlinear electrodynamics on the holographic superconductors*, \[arXiv:1606.04364\];\ A. Sheykhi, F. Shamsi, *Holographic Superconductors with Logarithmic Nonlinear Electrodynamics in an External Magnetic Field*, \[arXiv:1603.02678\]. C. Lai, Q. Pan, J. Jing, Y. Wang, *On analytical study of holographic superconductors with Born-Infeld electrodynamics*, Phys. Lett. B **749**, 437 (2015), \[arXiv:1508.05926\] ;\ D. Ghorai, S. Gangopadhyay, *Analytic study of higher dimensional holographic superconductors in Born-Infeld electrodynamics away from the probe limit*, \[arXiv:1511.02444\];\ P. Chaturvedi, G. Sengupta, *p-wave holographic superconductors from Born-Infeld black holes*, JHEP [**1504**]{}, 001 (2015), \[arXiv:1501.06998\];\ S. Gangopadhyay and D. Roychowdhury, *Analytic study of properties of holographic superconductors in Born-Infeld electrodynamics*, JHEP [**05**]{}, 156 (2012), \[arXiv:1201.6520\]. A. Sheykhi, F. Shaker, *Analytical study of holographic superconductor in Born-Infeld electrodynamics with backreaction*, Phys. Lett. B [**754**]{}, 281 (2016). W. Yao, J. Jing, *Analytical study on holographic superconductors for Born-Infeld electrodynamics in Gauss-Bonnet gravity with backreactions*, JHEP **05**, 101 (2013), \[arXiv:1306.0064\];\ J. Jing, L. Wang, Q. Pan, S. Chen, *Holographic Superconductors in Gauss-Bonnet gravity with Born-Infeld electrodynamics*, Phys. Rev. D [**83**]{}, 066010 (2011), \[arXiv:1012.0644\]. S. Dey, A. Lala, *Holographic s-wave condensation and Meissner-like effect in Gauss-Bonnet gravity with various non-linear corrections*, Annals of Physics [**354**]{}, 165 (2015), \[arXiv:1306.5137\];\ R. Gregory, S. Kanno and J. Soda, *Holographic superconductors with higher curvature corrections*, JHEP [**0910**]{}, 010 (2009) \[arXiv:0907.3203\]. J. Jing, Q. Pan, S. Chen, *Holographic superconductors with Power-Maxwell field*, JHEP [**11**]{}, 045 (2011), \[ arXiv:1106.5181\]. J. Jing, L. Jiang and Q. Pan, *Holographic superconductors for the Power-Maxwell field with backreactions*, Class. Quantum Grav. [**33**]{}, 025001 (2016). A. Sheykhi, H. 
R. Salahi and A. Montakhab, *Analytical and numerical study of Gauss-Bonnet holographic superconductors with Power-Maxwell field,* JHEP [**04**]{}, 058 (2016). M. Hassaine and C. Martinez, *Higher-dimensional black holes with a conformally invariant Maxwell source*, Phys. Rev. D [**75**]{}, 027502 (2007). M. Hassaine and C. Martinez, *Higher-dimensional charged black holes solutions with a nonlinear electrodynamics source*, Class. Quant. Grav. [**25**]{}, 195023 (2008). C. P. Herzog, *Analytic holographic superconductor*, Phys. Rev. D [**81**]{}, 126009 (2010), \[arXiv:1003.3278\] \[hep-th\]. S Kanno, *A note on Gauss-Bonnet holographic superconductors*, Class. Quant. Grav. [**28**]{}, 127001 (2011), \[arXiv:1103.5022\]. S. S. Gubser and S. S. Pufu. *The gravity dual of a p-wave superconductor*, JHEP [**11**]{}, 033 (2008), \[arXiv:0805.2960\]. R. G. Cai, *Gauss-Bonnet black holes in AdS spaces*, Phys. Rev. D [**65**]{}, 084014 (2002), \[arXiv:hep-th/0109133\]. G. Siopsis and J. Therrien, *Analytic calculation of properties of holographic superconductors*, JHEP [**05**]{}, 013 (2010), \[arXive:1201.6520\]. [^1]: [email protected] [^2]: [email protected] [^3]: [email protected]
--- abstract: 'With LOFAR beginning operation in 2008 there is huge potential for studying pulsars with high signal to noise at low frequencies. We present results of observations made with the Westerbork Synthesis Radio Telescope to revisit, with modern technology, this frequency range. Coherently dedispersed profiles of millisecond pulsars obtained simultaneously between 115-175 MHz are presented. We consider the detections and non-detections of 14 MSPs in light of previous observations and the fluxes, dispersion measures and spectral indices of these pulsars. The excellent prospects for LOFAR finding new MSPs and studying the existing systems are then discussed in light of these results.' author: - 'B. W. Stappers' - 'R. Karuppusamy' - 'J. W. T. Hessels' nocite: '[@kl96]' title: Low Frequency Observations of Millisecond Pulsars with the WSRT --- [ address=[Stichting ASTRON, Postbus 2, 7990 AA Dwingeloo, The Netherlands]{}, altaddress=[Astronomical Institute “Anton Pannekoek”, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands]{} ]{} [ address=[Stichting ASTRON, Postbus 2, 7990 AA Dwingeloo, The Netherlands]{}, altaddress=[Astronomical Institute “Anton Pannekoek”, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands]{} ]{} [ address=[Astronomical Institute “Anton Pannekoek”, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands]{} ]{} Introduction ============ There are a large number of new radio facilities currently in the planning or construction phase and it is important that we consider the impact of these telescopes on all aspects of pulsar research. The first of these facilities to come on line will be the low frequency telescopes like LOFAR, LWA and MWA, which from now on we will collectively refer to as low frequency radio arrays (LRAs). While these instruments differ somewhat in design and frequency range, they all work at frequencies below 300 MHz and will be used to study the existing pulsar population and to discover new pulsars. It is therefore appropriate to consider what we might expect from pulsars in this frequency range. As discussed elsewhere in this volume by van Leeuwen & Stappers \[\] there is huge potential for finding new pulsars with LOFAR for example. Their study didn’t consider the millisecond pulsars (MSPs) as less is presently known about their low-frequency properties. The main issues governing the potential the LRAs have for discovering new MSPs are: the steepness of the radio spectrum and whether it turns over in this frequency range, and the magnitude of the scattering in the interstellar medium. In the late nineties and early in this decade Kuzmin & Losovsky [@kl99; @kl01] published the first papers which took a statistical look at the MSPs in the frequency range of interest here. They first presented a number of pulse profiles at frequencies near 100 MHz and, by comparing them with profiles at higher frequencies, they concluded that unlike the normal radio pulsar population there was little evidence for broadening of the pulse profile as a function of frequency [@kl99]. They then went on to show that the radio spectral index of the MSPs seemed not to show a turnover at frequencies at or near 120 MHz as do the majority of higher magnetic field pulsars [@kl01]. This latter result, combined with the relatively large number of MSPs they detected, indicated that the low frequency range could potentially be a valuable one for finding MSPs. 
------------ ------- ------- ----- ----- --- --- J0034-0534 1.87 13.7 250 120 y y J0218+4232 2.33 61.2 270 150 n n J0613-0200 3.06 38.8 240 100 n n J0621+1002 28.85 36.6 50 25 n n J1012+5307 5.25 9.0 30 15 y y J1022+1001 16.45 10.2 90 40 y y J1024-0719 5.16 6.4 200 100 y n B1257+12 6.21 10.1 150 50 y y J1713+0747 4.57 15.9 250 100 y n J1744-1134 4.09 3.1 220 100 n y J1911-1114 3.62 30.9 260 130 y y J2051-0827 4.51 20.7 250 100 n n J2145-0750 16.05 9.0 480 120 n y B1957+20 1.61 29.11 y ------------ ------- ------- ----- ----- --- --- : Pulsars observed so far with the LFFEs at the WSRT. The sixth column indicates whether a 100-MHz profile is presented either in [@kl99] or on the EPN database. PSR B1957+20 was not previously observed at these frequencies. \[stappers:results-table\] Kuzmin & Losovsky had to use very narrow bandwidths (32 $\times$ 5 kHz) to obtain their results due to the deleterious effects of dispersion in the interstellar medium and they had limited time resolution (at best 0.64 – 0.128 ms). We therefore decided to obtain higher resolution pulse profiles from a number of these MSPs in order to better determine the pulse profile changes and the influence of scattering. We first discuss the observations and present our results. We then discuss the implications for the properties of MSPs and scattering in the interstellar medium and for their future study and detection with the LRAs. Observations and Data Reduction =============================== We observed a total of 14 pulsars (see Table \[stappers:results-table\]) using the low frequency front ends (LFFEs) on the Westerbork Synthesis Radio Telescope (WSRT). The LFFEs have good sensitivity in the frequency range 115-180 MHz, where the lower limit is defined by the FM band and the upper limit by the response of the feeds. The band does contain some interference which is especially troublesome because the data is only sampled with 2 bits. We therefore selected eight clean bands each of 2.5 MHz bandwidth distributed throughout the band at 116.75, 130, 139.75, 142.25, 147.5, 156, 163.5 and 173.75 MHz. The data were oversampled at 40 MHz, decimated in real-time to 2.5 MHz bandwidth and then baseband recorded using the PuMa II pulsar backend. The data were reduced using the open source software package, DSP for Pulsars, DSPSR[^1]. A coherent filterbank of either 32 or 64 channels in each of the 8 bands was formed offline. The data were coherently dedispersed in each of the 32 or 64 channels, leading to a final time resolution of 25.6 or 51.2 $\mu$s. The data was also folded offline with an average pulse profile being formed for every ten seconds of data. These time-frequency cubes for each ten seconds of data were then checked for interference using a median filtering technique based on the rms noise in each frequency channel. The cleaned ten-second average profiles from all 8 bands were then summed to form a profile for each band and these bands were subsequently combined. It soon became evident that the combination of a wide range of frequencies and the low central frequencies meant a very accurate, epoch specific, determination of the dispersion measure (DM) was required in order to properly combine the 8 bands. Each data set was therefore optimised for the best DM, by maximising the signal-to-noise ratio of the pulse profile, and the inter-channel and inter-band dispersion correction was redone with the new best DM value. 
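The DM refinement described above can be illustrated with a toy version of the search: shift each frequency channel by the cold-plasma delay for a trial DM, sum the channels, and keep the DM that maximises the profile signal-to-noise. The sketch below is a schematic stand-alone illustration written by us; it is not the PuMa II/DSPSR pipeline, and the folded data array, channel frequencies and S/N definition are placeholders. It also shows why corrections at the $10^{-3}$ level matter: a DM error of only $10^{-3}$ drifts the pulse by about 0.17 ms between the lowest (116.75 MHz) and highest (173.75 MHz) bands, a sizeable fraction of a millisecond-pulsar period and far larger than the 25.6 $\mu$s time resolution.

```python
import numpy as np

K_DM = 4.148808e3  # cold-plasma dispersion constant in s MHz^2 pc^-1 cm^3

def dm_delay(dm, freq_mhz, ref_mhz):
    """Dispersive delay (s) of channel `freq_mhz` relative to `ref_mhz` for a given DM."""
    return K_DM * dm * (freq_mhz**-2 - ref_mhz**-2)

def best_dm(folded, freqs_mhz, period_s, dm_trials):
    """Brute-force DM refinement on a folded (channel x phase-bin) array.

    For each trial DM offset the channels are rotated by the residual delay and summed;
    the trial that maximises a simple peak-to-rms S/N is returned.
    """
    nchan, nbin = folded.shape
    ref = freqs_mhz.max()
    best = (-np.inf, None)
    for dm in dm_trials:
        shifts = np.round(dm_delay(dm, freqs_mhz, ref) / period_s * nbin).astype(int)
        prof = np.zeros(nbin)
        for ich in range(nchan):
            prof += np.roll(folded[ich], -shifts[ich])
        snr = (prof.max() - np.median(prof)) / (np.std(prof) + 1e-12)
        if snr > best[0]:
            best = (snr, dm)
    return best  # (S/N, best residual DM offset)

# Residual drift across the 116.75-173.75 MHz span for a DM error of 1e-3:
print(dm_delay(1e-3, 116.75, 173.75))  # ~1.7e-4 s, i.e. ~0.17 ms
```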
Typically the DM values needed to be changed at the $10^{-3} - 10^{-4}$ level, indicating the importance of accurate DM determinations for these low frequency observations. It will be interesting to see if follow-up observations at these frequencies also indicate small differences in the DM and thus whether a small DM optimisation step will be required for any analysis at these frequencies. The potential for using these accurate determinations of the DM for other applications like high precision pulsar timing at higher frequencies needs to be investigated. Results and Discussion ====================== We present the results of the observations with the LFFEs in Table \[stappers:results-table\]. Of the 14 pulsars observed a clear detection of a pulse profile was made in 8 cases and of the 13 pulsars which overlapped with the sample of KL99/KL01, 7 were detected. In Figure \[stappers:kuzmincf-figure\] the detections are plotted as a function of DM and flux in an attempt to determine whether there is a common reason for detection or non-detection. All the sources detected by KL99/KL01 are plotted with crosses, while those observed by us are indicated by the open squares. The closed squares correspond to our detections. Before considering the sources as a whole we’ll discuss a couple of individual cases. \[stappers:kuzmincf-figure\] The pulsar with the worst period and DM combination and thus the one that is most likely to suffer from scattering is PSR J0218+4232 (top-leftmost point in the left hand plot of figure \[stappers:kuzmincf-figure\]). Despite this KL01 claim to have detected it. It is unclear whether they detected pulsations as only a flux is quoted in their paper and it is well known that it is a bright point source all the way down to 30 MHz. However we do not detect it in our observations and also do not detect it in deep observations at 250 MHz. This suggests that this source is not seen at these frequencies due to scattering. PSR B1957+20, like PSR J0218+4232, has one of the worst combinations of period and DM and yet we detected it with very high signal-to-noise. This is the only pulsar in our sample that was not observed by KL99/KL01 and this is probably because they had insufficient time resolution. Not only did we detect the source but the signal-to-noise ratio was sufficiently high that it was detected in every 10-second interval in each of the eight bands. Moreover the observations took place just as the pulsar was coming out of eclipse. The wide fractional frequency range simultaneously spanned by these data will provide an exciting opportunity to study the properties of the eclipses in this system. It is apparent from Figure \[stappers:kuzmincf-figure\] that there is no clear relationship between the detection of a source in our observations and any combination of period and DM, nor with the quoted 100 MHz flux. However the majority of the non-detections, although not all, are at the lower fluxes. Seven pulsars in our sample have published profiles either in KL99 or in the EPN database[^2] and of those we detect five. It remains unclear why the other sources were not detected and if all the detections by KL99/KL01 are secure then it points to some time variable phenomenon. It is unlikely to be due to scintillation, as the scintillation bandwidth decreases rapidly with frequency and the relatively wide bands used here mean that there are many scintles in each band. The average flux should therefore remain relatively constant. 
At these frequencies interference is always a concern and it is certainly variable on the timescales of our observations. However all of the sources we detected were seen in the individual 2.5 MHz bands, which are widely separated in frequency, and therefore one would not expect them all to be affected by interference simultaneously. Moreover inspection of the data where pulsars were not seen does not show noticeably worse interference conditions than when pulsars were detected. As discussed above, for the pulsars that we have detected, corrections had to be made to the DM in order not to have a broadened profile. Changes in DM are therefore another variable that might affect our ability to detect the MSPs at low frequencies. However, in order for the profile to be smeared by a large fraction of the pulse period across the 2.5 MHz bandwidth of each observing band, DM changes of the order of 0.1 are required. This is two to three orders of magnitude more than were detected above and is much larger than has been measured for any MSP [@yhc+07]. It is therefore unlikely that this is the reason for the lack of detection of some of these sources. While not thought to be highly variable, one of the main reasons why one might not expect to be able to detect some MSPs at these frequencies is scattering in the interstellar medium. The combination of the short rotational periods and the extreme frequency dependence of scattering means that the pulse profiles may be scattered by more than a pulse period and thus it is no longer possible to detect them as pulsed sources. While there does appear to be an empirical relation between the DM and scattering (e.g. [@cl01; @bcc+04]), the more than a couple of orders of magnitude variation about the relation means that it has little predictive power (e.g. see Figure 4 of [@bcc+04]). This, combined with the results of KL99/KL01 and this work, shows that it is basically only possible to determine which MSPs will be visible by actually observing them. PSR J0034-0534 and PSR J1713+0747 ================================= PSRs J0034-0534 and J1713+0747 are a pair of pulsars which illustrate the unpredictability of the degree of scattering. Both pulsars are claimed to be detected by KL99/KL01 and the pulse profiles can be found in the EPN database. A simple comparison of the two profiles suggests that they were both detected equally well, apart from the time resolution being better for PSR J1713+0747. The fluxes quoted for the two sources at these frequencies are also very similar. However, we easily detect PSR J0034-0534 and do not see PSR J1713+0747 at all. What could be the reason why we see one of these pulsars and not the other? As well as the fluxes being similar they also have very similar DMs; however, we have already discussed the fact that there is not a very robust correlation between DM and the degree of scattering, and so it may be that the profile of PSR J1713+0747 is too scattered to be detected (assuming also, with no explicit reason, that the detection by KL99 is not real). To test this we plot in Figure \[stappers:0034-1713\] the frequency evolution of the average pulse profiles of the two pulsars. The comparison between the 328 and 116 MHz pulse profiles of PSR J0034-0534 shows a small degree of broadening of the pulse profile due to scattering; however, it is still clearly detected at the lowest frequencies. 
The situation is a little bit more complicated for PSR J1713+0747, where at the higher frequencies there is some evidence for a broadening of the pulse profile but there is also, what appears to be, the development of a new component on the trailing edge of the profile at 400 MHz. It is therefore unclear whether scattering is the reason why the pulsar is undetected at these frequencies. The pulse profile of PSR J2145-0750 =================================== PSR J2145-0750 is presently the brightest MSP observed at frequencies below 200 MHz. Kuzmin & Losovsky (1996; KL96) first detected the source near 100 MHz and they compared their observed profile with those obtained at higher frequencies. As a result of Gaussian fitting to the profiles at 102, 430 and 1520 MHz they find that the profile apparently broadens at higher frequencies. This is extremely unusual as the traditional view is that the profiles become narrower at higher observing frequencies, which is thought to indicate that the higher frequency emission comes from deeper in the, predominantly dipolar, magnetosphere. The authors then suggest that this may be interpreted as being evidence for a magnetic field where the quadrupole terms are also important. Multipole contributions might be expected in MSPs because they have much more compact magnetospheres and this was some of the first evidence that this might be observable. In Figure \[stappers:2145\] we present observations made at the WSRT using the PuMa II backend at frequencies of 150, 350 and 1380 MHz. In all cases the data were coherently dedispersed and the time resolution was at worst 25.6 $\mu$s. Also shown is the 102 MHz profile from KL96. One can see the advantages of the coherent dedispersion in our profile at 150 MHz which is significantly sharper. It is clear that the leading component (component I in KL96) has undergone significant frequency evolution and also that the peak separation between it and the second large peak is reduced. However the shift is smaller than seen by KL96 as their profile seems to be slightly distorted, perhaps due to dispersive smearing. Moreover it is not clear that this is still component I as it has such a different shape. It now more resembles the trailing component of the high frequency profile. A multipole interpretation would require further observations at smaller frequency intervals to better track the component evolution. Conclusions =========== We have used the LFFEs on the WSRT to make low frequency observations of 14 MSPs and successfully detected 8 of them. Using the PuMa II backend and coherent dedispersion we are able to get profiles with significantly higher time resolution than was previously possible. Previous work has suggested that we should have been able to detect the majority of these sources. We consider whether our failure to detect them is due to scattering or flux limits and find no clear factor which governs our ability to detect them. Taking the claimed detections in the literature at face value would therefore suggest that there is some time dependent effect which is lowering their flux. We consider diffractive scintillation, interference, and DM variations and find that none of these can plausibly explain the non-detections. One possible explanation is refractive scintillation which causes flux modulation on long timescales. 
However, further investigation into the long-term flux stability of the sources would be required to confirm this effect, as the modulation due to refractive scintillation is expected to be low [@ssh+00]. What do these observations tell us about the prospects for observing and detecting MSPs with the LRAs? For LOFAR the sensitivity in the frequency range discussed above is expected to be at least 20 times better than that of the WSRT-LFFE combination. This means that we can expect to have the sensitivity to discover new MSPs with LOFAR. We need better statistics before being able to determine any sort of MSP luminosity function in this frequency range, but the initial results are very promising. It is also apparent that it will not be possible to determine a priori the DM out to which MSPs might be detected. The huge spread around the DM-scattering relationship precludes that, and so a search out to DMs of at least 100 will be a necessary component of searches for MSPs. The greatly improved sensitivity of the LRAs over the existing telescopes, in general, means that they will also be able to study the single pulses from a large sample of MSPs for the first time. This will be essential for determining whether there are any changes in the single pulse properties of MSPs, with their significantly smaller magnetic fields, compared to the normal pulsar population. That is to say, do any of the known single pulse properties depend on rotation rate, magnetic field strength, or even neutron star surface temperature? The LRAs have the potential not only to increase the number of MSPs known but also to study their emission properties in unprecedented detail. We would like to thank the staff of the WSRT for assistance with obtaining the data used in this paper. J.W.T.H. thanks NSERC and the Canadian Space Agency for a postdoctoral fellowship and supplement, respectively. N. E. Kassim and T. J. W. Lazio, *Astrophys. J.* **527**, L101–L104 (1999). A. D. Kuzmin and B. Y. Losovsky, *Astron. Astrophys.* **368**, 230–238 (2001). X. P. You, G. Hobbs, W. A. Coles, R. N. Manchester, R. Edwards, M. Bailes, J. Sarkissian, J. P. W. Verbiest, W. van Straten, A. Hotan, S. Ord, F. Jenet, N. D. R. Bhat, and A. Teoh, *Mon. Not. R. Astron. Soc.* **378**, 493–506 (2007). J. M. Cordes and T. J. W. Lazio, *Astrophys. J.* **549**, 997–1010 (2001). N. D. R. Bhat, J. M. Cordes, F. Camilo, D. J. Nice, and D. R. Lorimer, *Astrophys. J.* **605**, 759–783 (2004). A. D. Kuzmin and B. Y. Losovskii, *Astron. Astrophys.* **308**, 91–96 (1996). D. R. Stinebring, T. V. Smirnova, T. H. Hankins, J. Hovis, V. Kaspi, J. Kempner, E. Meyers, and D. J. Nice, *Astrophys. J.* **539**, 300–316 (2000). [^1]: http://sourceforge.net/projects/dspsr/ [^2]: http://www.mpifr-bonn.mpg.de/div/pulsar/data/
--- abstract: 'We provide a sufficient condition for the numerical range of a nilpotent matrix $N$ to be circular in terms of the absence of cycles in an undirected graph associated with $N$. We prove that if we add to this matrix $N$ a diagonal real matrix $D$, the matrix $D+N$ has convex numerical range. For $3 \times 3$ nilpotent matrices, we further strengthen our results and obtain necessary and sufficient conditions for circularity and convexity of the numerical range.' address: - | Luís Carvalho, ISCTE - Lisbon University Institute\ Av. das Forças Armadas\ 1649-026, Lisbon\ Portugal - | Cristina Diogo, ISCTE - Lisbon University Institute\ Av. das Forças Armadas\ 1649-026, Lisbon\ Portugal\ and\ Center for Mathematical Analysis, Geometry, and Dynamical Systems\ Mathematics Department,\ Instituto Superior Técnico, Universidade de Lisboa\ Av. Rovisco Pais, 1049-001 Lisboa, Portugal - | Sérgio Mendes, ISCTE - Lisbon University Institute\ Av. das Forças Armadas\ 1649-026, Lisbon\ Portugal\ and Centro de Matemática e Aplicações\ Universidade da Beira Interior\ Rua Marquês d’Ávila e Bolama\ 6201-001, Covilhã author: - Luís Carvalho - Cristina Diogo - Sérgio Mendes title: On the convexity and circularity of the numerical range of nilpotent quaternionic matrices --- [^1] Introduction ============ Let ${\mathbb{H}}$ denote the Hamilton quaternions and let $A$ be an $n\times n$ matrix with quaternionic entries. The quaternionic numerical range of $A$, denoted $W(A)$, was introduced in 1951 in Kippenhahn’s seminal article [@Ki] as an analogue of the long-established complex numerical range (see [@R] for an account on quaternionic numerical ranges). Specifically, $W(A)$ is the subset of ${\mathbb{H}}$ whose elements have the form ${\boldsymbol{x}}^*A{\boldsymbol{x}}$, where ${\boldsymbol{x}}$ runs over the unit sphere of ${\mathbb{H}}^n$. Due to the failure of the Toeplitz-Hausdorff theorem in the quaternionic setting, the convexity of $W(A)$ has been studied by several authors. In [@Ki], Kippenhahn introduced the Bild of $A$, denoted $B(A)$, as the intersection of $W(A)$ with the complex plane and studied its convexity. The Bild is indeed a planar substitute of $W(A)$ in the sense that every element of $W(A)$ is equivalent to an element in $W(A)\cap{\mathbb{C}}$. The first remarkable result on convexity is due to Au-Yeung, who proved in 1984 that $W(A)$ is convex if, and only if, the projection of $W(A)$ over ${\mathbb{R}}$ (resp., ${\mathbb{C}}$) equals the real (resp., the complex) elements in $W(A)$, see [@Ye1 theorems 2 and 3]. In that same paper, the author gives necessary and sufficient conditions on the eigenvalues of a normal matrix $A$ for $W(A)$ to be convex. The convexity of the Bild, already an issue in [@Ki], was established for normal matrices in 1994 by So, Thompson and Zhang. They proved in [@STZ p. 192] that the closed upper half plane part of the Bild (the upper Bild $B^+(A)$) of a normal matrix $A$ is the convex hull of eigenvalues and cone vertices. Later on, the proof was simplified by Au-Yeung [@Ye2]. The general case was settled by So and Thompson in 1996. In [@ST theorem 15.2] they proved that for any matrix $A$, the intersection of $W(A)$ with the closed upper half plane is always convex. Another problem that attracted much attention in the complex setting is the shape of the numerical range. In [@ST theorem 17.1], So and Thompson characterized the numerical range of $2\times 2$ quaternionic matrices, the analogue of the elliptical range theorem.
However, compared with the complex case, the shape of the numerical range of quaternionic matrices seems to have received less attention. In this article we study the convexity and shape of the numerical range for nilpotent quaternionic matrices. To be more specific, we determine under what conditions $W(A)$ has circular shape or, at least, is convex. In section 2 of this article we recall some definitions and fix notation. In section 3 we deal with the circularity of the numerical range. Theorem \[center at origin\] shows that if the numerical range of a nilpotent matrix is a disk, its center must be located at the origin. This is the quaternionic analogue of [@MM proposition 1]. We conclude this section with theorem \[tree circular disk\] which says that a sufficient condition for the numerical range of a nilpotent matrix $A$ to be a disk is that the associated graph of $A$ is a tree. This condition is not necessary, as example \[ex\_4x4realmatrix\] shows. In section 4 we extend the results of the previous section. Every matrix $A\in\mathcal{M}_n({\mathbb{H}})$ is, up to unitary equivalence, upper triangular and every upper triangular matrix decomposes as the sum of a diagonal and a nilpotent matrix. The main result of this section is theorem \[W\_is\_smaller\], where it is proved that when the diagonal part is real and the nilpotent part is a tree, then the numerical range is a union of disks. To reach this result we apply Berge’s maximum theorem, a technique not often seen in the literature. Corollary \[prop\_diag+shiftlike\_convex\] proves that this class of matrices has convex numerical range. We end the section by providing an example of one of these matrices for which the union of disks that makes up its numerical range is in fact an ellipse. In section 5, we focus on the convexity and circularity of the numerical range of $3\times 3$ nilpotent matrices. Theorem \[NR\_3x3\_disk\] says that a necessary and sufficient condition for the numerical range of $A\in\mathcal{M}_3({\mathbb{H}})$ to be a disk with center at the origin is that $A$ is cycle-free. On the other hand, theorem \[NSC 3x3 convex\] gives a necessary and sufficient condition for the same class of matrices to have convex numerical range. Specifically, $W(A)$ is convex if, and only if, $a_{13}^*a_{12}a_{23}\in{\mathbb{R}}$. In the complex setting, the link with our work is provided by theorem 1 of Chien and Tam [@CT]. Preliminaries and notation {#section_prelims} ========================== In this section we present some well-known facts about quaternions and fix some notation. The quaternionic skew-field ${\mathbb{H}}$ is an algebra of rank $4$ over ${\mathbb{R}}$ with basis $\{1, i, j, k\}$. The product in ${\mathbb{H}}$ is given by $i^2=j^2=k^2=ijk=-1$. Denote the pure quaternions by ${\mathbb{P}}=\mathrm{span}_{{\mathbb{R}}}\,\{i,j,k\}$. For any $q=a_0+a_1i+a_2j+a_3k\in{\mathbb{H}}$ let $\pi_{{\mathbb{R}}}(q)=a_0$ and $\pi_{{\mathbb{P}}}(q)=a_1i+a_2j+a_3k$ be the real and imaginary parts of $q$, respectively. The conjugate of $q$ is given by $q^*=\pi_{{\mathbb{R}}}(q)-\pi_{{\mathbb{P}}}(q)$ and the norm is defined by $|q|^2=qq^*$. Two quaternions $q_1,q_2\in{\mathbb{H}}$ are called similar if there exists a unitary quaternion $s$ such that $s^{*}q_2 s=q_1$. Similarity is an equivalence relation and we denote by $[q]$ the equivalence class containing $q$.
A necessary and sufficient condition for the similarity of $q_1$ and $q_2$ is that $\pi_{{\mathbb{R}}}(q_1)=\pi_{{\mathbb{R}}}(q_2) \textrm{ and }|\pi_{{\mathbb{P}}}(q_1)|=|\pi_{{\mathbb{P}}}(q_2)|$ [@R theorem 2.2.6]. Let ${\mathbb{F}}$ denote ${\mathbb{R}}$, ${\mathbb{C}}$ or ${\mathbb{H}}$. Let ${\mathbb{F}}^n$ be the $n$-dimensional ${\mathbb{F}}$-space. For ${\boldsymbol{x}}\in{\mathbb{F}}^n$, ${\boldsymbol{x}}^*$ denotes the conjugate transpose of ${\boldsymbol{x}}$. The disk with center ${\boldsymbol{a}}\in{\mathbb{F}}^n$ and radius $r\geq 0$ is the set ${\mathbb{D}}_{{\mathbb{F}}^n}({\boldsymbol{a}},r)=\{{\boldsymbol{x}}\in{\mathbb{F}}^n:|{\boldsymbol{x}}-{\boldsymbol{a}}|\leq r\}$ and its boundary is the sphere ${\mathbb{S}}_{{\mathbb{F}}^n}({\boldsymbol{a}},r)$. In particular, if ${\boldsymbol{a}}={\boldsymbol{0}}$ and $r=1$, we simply write ${\mathbb{D}}_{{\mathbb{F}}^n}$ and ${\mathbb{S}}_{{\mathbb{F}}^n}$. The group of unitary quaternions is denoted by ${\mathbb{S}}_{{\mathbb{H}}}$. Notice that we are considering the singleton $\{{\boldsymbol{a}}\}$ to be the disk with center ${\boldsymbol{a}}$ and radius $r=0$. Any $x \in {\mathbb{H}}$ can be written as $x=\beta z$, with $\beta=|x|$ and $z \in {\mathbb{S}}_{{\mathbb{H}}}$[^2]. We introduce the following notation: $$\begin{aligned} &{\mathbb{R}}^{n,+} =\{(\beta_1,...,\beta_n)\in{\mathbb{R}}^n:\beta_i\geq 0, 1\leq i\leq n\}\\ &{\mathbb{S}}^+_{{\mathbb{R}}^n} ={\mathbb{S}}_{{\mathbb{R}}^n}\cap{\mathbb{R}}^{n,+}.\end{aligned}$$ Let ${\mathcal{M}}_{n} ({\mathbb{F}})$ be the set of all $n\times n$ matrices with entries over ${\mathbb{F}}$. Let $A\in {\mathcal{M}}_{n} ({\mathbb{H}})$. The set $$W(A)=\{{\boldsymbol{x}}^*A{\boldsymbol{x}}:{\boldsymbol{x}}\in {\mathbb{S}}_{{\mathbb{H}}^n}\}$$ is called the numerical range of $A$ in ${\mathbb{H}}$. As usual, the complex numerical range of a complex matrix is defined by $$W_{{\mathbb{C}}}(A)=\{{\boldsymbol{x}}^*A{\boldsymbol{x}}:{\boldsymbol{x}}\in {\mathbb{S}}_{{\mathbb{C}}^n}\}.$$ It is well known that if $q\in W(A)$ then $[q]\subseteq W(A)$ [@R page 38]. Therefore, it is enough to study the subset of complex elements in each similarity class. This set is known as $B(A)$, the Bild of $A$: $$B(A)=W(A)\cap{\mathbb{C}}.$$ Although the Bild may not be convex, the upper Bild $B^+(A)=W(A)\cap{\mathbb{C}}^+$ is always convex, see [@ST]. Taking into account that ${\mathbb{R}}$ can be seen as a real subspace of ${\mathbb{H}}$, what we denoted by $\pi_{{\mathbb{R}}}$ is, in fact, the projection of ${\mathbb{H}}$ over ${\mathbb{R}}$, $\pi_{{\mathbb{R}}}:{\mathbb{H}}\rightarrow {\mathbb{R}}$. The projection of $W(A)$ over ${\mathbb{R}}$ is $$\pi_{{\mathbb{R}}}(W(A))= \{\pi_{{\mathbb{R}}}(w): \; w\in W(A)\}.$$ In the next section we will define a relation between the circularity of the numerical range of $A$ and the lack of cycles of an associated undirected graph. To be more specific, given a matrix $A=[a_{ij}] \in {\mathcal{M}}_n ({\mathbb{H}})$ we may define the underlying undirected graph $\mathcal{G}_A$ with $n$ vertices as the graph with an edge between $i$ and $j$ whenever $a_{ij}\neq 0$ or $a_{ji}\neq 0$. That is, if $\delta:{\mathbb{H}}\to \{0,1\}$ is the indicator function, $\delta(q)=1$ if $q\neq 0$ and $\delta(q)=0$ otherwise, let $A_{\delta}$ be the symmetric matrix given by $$A_\delta=\Big[ \delta(\max\{a_{ij},a_{ji}\}) \Big]_{i,j=1}^n.$$ Then $A_{\delta}$ is precisely the adjacency matrix of the undirected graph $\mathcal{G}_A$.
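As a purely illustrative aside (not part of the original argument), the objects just defined can be probed numerically: the minimal Python sketch below samples random unit vectors in ${\mathbb{H}}^n$ and evaluates ${\boldsymbol{x}}^*A{\boldsymbol{x}}$, so that the pair $(\pi_{{\mathbb{R}}}(w),|\pi_{{\mathbb{P}}}(w)|)$ of each sample $w$ locates its similarity class in the upper Bild. Quaternions are represented here as real $4$-tuples, and the function names (`qmul`, `qconj`, `sample_numerical_range`) are our own assumptions.

```python
import numpy as np

# Quaternions are stored as real 4-vectors (a0, a1, a2, a3) = a0 + a1*i + a2*j + a3*k.

def qmul(p, q):
    """Quaternion product p*q."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def qconj(q):
    """Quaternion conjugate q*."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def sample_numerical_range(A, n_samples=20000, seed=0):
    """Monte Carlo sample of W(A) = {x* A x : x in the unit sphere of H^n}.

    A is an (n, n, 4) array of quaternionic entries; returns an (n_samples, 4)
    array of quaternions x*Ax.  For each sample w, the pair (w[0], norm(w[1:]))
    gives the point of the upper Bild corresponding to the class [w].
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    samples = np.empty((n_samples, 4))
    for s in range(n_samples):
        x = rng.normal(size=(n, 4))      # Gaussian components give a uniform
        x /= np.linalg.norm(x)           # direction on the unit sphere of H^n
        w = np.zeros(4)
        for i in range(n):
            for j in range(n):
                w += qmul(qconj(x[i]), qmul(A[i, j], x[j]))
        samples[s] = w
    return samples

# Example: the 2x2 nilpotent matrix whose only nonzero entry is a_{12} = 2j.
A = np.zeros((2, 2, 4))
A[0, 1] = [0.0, 0.0, 2.0, 0.0]
W = sample_numerical_range(A)
print(np.abs(W[:, 0]).max(), np.linalg.norm(W, axis=1).max())  # both stay below 1
```

For this particular matrix the sampled moduli satisfy $|{\boldsymbol{x}}^*A{\boldsymbol{x}}|=2|x_1||x_2|\leq 1$, in agreement with the disk ${\mathbb{D}}_{{\mathbb{H}}}(0,1)$ obtained for this block later in the paper.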
We say that the graph $\mathcal{G}_A$ has a path between the vertices $i, j \in \{1, \ldots, n\}$, if there is a sequence of vertices $(i_1, i_2, \ldots, i_p)$ such that: $$i_1=i, \qquad i_p=j, \qquad (A_\delta)_{i_k, i_{k+1}}=1, \text{ for } k=1,\ldots, p-1.$$ In terms of the elements $a_{km}$ of the matrix $A$ this condition is equivalent to $a_{i_ki_{k+1}}\neq 0$, for all $k\in\{1, \ldots, p-1\}$ and to $a_{i_1i_2}a_{i_2i_3}\ldots a_{i_{p-1} i_{p}} \neq 0 $. The graph $\mathcal{G}_A$ is connected if there is a path between any pair of vertices $i,j \in \{1, \ldots, n\}$, otherwise it is disconnected. We say that the matrix $A$ is connected (resp., disconnected) whenever $\mathcal{G}_A$ is connected (resp., disconnected) . The graph $\mathcal{G}_A$ has a cycle (or the matrix $A$ has a cycle) if there is a vertex $i \in \{1, \ldots, n\}$ and a path connecting $i$ to itself. Loops are seen as cycles. The graph $\mathcal{G}_A$ is cycle-free (or the matrix $A$ is cycle-free) if there are no cycles in $\mathcal{G}_A$. If the graph $\mathcal{G}_A$ is connected and cycle-free then it is a tree, and in this case the number of edges is $n-1$ [@Di corollary 1.5.3]. It follows that there exists one vertex with only one edge and, if we eliminate this vertex and its edge, we get a graph with $n-1$ vertices and $n-2$ edges, which is also a tree. If $A$ is a nilpotent matrix and the graph $\mathcal{G}_A$ is a tree, then there exists a permutation matrix $P$ such that $P^{\top}AP$ is upper triangular. More generally, two matrices $A, A'\in {\mathcal{M}}_n({\mathbb{H}})$ are unitarily equivalent if there exists a unitary $U\in {\mathcal{M}}_n({\mathbb{H}})$ such that $A'=U^*AU$, in which case we write $A'\sim A$. The relation $\sim$ is an equivalence relation. By Schur’s triangularization theorem [@R theorem 5.3.6], every matrix $A\in {\mathcal{M}}_n({\mathbb{H}})$ is unitarily equivalent to an upper triangular matrix whose diagonal is complex. By [@R theorem 3.5.4], the numerical range is invariant for the equivalence classes $[A]_{\sim}$, with $A\in {\mathcal{M}}_n({\mathbb{H}})$. Therefore, it is enough to consider upper triangular matrices $A\in {\mathcal{M}}_n({\mathbb{H}})$ with complex diagonal entries. For the rest of this article we will assume that $A\in {\mathcal{M}}_n({\mathbb{H}})$ is upper triangular. Circularity of the numerical range ================================== Our first result shows that the numerical range of a nilpotent matrix is either circular with center at the origin or it is not a disk. In other words, when circular, the disk must be centered at the origin. Given $A \in {\mathcal{M}}_n({\mathbb{H}})$ there exists an associated complex matrix $$\chi(A)=\left[ \begin{array}{cc} A_1 & A_2 \\ -\bar A_2 & \bar A_1 \end{array} \right]\in{\mathcal{M}}_{2n}({\mathbb{C}}),$$ where $A_1, A_2 \in {\mathcal{M}}_n({\mathbb{C}})$ and $A=A_1+A_2j$. \[center at origin\] Let $A\in{\mathcal{M}}_n({\mathbb{H}})$ be nilpotent. If $W(A)$ is a disk, then its center is at the origin. Since $\chi(AB)=\chi(A)\chi(B)$,[@Zh theorem 4.2], $A$ is nilpotent if and only if $\chi(A)$ is nilpotent. When $W(A)$ is circular, it is convex, and according to [@Ye1 theorem 2] and [@Ye2 p. 280], $W(A) \cap {\mathbb{C}}= W_{{\mathbb{C}}}\big(\chi(A)\big)$. Thus $W_{{\mathbb{C}}}\big(\chi(A)\big)$ is a disk in ${\mathbb{C}}$. According to [@MM proposition 1] a nilpotent complex matrix whose numerical range is a disk must have center at the origin. 
Thus $W(A) \cap {\mathbb{C}}=W_{{\mathbb{C}}}(\chi(A))= {\mathbb{D}}_{{\mathbb{C}}}(0, r)$, where $r$ is the radius. We conclude, rebuilding the numerical range by taking the equivalence classes of the elements of the Bild, that $W(A)$ is a disk centered at the origin, i.e. $W(A) = {\mathbb{D}}_{{\mathbb{H}}}(0, r)$. In theorem \[tree circular disk\] we will prove that if the graph associated with the nilpotent matrix $A \in {\mathcal{M}}_n({\mathbb{H}})$ has no cycles then the numerical range of $A$ is a disk. When the graph $\mathcal{G}_A$ is disconnected, we can partition the set of vertices into connected components, where each component has no edge to the other components. Then, in terms of the original matrix $A$, we can (through a reordering of the vertices if necessary, *i.e.* through $P^{\top}AP$ where $P$ is a permutation of $\{1,\ldots, n\}$) write $A$ as a block matrix. Now, each block of matrix $A$ is connected. In addition, if $A$ is cycle-free, then each block is cycle-free. The fact that the quaternionic numerical range is not always convex motivates the following definition. Let $\mathcal{A}_1, \dots, \mathcal{A}_n$ be subsets of ${\mathbb{H}}$. The inter-convex hull of the $\mathcal{A}_i$’s is the set $${{\rm iconv}}\{\mathcal{A}_1, \dots, \mathcal{A}_n\}= \Big\{\sum_i \alpha_i^2 a_i: \, {\boldsymbol{\alpha}}\in {\mathbb{S}}_{{\mathbb{R}}^n} ,\, a_i\in \mathcal{A}_i, \, i=1, \dots, n\Big\}.$$ We can easily prove that: \[prop\_iconv\] Let $A_1\oplus\ldots \oplus A_k\in {\mathcal{M}}_n({\mathbb{H}})$. Then $$W(A_1\oplus\ldots \oplus A_k)={{\rm iconv}}\Big\{W(A_1),\ldots, W(A_k) \Big\}.$$ In particular, if $W(A_i)$ is convex, for $i=1, \ldots, k$, then $$W(A_1\oplus\ldots \oplus A_k)={{\rm conv}}\Big\{W(A_1),\ldots, W(A_k) \Big\},$$ where ${{\rm conv}}$ denotes the convex hull. Then, to figure out the numerical range of a nilpotent matrix $A$ without cycles, we just need to consider the numerical range of each block $A_i$, a nilpotent matrix without cycles and connected (that is, a tree). Thus, to establish the relation between the circularity of the numerical range of $A$ and the existence of cycles, we will focus only on connected matrices. We start by an auxiliary and technical result. \[lemma\_induction\] Let $A=[a_{ij}]_{i,j=1}^n \in {\mathcal{M}}_n({\mathbb{H}})$ be a nilpotent and tree matrix and ${\boldsymbol{\beta}}=(\beta_1, \dots, \beta_n)\in {\mathbb{S}}^+_{{\mathbb{R}}^n}$. Then $$\bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}}\Bigg\{\sum_{i,j=1}^{n} \beta_{i}\beta_j z^*_{i}a_{ij} z_j\Bigg\}= \sum_{i,j}^{n} {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{i}\beta_j|a_{ij}|\Big).$$ The proof is done by induction in $n$. If $n=1$, then $A$ is the zero matrix and the result is obvious. Now assume that $$\bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n-1}}\Bigg\{\sum_{i,j=1}^{n-1} \beta_{i}\beta_j z^*_{i}a_{ij} z_j\Bigg\}= \sum_{i,j}^{n-1} {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{i}\beta_j|a_{ij}|\Big).$$ In the tree $\mathcal{G}_{A}$ associated with $A$, pick any vertex that has only one edge. If necessary change its label to $n$. In this case, since $\mathcal{G}_A$ is a tree, we have that $a_{ni}=0$, for any $i \in \{1, \ldots, n\}$ and, for some $p \in \{1, \ldots, n-1\}$, $a_{pn}\neq 0$, $a_{in}=0$ if $i\in \{1, \ldots, n-1\}\backslash \{p\}$. 
We then have $$\begin{aligned} \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}} \Big\{\sum_{i,j=1}^{n} \beta_{i}\beta_j z^*_{i}a_{ij} z_j\Big\}= & \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}} \bigg\{ \sum_{i,j=1}^{n-1} \beta_{i}\beta_j z^*_{i}a_{ij} z_j + \sum_{\max\{i,j\}=n} \beta_{i}\beta_j z^*_{i}a_{ij} z_j\bigg\}\\ =& \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n-1}} \bigcup_{z_n \in {\mathbb{S}}_{{\mathbb{H}}}} \bigg\{ \sum_{i,j=1}^{n-1} \beta_{i}\beta_j z^*_{i}a_{ij} z_j + \beta_{p}\beta_n z^*_{p}a_{pn} z_n\bigg\}\\ =& \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n-1}} \bigg\{ \sum_{i,j=1}^{n-1} \beta_{i}\beta_j z^*_{i}a_{ij} z_j + {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{p}\beta_n|a_{pn}|\Big)\bigg\}\\ =& \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n-1}} \bigg\{ \sum_{i,j=1}^{n-1} \beta_{i}\beta_j z^*_{i}a_{ij} z_j\bigg\} + {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{p}\beta_n|a_{pn}|\Big)\\ =& \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n-1}} \bigg\{ \sum_{i,j=1}^{n-1} \beta_{i}\beta_j z^*_{i}a_{ij} z_j \bigg\} + \sum_{i=1}^{n-1} {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{i}\beta_n|a_{in}|\Big)\\\end{aligned}$$ In the second and the last equality we used that the only non-zero $a_{in}$, for $i \in \{1, \ldots, n\}$, is $a_{pn}$ and that all $a_{ni}$ are zero. By the induction hypothesis, the last equality can be written as $$\begin{aligned} \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}} \Big\{\sum_{i,j=1}^{n} \beta_{i}\beta_j z^*_{i}a_{ij} z_j\Big\}= & \sum_{i,j}^{n-1} {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{i}\beta_j|a_{ij}|\Big) + \sum_{i=1}^{n-1} {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{i}\beta_n|a_{in}|\Big) \\ = & \sum_{i,j}^{n} {\mathbb{S}}_{{\mathbb{H}}}\Big(0, \beta_{i}\beta_j|a_{ij}|\Big),\end{aligned}$$ again using that $a_{nj}=0$ for $j\in \{1, \ldots, n \}$. Notice that the previous calculation was carried out under the assumption that the $(n-1)\times(n-1)$ matrix had no cycles. In fact, since the initial $n\times n$ matrix $A$ is a tree, as we mentioned before, if we eliminate a one edge vertex together with its edge, we end up with a new graph that is also a tree. \[tree circular disk\] Let $A\in {\mathcal{M}}_n({\mathbb{H}})$ be a nilpotent and tree matrix. Then, $W(A)$ is a circular disk with center at the origin and radius $\max_{ {\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \sum_{i,j}^{n} \beta_{i}\beta_j|a_{ij}|$. We have: $$W(A)=\bigcup_{{\boldsymbol{x}}\in{\mathbb{S}}_{{\mathbb{H}}^n}}\,\Bigg\{\sum_{i,j=1}^nx^*_ia_{ij}x_j\Bigg\}.$$ Each summand $x^*_ia_{ij}x_j$ may be written as $$\beta_i\beta_jz_i^*a_{ij}z_j$$ with $\beta_i,\beta_j\geq 0$ and $z_i,z_j\in{\mathbb{S}}_{{\mathbb{H}}}$. 
It follows that $$W(A)=\bigcup_{{\boldsymbol{\beta}}\in{\mathbb{S}}^+_{{\mathbb{R}}^n}}\bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}}\Bigg\{\sum_{i,j=1}^n \beta_i\beta_jz^*_ia_{ij}z_j\Bigg\}.$$ From lemma \[lemma\_induction\], we have $$\label{W(A) sum spheres} W(A)=\bigcup_{{\boldsymbol{\beta}}\in{\mathbb{S}}^+_{{\mathbb{R}}^n}}\,\Bigg\{\sum_{i,j=1}^n \,{\mathbb{S}}_{{\mathbb{H}}}(0,\beta_i\beta_j|a_{ij}|)\Bigg\}.$$ Therefore, it remains to prove that $$\bigcup_{{\boldsymbol{\beta}}\in{\mathbb{S}}^+_{{\mathbb{R}}^n}}\,\Bigg\{\sum_{i,j=1}^n \,{\mathbb{S}}_{{\mathbb{H}}}(0,\beta_i\beta_j|a_{ij}|)\Bigg\}={\mathbb{D}}_{{\mathbb{H}}}\Bigg(0,\max_{ {\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \sum_{i,j}^{n} \beta_{i}\beta_j|a_{ij}|\Bigg).$$ This is achieved by proving a double inclusion. If $y=\sum_{i,j=1}^{n} \beta_{i}\beta_jr_{ij}$ for a given ${\boldsymbol{\beta}}\in{\mathbb{S}}^+_{{\mathbb{R}}^n}$, where $r_{ij}\in{\mathbb{S}}_{{\mathbb{H}}}(0,|a_{ij}|)$, then $$|y|\leq\sum_{i,j=1}^n\beta_i\beta_j|a_{ij}|\leq\max_{ {\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \sum_{i,j}^{n} \beta_{i}\beta_j|a_{ij}|.$$ For the converse inclusion, first observe that the function $f:{\mathbb{S}}^+_{{\mathbb{R}}^n}\to{\mathbb{R}}$ defined by $$f({\boldsymbol{\beta}})=\sum_{i,j=1}^{n} \beta_{i}\beta_j|a_{ij}|$$ is a continuous function on a compact set, so it has a maximum at, say, ${\boldsymbol{\beta}}^*\in{\mathbb{S}}^+_{{\mathbb{R}}^n}$. Since $f$ is continuous, $f(1,0,\ldots,0)=0$ and ${\mathbb{S}}^+_{{\mathbb{R}}^n}$ is connected, for every $r_0\in[0,f({\boldsymbol{\beta}}^*)]$ there exists ${\boldsymbol{\beta}}_0\in{\mathbb{S}}^+_{{\mathbb{R}}^n}$ such that $f({\boldsymbol{\beta}}_0)=r_0$, that is, for any path connecting $(1,0,\ldots,0)$ to ${\boldsymbol{\beta}}^*$, the values of $f$ run surjectively over the interval $[0,f({\boldsymbol{\beta}}^*)]$. Take $y\in{\mathbb{D}}_{{\mathbb{H}}}(0,f({\boldsymbol{\beta}}^*))$. Then, $|y|\leq f({\boldsymbol{\beta}}^*)$ and $|y|=f({\boldsymbol{\beta}})$, for some ${\boldsymbol{\beta}}\in{\mathbb{S}}^+_{{\mathbb{R}}^n}$. If $y=0$, then we can take ${\boldsymbol{\beta}}=(1,0,\ldots,0)$ and the inclusion follows. Now, suppose $y\neq 0$. Let $y_{ij}=\beta_i\beta_j|a_{ij}|\dfrac{y}{|y|}$. It follows that $$\sum_{i,j=1}^ny_{ij}=f({\boldsymbol{\beta}})\frac{y}{|y|}=y$$ and $y_{ij}\in{\mathbb{S}}_{{\mathbb{H}}}(0,\beta_i\beta_j|a_{ij}|)$. Therefore, $$y\in\sum_{i,j=1}^n {\mathbb{S}}_{{\mathbb{H}}}(0,\beta_i\beta_j|a_{ij}|).$$ This result can be applied to more general matrices as shown in the next example. \[example2\] Let $A=\left[ \begin{array}{ccc} 0& 2j&0 \\ 0& 0&0\\ 0& 0&1 \end{array} \right]$ and write $A$ as a direct sum $$A=A_1\oplus A_2=\left[ \begin{array}{cc}0& 2j\\0& 0\end{array}\right]\oplus\left[ \begin{array}{c} 1\end{array}\right].$$ By theorem \[tree circular disk\] we have $$W(A_1)={\mathbb{D}}_{{\mathbb{H}}}(0, 1) \,\,\,\textrm{and}\,\,\,W(A_2)=\{1\}.$$ From proposition \[prop\_iconv\] it follows that $$W(A)=\mathrm{conv}\,\{W(A_1),W(A_2)\}={\mathbb{D}}_{{\mathbb{H}}}(0, 1).$$ The previous result, theorem \[tree circular disk\], can be extended to disconnected matrices. \[cor\_nil\_cycle-free\] Let $A\in{\mathcal{M}}_n({\mathbb{H}})$ be a nilpotent and cycle-free matrix. Then, $W(A)$ is a disk with center at the origin. There exist a permutation matrix $P$ such that $P^{\top}AP=A_1\oplus\ldots\oplus A_k$, where each $A_i$ is a tree, square matrix. 
By proposition \[prop\_iconv\] we have $$\begin{aligned} W(A) & = W(P^{\top}AP)=W(A_1\oplus...\oplus A_k) \\ & =\mathrm{iconv}\,\{W(A_1),...,W(A_k)\}\\ & =\mathrm{conv}\,\{W(A_1),...,W(A_k)\}.\end{aligned}$$ The result follows from the convexity of $W(A_i)$, see theorem \[tree circular disk\]. We will now give an example showing that the implication of the previous result cannot be strengthened to an equivalence. We will provide a nilpotent (real) matrix $A$ with cycles that has a circular numerical range. The existence of such a matrix is supported by two results. Firstly, in theorem 3.7 of [@CDM] it was shown that the quaternionic numerical range of a real matrix $A$ is the union of the equivalence classes of the elements of the complex numerical range of the matrix $A$, that is $$W(A)=\Big[ W_{{\mathbb{C}}}(A) \Big], \text{ for } A \in M_n({\mathbb{R}}).$$ Secondly, theorem $1$ of [@CT] provides necessary and sufficient conditions for a $4\times4$ nilpotent complex matrix to have a circular complex numerical range. \[ex\_4x4realmatrix\] Let $A \in M_4({\mathbb{R}})$ be $$A=\begin{bmatrix} 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0& 1\\ 0 & 0 & 0 & 0 \end{bmatrix}.$$ This matrix satisfies both conditions of [@CT] for the numerical range to be circular, therefore the complex numerical range of $A$ is $W_{{\mathbb{C}}}(A)={\mathbb{D}}_{{\mathbb{C}}}(0, r)$. Since $A$ is real, by [@CDM], $W(A)=\Big[ {\mathbb{D}}_{{\mathbb{C}}}(0, r)\Big]={\mathbb{D}}_{{\mathbb{H}}}(0, r)$. A class of matrices with convex numerical range =============================================== So far we have dealt with nilpotent matrices. In this section we will enlarge our domain and consider matrices that have real entries in the diagonal. It is known that when $A=\alpha I + N\in {\mathcal{M}}_{n}({\mathbb{H}})$, for $\alpha \in {\mathbb{R}}$, the numerical range is $W(\alpha I+N)=\alpha+W(N)$, see [@R proposition 3.5.4]. Therefore, if the numerical range of $N$ is convex, so is the numerical range of $A$. We extend this convexity result to the case where $A$ can be written as the sum of a real diagonal matrix $D$ and a nilpotent cycle-free matrix $N$, i.e. $A=D+N$. Let $D={{diag}}(d_1, \ldots, d_n) \in {\mathcal{M}}_n({\mathbb{R}})$ and define $$\underline{d}=\min \{d_i: 1\leq i\leq n\}\,\,\,\textrm{ and }\,\,\, \overline{d}=\max \{d_i: 1\leq i\leq n\}.$$ We will find out in theorem \[W\_is\_smaller\] that the numerical range of $A$ can be decomposed into a union of disks ${\mathbb{D}}_{{\mathbb{H}}}(d, r(d))$, one for each $d \in [\underline{d},\overline{d}]$, that is, $$W(A)= \bigcup_{d \in \big[\underline{d}, \overline{d}\big]}{\mathbb{D}}_{{\mathbb{H}}}(d, r(d)).$$ To prove the above decomposition, we will need to show that the radius of each disk $r(d)$ varies continuously with the center $d$. \[disk\_continuity\] Let $A\in \mathcal{M}_{n}({\mathbb{H}})$ be a matrix with real entries in the diagonal. Let $$\begin{aligned} f: &\; {\mathbb{R}}^{n,+} \longrightarrow {\mathbb{R}}^+ &g :{\mathbb{R}}^{n,+} \longrightarrow \big[\underline{d}, \overline{d}\big] \,\,\,\,\,\,\\ &\,\,\,\,\,{\boldsymbol{\beta}} \mapsto \sum_{i \neq j} \beta_i\beta_j |a_{ij}| & {\boldsymbol{\beta}} \mapsto \sum_{i} d_{i}\beta_i^2.\end{aligned}$$ Then, the function $r: \big[\underline{d}, \overline{d}\big] \to {\mathbb{R}}^+$ defined by $$\label{raio_d} r(d)= \max \{f({\boldsymbol{\beta}}): g({\boldsymbol{\beta}})=d \text{ and } {\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}\}$$ is continuous.
Define a correspondence $\Gamma:\big[\underline{d}, \overline{d}\big] \rightrightarrows {\mathbb{S}}^+_{{\mathbb{R}}^n}$ to be the intersection of fibers $$\Gamma(d)= g^{-1}(d) \cap h^{-1}(0),$$ where $h:{\mathbb{R}}^{n,+}\rightarrow {\mathbb{R}}$ is $h({\boldsymbol{\beta}})= \|{\boldsymbol{\beta}}\|-1$. We may rewrite function $r$ using the correspondence $\Gamma$ as follows: $$r(d)= \max \{f({\boldsymbol{\beta}}): {\boldsymbol{\beta}} \in \Gamma(d)\}.$$ According to Berge’s maximum theorem, see [@Be p.116], $r$ is continuous provided that $f$ and $\Gamma$ are continuous and, for each $d \in \big[\underline{d}, \overline{d}\big]$, $\Gamma(d) \neq \emptyset$. Clearly, $f$ is continuous, and since for each $d \in \big[\underline{d}, \overline{d}\big]$, there is a convex linear combination of $\underline{d}$ and $ \overline{d}$ equal to $d$, $\Gamma(d)$ is nonempty. We will now prove that $\Gamma$ is continuous by showing that it is sequentially upper and lower semi-continuous. To prove upper semi-continuity, take any convergent sequence $\{\delta_k\}_k$ such that $\delta_k \to d$. We now prove that every sequence $ {\boldsymbol{\beta}}_k \in \Gamma(\delta_k)$ has a convergent subsequence $\big\{{\boldsymbol{\beta}}_{k_p}\big\}_p$ where ${\boldsymbol{\beta}}_{k_p}\rightarrow {\boldsymbol{\beta}} \in \Gamma(d)$. In fact, since ${\boldsymbol{\beta}}_k $ is a sequence in the compact set $ {\mathbb{S}}^+_{{\mathbb{R}}^n}$, it has a subsequence that converges to some ${\boldsymbol{\beta}}$. And since $h$ and $g$ are continuous it follows that $\delta_{k_p}=g({\boldsymbol{\beta}}_{k_p}) \rightarrow g({\boldsymbol{\beta}})=d$ and $0=h({\boldsymbol{\beta}}_{k_p}) \rightarrow h({\boldsymbol{\beta}})$. Thus ${\boldsymbol{\beta}} \in g^{-1}(d) \cap h^{-1}(0)= \Gamma(d)$. A correspondence is lower semi-continuous if, for any convergent sequence $\{\delta_k\}_k \subseteq \big[\underline{d}, \overline{d}\big]$, such that $\delta_k \to d$, and any ${\boldsymbol{\beta}} \in \Gamma(d)$, there is a convergent sequence $\{{\boldsymbol{\beta}}_k\}_k$, such that ${\boldsymbol{\beta}}_k \in \Gamma(\delta_k)$ and ${\boldsymbol{\beta}}_k \rightarrow {\boldsymbol{\beta}}$. If $D=\alpha I$, for some $\alpha \in{\mathbb{R}}$, then $\underline{d}=\overline{d}$ and there is nothing to prove since the correspondence’s domain is a singleton and the correspondence is trivially lower semi-continuous. When $\underline{d}<\overline{d}$ we need to find for each $\delta_k$ a vector ${\boldsymbol{\beta}}_k$ satisfying ${\boldsymbol{\beta}}_k \in \Gamma(\delta_k)$, and, the whole sequence $\big\{{\boldsymbol{\beta}}_k\big\}_k$, must be such that ${\boldsymbol{\beta}}_k \rightarrow {\boldsymbol{\beta}}$. When $\delta_k=d$ we choose ${\boldsymbol{\beta}}_k={\boldsymbol{\beta}}$. When $d<\delta_k$, to find ${\boldsymbol{\beta}}_k$ we proceed in the following manner. The vector ${\boldsymbol{\beta}}$ is such that $g({\boldsymbol{\beta}})=d$, and with it define the sets:$$\begin{aligned} &P({\boldsymbol{\beta}})=\big\{j \in \{1,\ldots,n\}: \beta_j^2 > 0\big\},\\ &D({\boldsymbol{\beta}})=\big\{j \in \{1,\ldots,n\}: d_j >d\big\},\\ &d({\boldsymbol{\beta}})=\big\{j \in \{1,\ldots,n\}: d_j \leq d \big\}.\end{aligned}$$ Since $d<\delta_k\leq \overline{d}$, $D({\boldsymbol{\beta}}) \neq \emptyset$. 
On the other hand, we also have that $P({\boldsymbol{\beta}}) \cap d({\boldsymbol{\beta}}) \neq \emptyset$, because $g({\boldsymbol{\beta}})=d$ is a weighted average of the $d_i$ over the indices $i \in P({\boldsymbol{\beta}})$, so they cannot all be strictly larger than $d$. We will choose one element from each of these sets, without loss of generality, the element $1$ from $D({\boldsymbol{\beta}})$ and the element $2$ from $P({\boldsymbol{\beta}}) \cap d({\boldsymbol{\beta}})$. Clearly $d_1>d_2$. Let $0<r^2=\beta_1^2+\beta_2^2\leq 1$ and let the function $\tilde g:[0, 2\pi] \to {\mathbb{R}}$ be $$\tilde g(\theta)= \sum_{i \geq 3} d_i\beta_i^2+ r^2\sin^2(\theta)d_1 + r^2 \cos^2(\theta)d_2.$$ When $\theta_0=\arcsin \Big(\frac{\beta_1}{\sqrt{\beta_1^2+\beta_2^2}}\Big)$ then $\beta_1^2=r^2\sin^2(\theta_0)$ and $\beta_2^2=r^2\cos^2(\theta_0)$, and $\tilde g(\theta_0)= g({\boldsymbol{\beta}})$. We have, for $d_1>d_2$, $$\begin{aligned} &\tilde g\big(0\big)= \sum_{i \geq 3} d_i\beta_i^2+ r^2d_2 =d+\beta_1^2(d_2-d_1)\equiv \tilde d\leq d, \,\,\,(\beta_1^2\geq 0),\\ &\tilde g\Big(\frac{\pi}{2}\Big)= \sum_{i \geq 3} d_i\beta_i^2+ r^2d_1 =d+\beta_2^2(d_1-d_2)\equiv \hat{d}>d, \,\,\,(\beta_2^2>0).\end{aligned}$$ The function $\tilde g$ is continuous and increasing in the interval $(0, \frac{\pi}{2})$, since $ \tilde g'(\theta)=2\sin(\theta) \cos(\theta)r^2(d_1-d_2)>0. $ Therefore $\tilde g$ is a homeomorphism (in fact a diffeomorphism) between $[0, \frac{\pi}{2}]$ and the interval $[\tilde d, \hat{d}]$. In order to find a vector ${\boldsymbol{\beta}}_k$ such that $g({\boldsymbol{\beta_k}})=\delta_k$, $k$ must be sufficiently large for $\delta_k \in [\tilde d, \hat{d}]$. Since, on the one hand, $\delta_k \to d$ and, in this case, $d < \delta_k$, and, on the other hand, $\tilde d \leq d <\hat{d}$, there is a $K \in \mathbb{N}$ such that $\delta_k \in [\tilde d, \hat{d}]$, for any $k >K$. Then, for $k\leq K$, we can take any ${\boldsymbol{\beta}}_k \in \Gamma(\delta_k)$. For $k>K$, we start by finding $\theta_k$ such that $\tilde g(\theta_k)=\delta_k$. Now, let $\beta_{1,k}=r\sin(\theta_k)$, $\beta_{2,k}=r\cos(\theta_k)$ and $\beta_{i,k}=\beta_{i}$, for $i \in \{3, \ldots, n\}$, that is, $\beta_{1,k}=r\sin(\widetilde{g}^{-1}(\delta_k))$, $\beta_{2,k}=r\cos(\widetilde{g}^{-1}(\delta_k))$ and the remaining terms constant. We can easily verify that $g({\boldsymbol{\beta_k}})=\widetilde{g}(\theta_k)=\delta_k$. When $\{\delta_k\}_k$ converges to $d$, clearly $\{\theta_k\}_k$ converges to $\theta_0$ since $\tilde g^{-1}$ is continuous. Therefore $\{{\boldsymbol{\beta}}_k\}_k$ converges to ${\boldsymbol{\beta}}$, as trigonometric functions are continuous. When $\underline{d} \leq \delta_k<d$ we proceed in a similar way, and conclude that if the set $\{\delta_k:\delta_k<d\}$ is infinite then for each element of this subsequence there is a ${\boldsymbol{\beta_k}} \in \Gamma(\delta_k)$ and these ${\boldsymbol{\beta_k}}$ converge to ${\boldsymbol{\beta}}$. In conclusion, for any convergent sequence $\{\delta_k\}_k$ we can find a sequence ${\boldsymbol{\beta}}_k\in {\mathbb{S}}^+_{{\mathbb{R}}^n}$ with ${\boldsymbol{\beta}}_k \in \Gamma(\delta_k)$ and ${\boldsymbol{\beta}}_k \rightarrow {\boldsymbol{\beta}}$, so that $g({\boldsymbol{\beta}}_k)=\delta_k \rightarrow d=g({\boldsymbol{\beta}})$. We are now able to show that $W(A)$ decomposes as a union of disks. \[W\_is\_smaller\] Let $A=D+N \in \mathcal{M}_{n}({\mathbb{H}})$, with $D$ a diagonal matrix of real entries and $N$ a nilpotent and tree matrix.
Then, we have $$W(A)= \bigcup_{d \in \big[\underline{d}, \overline{d}\big]} {\mathbb{D}}_{{\mathbb{H}}}(d, r(d)),$$ where $r(d)$ is given by (\[raio\_d\]). We begin by proving that $W(A) \subseteq \bigcup_{d \in \big[\underline{d}, \overline{d}\big]} \mathbb{D}_{{\mathbb{H}}}(d, r(d))$. We have: $$\begin{aligned} W(A)=& \bigcup_{{\boldsymbol{x}} \in {\mathbb{S}}_{{\mathbb{H}}^n}} {\boldsymbol{x}}^*A{\boldsymbol{x}}= \bigcup_{{\boldsymbol{x}} \in {\mathbb{S}}_{{\mathbb{H}}^n}} \Big\{ \sum_i d_{i} |x_i|^2 + \sum_{i < j} x^*_ia_{ij}x_j \Big\} \nonumber\\ = & \bigcup_{{\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}} \Big\{ \sum_i d_{i} \beta_i^2 + \sum_{i < j} \beta_i\beta_j z_i^* a_{ij}z_j \Big\}\nonumber\\ =& \bigcup_{{\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \Big\{ \sum_i d_{i} \beta_i^2 + \bigcup_{\substack{z_k \in {\mathbb{S}}_{{\mathbb{H}}}\\1\leq k\leq n}} \sum_{i < j} \beta_i\beta_j z_i^* a_{ij}z_j \Big\} \nonumber\\ =& \bigcup_{{\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \Big\{ \sum_i d_{i} \beta_i^2 + \sum_{i < j} \beta_i\beta_j {\mathbb{S}}_{{\mathbb{H}}}(0, |a_{ij}|) \Big\} \label{1stequality}\\ \subseteq & \bigcup_{{\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}} \Big\{ \sum_i d_{i} \beta_i^2 + \sum_{i < j} \beta_i\beta_j {\mathbb{D}}_{{\mathbb{H}}}(0, |a_{ij}|) \Big\} \nonumber\\ = & \bigcup_{d \in \big[\underline{d}, \overline{d}\big]} d + {\mathbb{D}}_{{\mathbb{H}}}(0, r(d)) \label{2ndequality}\\ = & \bigcup_{d \in \big[\underline{d}, \overline{d}\big]} {\mathbb{D}}_{{\mathbb{H}}}(d, r(d)).\nonumber\end{aligned}$$ Equality (\[1stequality\]) follows from lemma \[lemma\_induction\] applied to matrix $N$. Equality (\[2ndequality\]) follows from dividing the set ${\mathbb{S}}^+_{{\mathbb{R}}^n}$ into the fibers of the function $g({\boldsymbol{\beta}}) =\sum_i d_i \beta_i^2 $, i.e. $${\mathbb{S}}^+_{{\mathbb{R}}^n}=\bigcup_d \Gamma(d).$$ For each fiber $g^{-1}(d) \cap {\mathbb{S}}^+_{{\mathbb{R}}^n}=g^{-1}(d) \cap h^{-1}(0)=\Gamma (d)$, with $\underline{d}\leq d \leq \overline{d}$, $h$ and $\Gamma(d)$ defined as in lemma \[disk\_continuity\], we have $$\begin{aligned} \bigcup_{{\boldsymbol{\beta}} \in \Gamma(d) } \Big\{ \sum_i d_{i} \beta_i^2 + \sum_{i < j} {\mathbb{D}}_{{\mathbb{H}}}(0, \beta_i\beta_j|a_{ij}|) \Big\} & = \bigcup_{{\boldsymbol{\beta}} \in \Gamma(d) } \bigg\{d + \sum_{i < j} {\mathbb{D}}_{{\mathbb{H}}}(0, \beta_i\beta_j|a_{ij}|)\bigg\}\\ & = d + \bigcup_{{\boldsymbol{\beta}} \in \Gamma(d) } \sum_{i < j} {\mathbb{D}}_{{\mathbb{H}}}(0, \beta_i\beta_j|a_{ij}|)\\ & = d + \bigcup_{{\boldsymbol{\beta}} \in \Gamma(d) } {\mathbb{D}}_{{\mathbb{H}}}\Big(0, \sum_{i < j} \beta_i\beta_j|a_{ij}|\Big)\\ &=d + {\mathbb{D}}_{{\mathbb{H}}}\Bigg(0, \max_{{\boldsymbol{\beta}}\in\Gamma(d)} \sum_{i < j}\beta_i\beta_j|a_{ij}|\Bigg) \\ & = {\mathbb{D}}_{{\mathbb{H}}}(d,r(d)).\end{aligned}$$ Now, to prove the converse inclusion we need to consider three different cases. **Case 1:** $y\in{\mathbb{S}}_{{\mathbb{H}}}(d, r(d))$. 
We defined in (\[raio\_d\]) $$r(d)=\max\{f({\boldsymbol{\beta}}): \sum_i\beta_i^2d_i=d, {\boldsymbol{\beta}}\in{\mathbb{S}}^+_{{\mathbb{R}}^n}\}$$ and, in (\[1stequality\]), we saw that $$W(A)=\bigcup_{{\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}}\Big\{ \sum_i \beta_i^2 d_i + \sum_{i<j} \beta_i\beta_j {\mathbb{S}}_{{\mathbb{H}}}(0, |a_{ij}|) \Big\}.$$ Choose ${\boldsymbol{\beta}}^* \in g^{-1}(d)\cap {\mathbb{S}}^+_{{\mathbb{R}}^n}$ such that $r(d)=f({\boldsymbol{\beta}}^*)$. To conclude that $y \in W(A)$ we will find $y_{ij} \in \beta^*_i\beta^*_j {\mathbb{S}}_{{\mathbb{H}}}(0, |a_{ij}|)$ such that $y-d=\sum_{i<j} y_{ij}$. This is precisely what we did at the end of theorem \[tree circular disk\], taking this time $y_{ij}=\beta^*_i\beta^*_j |a_{ij}| \dfrac{y-d}{|y-d|}$. We then follow the same reasoning as there, noting that $f({\boldsymbol{\beta}}^*)=r(d)=|y-d|$. **Case 2.1:** $y\in{\mathbb{D}}_{{\mathbb{H}}}(d, r(d))\backslash {\mathbb{S}}_{{\mathbb{H}}}(d, r(d))$, with $d=\underline{d}$ or $d=\overline{d}$. Suppose $y \in {\mathbb{D}}_{{\mathbb{H}}}(\underline{d}, r(\underline{d}))$ and $|y-\underline{d}|<r(\underline{d})$. Notice that if ${\boldsymbol{\beta}}$ is such that $\sum \beta^2_i d_i=\underline{d}$, then $\beta^2_k> 0$ only for those $k$’s for which $d_k=\underline{d}$. Assume, without loss of generality, that those $k$’s are the first $p$ elements in the diagonal of $A$, that is, $$\{k \in \{1,\ldots,n\}: d_k=\underline{d}\}=\{1,\ldots,p\}.$$ Define $$\mathcal{A}=\{{\boldsymbol{\beta}} \in {\mathbb{S}}^+_{{\mathbb{R}}^n}: \sum \beta^2_i d_i=\underline{d}\}={\mathbb{S}}^+_{{\mathbb{R}}^p}\times \{0\}^{n-p}.$$ Then, $\mathcal{A}$ is a connected set and we can take a path from an element where $f$ vanishes (for example a canonical basis vector ${\boldsymbol{\beta}}$) to an element where $f$ is maximum and equal to $r(\underline{d})$, as we did in theorem \[tree circular disk\]. Since $f$ is continuous, the intermediate value theorem ensures that, along such a path, all values between $0$ and $r(\underline{d})$ are attained by some element of the path. We can conclude that there exists an element ${\boldsymbol{\gamma}} \in \mathcal{A}$ such that $f({\boldsymbol{\gamma}})=|y-\underline{d}|$ and that $y \in {\mathbb{S}}_{{\mathbb{H}}}(\underline{d}, f({\boldsymbol{\gamma}}))$. By (\[1stequality\]), $y\in W(A)$. For $d=\overline{d}$ the procedure is analogous. **Case 2.2:** $y\in {\mathbb{D}}_{{\mathbb{H}}}(d, r(d))\backslash {\mathbb{S}}_{{\mathbb{H}}}(d, r(d))$, with $\underline{d}<d<\overline{d}$. Assume that $y \not \in{\mathbb{D}}_{{\mathbb{H}}}(\underline{d}, r(\underline{d}))$; otherwise one of the previous cases applies. Then $|y-\underline{d}|>r(\underline{d})$. Let $\rho$ be the function defined over the interval $[\underline{d}, \overline{d}]$ by $\rho(t)= |y-t|-r(t)$. The function $\rho$ is continuous since the norm is continuous and, by lemma \[disk\_continuity\], $r$ is continuous. Since $\rho(\underline{d})>0$ and $\rho(d)<0$, continuity of $\rho$ implies the existence of an element $\widetilde{d}$ such that $|y-\widetilde{d}|=r(\widetilde{d})$, and so $y \in {\mathbb{S}}_{{\mathbb{H}}}(\widetilde{d}, r(\widetilde{d}))$. This concludes the proof since, by (\[1stequality\]), $y \in W(A)$. The next result identifies a class of upper triangular matrices that has convex numerical range. \[prop\_diag+shiftlike\_convex\] Let $A=D+N\in{\mathcal{M}}_n({\mathbb{H}})$, with $D$ a diagonal matrix with real entries and $N$ a nilpotent and cycle-free matrix.
Then, $W(A)$ is convex. We start by assuming that $N$ is a tree. In this case, from theorem \[W\_is\_smaller\], for any $w \in W(A) = \bigcup_{d \in \big[\underline{d}, \overline{d}\big]} {\mathbb{D}}_{{\mathbb{H}}}(d, r(d))$, there is a $d \in \big[\underline{d}, \overline{d}\big]$ such that $w \in {\mathbb{D}}_{{\mathbb{H}}}(d, r(d))$, which implies $|\pi_{{\mathbb{R}}}(w)-d| \leq r(d)$. Thus $ d-r(d) \leq \pi_{{\mathbb{R}}}(w) \leq d+r(d)$ and we end up concluding that $$\pi_{{\mathbb{R}}}(W(A)) \subset W(A) \cap {\mathbb{R}}= \Big[\min_{d \in [\underline{d}, \overline{d}]} \big( d-r(d)\big), \max_{d \in [\underline{d}, \overline{d}]} \big( d+r(d)\big)\Big]$$ and since $\pi_{{\mathbb{R}}}(W(A)) \supset W(A) \cap {\mathbb{R}}$, we conclude that $\pi_{{\mathbb{R}}}(W(A)) = W(A) \cap {\mathbb{R}}$. By [@Ye1 theorem 3] the numerical range is convex. If $N$ is not a tree, then there exists a permutation matrix $P$ such that $P^{\top}NP=N_1\oplus\ldots\oplus N_k$, where each $N_i$ is a square tree matrix. Since $P^{\top}DP$ is still a real diagonal matrix, we have $$\begin{aligned} W(A) & = W(D+N)\nonumber \\ & = W(P^{\top}(D+N)P)=W((D_1+N_1)\oplus...\oplus (D_k+N_k))\nonumber \\ & =\mathrm{iconv}\,\{W(D_1+N_1),...,W(D_k+N_k)\}\nonumber \\ & =\mathrm{conv}\,\{W(D_1+N_1),...,W(D_k+N_k)\}.\label{convD+N}\end{aligned}$$ The result follows from proposition \[prop\_iconv\] and the first part of this corollary. Let $A=\begin{pmatrix} d_1 & a_{12} & a_{13}\\ 0 & d_2 & 0 \\ 0 & 0 & d_2 \end{pmatrix}$, where $d_1, d_2\in {\mathbb{R}}$, $d_1\neq d_2$, and $a_{12}, a_{13}\in {\mathbb{H}}$. Since $A$ can be written as $A=d_1 I + (d_2-d_1)\tilde{A}$, with $ \tilde{A}=\begin{pmatrix} 0 & q_{12} & q_{13}\\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}, $ we have $W(A)=d_1 + (d_2-d_1) W(\tilde{A})$, which means that it is enough to study $W(\tilde{A})$. By theorem \[W\_is\_smaller\], we have that $W(\tilde{A})=\bigcup_{d \in [0, 1]} {\mathbb{D}}_{{\mathbb{H}}}(d, r(d))$, with $r(d)=\sqrt{ k d(1-d)}$ and $k={|q_{12}|^2+|q_{13}|^2}$. Since $W(\tilde{A})\cap {\mathbb{C}}= \bigcup_{d \in [0, 1]} {\mathbb{D}}_{{\mathbb{C}}}(d, r(d))$, we will prove that this union of disks is an ellipse: $$\bigcup_{d \in [0, 1]} {\mathbb{D}}_{{\mathbb{C}}}(d, \sqrt{k d(1-d)})=\Big\{(x,y)\in {\mathbb{R}}^2:\frac{(x-\frac{1}{2})^2}{\frac{k+1}{4}}+\frac{y^2}{\frac{k}{4}}\leq 1\Big\}\equiv{\mathcal{E}}.$$ To show that ${\mathcal{E}} \subseteq \bigcup_{d \in [0, 1]} {\mathbb{D}}_{{\mathbb{C}}}(d,\sqrt{k d(1-d)})$, notice that for $(a,b)\in{\mathcal{E}}$, $(a,b)\in {\mathbb{D}}_{{\mathbb{C}}}(d_0,\sqrt{kd_0(1-d_0)})$ with $d_0=\tfrac{2a+k}{2(k+1)} \in [0,1]$. Conversely, let $(a,b)\in {\mathbb{D}}_{{\mathbb{C}}}(d, \sqrt{kd(1-d)})$. We now want to show that $\frac{(a-\frac{1}{2})^2}{\frac{k+1}{4}}+\frac{b^2}{\frac{k}{4}}\leq 1$, i.e., $\frac{k}{k+1}(a-\frac{1}{2})^2+{b^2}-{\frac{k}{4}}\leq 0$. Since $b^2=kd(1-d)-(a-d)^2$, we have that $$\frac{k}{k+1}\Big(a-\frac{1}{2}\Big)^2+{b^2}-{\frac{k}{4}}=-\frac{1}{k+1}((k+1)d-a)^2+\frac{k}{k+1}((k+1)d-a)-\frac{k^2}{4(k+1)}.$$ This is a second degree polynomial in $(k+1)d-a$, with downward concavity, and it is always non-positive, so we conclude that $(a,b)\in \mathcal{E}$. Convexity and circularity of $3\times 3$ nilpotent matrices =========================================================== Our main goal in this section is to establish necessary and sufficient conditions for quaternionic $3\times 3$ nilpotent matrices to have circular or convex numerical range.
We start by finding out that this condition for circularity is related to the product of all non-zero elements of the matrix. In particular, it relates the numerical range’s circularity with the product $a^*_{13}a_{12}a_{23}$ vanishing or not. Theorem \[NSC 3x3 convex\] gives a condition for the convexity of the numerical range in terms of the values assumed by exactly the same product. Consequently, the numerical range is convex if and only if $a^*_{13}a_{12}a_{23} \in {\mathbb{R}}$. \[NR\_3x3\_disk\] Let $A\in {\mathcal{M}}_3({\mathbb{H}})$ be a nilpotent matrix. Then, $W(A)$ is a disk with center at the origin, if and only if, $A$ is cycle-free. Sufficiency was proved in corollary \[cor\_nil\_cycle-free\]. For necessity we will show that if $A$ has a cycle then $W(A)$ is not a disk. We just need to observe that there are some elements $q \in {\mathbb{S}}_{\mathbb{H}}$ for which $aq \in W(A)$, with $$a=\max \{ \beta_1\beta_2|a_{12}|+\beta_1\beta_3|a_{13}|+\beta_2\beta_3|a_{23}|\},$$ and some others $\tilde q\in {\mathbb{S}}_{\mathbb{H}}$ for which $a\widetilde{q}\notin W(A)$. If $w\in W(A)$ then, by the triangle inequality, $|w|\leq a$, and the equality holds if, and only if, all the terms of ${\boldsymbol{x}}^*A{\boldsymbol{x}}$ are collinear. We have that $aq \in W(A)$ if, and only if, any term of ${\boldsymbol{x}}^*A{\boldsymbol{x}}$ is in the real span of $q$, for some $q\in{\mathbb{S}}_{{\mathbb{H}}}$, that is, writing $a_{ij}=|a_{ij}|w_{ij}$ with $w_{ij}\in{\mathbb{S}}_{{\mathbb{H}}}$, $$\begin{cases} \beta_1\beta_2|a_{12}| z_1^* w_{12} z_2 & =\beta_1\beta_2|a_{12}| q \\ \beta_1\beta_3|a_{13}| z_1^* w_{13} z_3 & =\beta_1\beta_3|a_{13}| q\\ \beta_2\beta_3|a_{23}| z_2^* w_{23} z_3 & =\beta_2\beta_3|a_{23}| q \end{cases}$$ If $a_{13}a_{12}a_{23} \neq 0$ (i.e, $A$ has cycles) the system has solution only when $q=z_3^* w_{13}^*w_{12}w_{23} z_3$, for some $z_3\in{\mathbb{S}}_{{\mathbb{H}}}$. That is, only for those $q \in \big[w_{13}^*w_{12}w_{23}\big]$ can we reach the maximum $aq \in W(A) $. But then $W(A) $ is not circular since for $\tilde q\notin \big[w_{13}^*w_{12}w_{23}\big]$, $a \tilde q \notin W(A)$ but $a q \in W(A)$ for all $q \in \big[w_{13}^*w_{12}w_{23}\big]$. We conclude that when $A$ has cycles the numerical range is not circular. We will now obtain a similar result for the convexity of the numerical range of a matrix $A$ in ${\mathcal{M}}_3({\mathbb{H}})$, that is, $W(A)$ is convex, if and only if, $a^*_{13}a_{12}a_{23}\in{\mathbb{R}}$. The argument uses two known results: [@Ye1 Theorem 3], which says that $W(A)$ is convex if, and only if, $W(A)\cap {\mathbb{R}}=\pi_{{\mathbb{R}}}\big(W(A)\big)$, and that $W(A)\cap {\mathbb{R}}$ is a closed interval, see [@Ye1 Corollary 1]. Since the numerical range is connected [@R Theorem 3.10.7] we know that $\pi_{{\mathbb{R}}}\big(W(A)\big)=[m,M]$ with $$\begin{aligned} m & = \min \pi_{\mathbb{R}}(W(A)),\\ M & = \max \pi_{\mathbb{R}}(W(A)).\end{aligned}$$ Thus we can conclude that the numerical range is convex if, and only if, $W(A)\cap {\mathbb{R}}=[m,M]$. 
That is, $W(A)$ is convex if, and only if, there are ${\boldsymbol{v}}$ and ${\boldsymbol{\hat{v}}}$ in ${\mathbb{S}}_{{\mathbb{H}}^3}$, such that $$\begin{aligned} \label{v_hat{v}} {\boldsymbol{v}}^*A{\boldsymbol{v}}=M \,\,\, & \textrm{and}\,\,\,{\boldsymbol{\hat{v}}}^*A{\boldsymbol{\hat{v}}}=m.\end{aligned}$$ Necessarily, there are ${\boldsymbol{y}}, {\boldsymbol{\hat{y}}}\in {\mathbb{S}}_{{\mathbb{H}}^3}$, such that $M= \pi_{\mathbb{R}}({\boldsymbol{y}}^*A{\boldsymbol{y}})$ and $m= \pi_{\mathbb{R}}({\boldsymbol{\hat{y}}}^*A{\boldsymbol{\hat{y}}})$. The following lemma is preparatory to reach the conclusion in (\[v\_hat[v]{}\]). \[prep\_for\_general\] Let $A\in {\mathcal{M}}_3({\mathbb{H}})$ be a nilpotent matrix satisfying the following condition: $$\begin{aligned} \label{R-linearly_independent} \textrm{the two quaternions}\,\,\,a_{12},\, a_{13}a_{23}^*\,\,\, & \textrm{are}\,\,{\mathbb{R}}-\textrm{linearly independent}.\end{aligned}$$ Suppose that $\pi_{\mathbb{R}}({\boldsymbol{y}}^*A{\boldsymbol{y}})=M$. Then, $$y^*_1(a_{12}y_2+a_{13}y_3), y^*_2(a_{12}^*y_1+a_{23}y_3), (y^*_1 a_{13}+y^*_2 a_{23})y_3 \in {\mathbb{R}}\setminus\{0\}$$ To prove that $\omega=y^*_1(a_{12}y_2+a_{13}y_3)\in {\mathbb{R}}$ we start by writing $$\begin{aligned} \label{maximum projection} M=&\pi_{\mathbb{R}}(y^*_1(a_{12}y_2+a_{13}y_3)+y^*_2a_{23}y_3)\\ =&\pi_{\mathbb{R}}(\omega+y^*_2a_{23}y_3).\end{aligned}$$ There exists $z\in {\mathbb{S}}_{{\mathbb{H}}}$ such that $$z^* \omega=|\omega|\in{\mathbb{R}}$$ Taking now ${\boldsymbol{\tilde{y}}}=(y_1 z, y_2, y_3)\in {\mathbb{S}}_{{\mathbb{H}}^3}$, we have: $$\begin{aligned} M=\pi_{\mathbb{R}}({\boldsymbol{y}}^*A{\boldsymbol{y}}) \geq & \pi_{\mathbb{R}}({\boldsymbol{\tilde{y}}}^*A{\boldsymbol{\tilde{y}}}).\end{aligned}$$ Then, $$\begin{aligned} M=\pi_{\mathbb{R}}(\omega+y^*_2a_{23}y_3) \geq & \pi_{\mathbb{R}}(z^* \omega+y^*_2a_{23}y_3)\end{aligned}$$ and by ${\mathbb{R}}$-linearity of $\pi_{\mathbb{R}}$ we have $$\begin{aligned} \pi_{\mathbb{R}}(\omega) \geq & \pi_{\mathbb{R}}(z^* \omega)=|\omega|.\end{aligned}$$ We conclude that $\omega=|\omega|$ and so $\omega\in{\mathbb{R}}$. Now we prove that $y^*_1(a_{12}y_2+a_{13}y_3) \neq 0$ by assuming that $y^*_1(a_{12}y_2+a_{13}y_3) = 0$ and then finding a vector ${\boldsymbol{t}} \in {\mathbb{S}}_{{\mathbb{H}}^3}$ with ${\boldsymbol{t}}^*A{\boldsymbol{t}}> \pi_{{\mathbb{R}}}\big({\boldsymbol{y}}^*A{\boldsymbol{y}}\big)$, reaching a contradiction with ${\boldsymbol{y}}$ being the maximizer of $\pi_{{\mathbb{R}}}\big({\boldsymbol{x}}^*A{\boldsymbol{x}}\big)$ for ${\boldsymbol{x}} \in {\mathbb{S}}_{{\mathbb{H}}^3}$. Condition (\[R-linearly\_independent\]) implies that $a_{23}\neq 0$ and $a\equiv a_{12}+a_{13}\dfrac{a_{23}^*}{|a_{23}|}\neq 0 $. If $y^*_1(a_{12}y_2+a_{13}y_3) = 0$ then $$\pi_{{\mathbb{R}}}\big({\boldsymbol{y}}^*A{\boldsymbol{y}}\big)=y_2^*a_{23}y_3= \dfrac{|a_{23}|}{2}$$ The previous equality comes from the maximum of $f(\alpha_1,\alpha_2)=\alpha_1\alpha_2$ , subject to $\alpha_1^2+\alpha_2^2=1$, being $\dfrac{1}{2}$. Let $t_1=\beta_1 \in {\mathbb{R}}$, $t_2=\beta_2 \dfrac{a^*}{|a|}$ and $t_3=\dfrac{a_{23}^*}{|a_{23}|}t_2$, with $\beta_2=\dfrac{\sqrt{2}}{2}-\epsilon$ and $\beta_1^2+2\beta^2_2=1$. 
$$\begin{aligned} {\boldsymbol{t}}^*A{\boldsymbol{t}}-\dfrac{|a_{23}|}{2}&=\beta_1\beta_2|a|+\beta_2^2|a_{23}|-\dfrac{|a_{23}|}{2}\\ &=\sqrt{1-2\beta_2^2}\beta_2|a|+\big(\beta_2^2-\dfrac{1}{2}\big)|a_{23}|\\ &= \sqrt{2\sqrt{2}\epsilon-2\epsilon^2}\big(\dfrac{\sqrt{2}}{2}-\epsilon\big)|a|-\big(\sqrt{2}-\epsilon\big)\epsilon|a_{23}|\\ &=\epsilon \bigg\{ \sqrt{ \dfrac{2\sqrt{2}}{\epsilon}-2}\big(\dfrac{\sqrt{2}}{2}-\epsilon\big)|a|-\big(\sqrt{2}-\epsilon\big)|a_{23}|\bigg\}\end{aligned}$$ Clearly $\dfrac{1}{\epsilon} \Big({\boldsymbol{t}}^*A{\boldsymbol{t}}-\dfrac{|a_{23}|}{2}\Big)>0$ for very small and positive $\epsilon$. Thus, ${\boldsymbol{t}}^*A{\boldsymbol{t}}>\pi_{{\mathbb{R}}}\big({\boldsymbol{y}}^*A{\boldsymbol{y}}\big)$, and we have found a contradiction. The second case, $y^*_2(a_{12}^*y_1+a_{23}y_3)\in {\mathbb{R}}\setminus\{0\}$, follows from writing $M$ as follows $$\begin{aligned} \label{maximum projection} M=&\pi_{\mathbb{R}}(y^*_1a_{12}y_2+ y^*_1a_{13}y_3+y^*_2a_{23}y_3)\\ = &\pi_{\mathbb{R}}(y^*_1a_{12}y_2)+ \pi_{\mathbb{R}}(y^*_1a_{13}y_3)+\pi_{\mathbb{R}}(y^*_2a_{23}y_3)\\ = &\pi_{\mathbb{R}}(y^*_2a_{12}^*y_1)+ \pi_{\mathbb{R}}(y^*_1a_{13}y_3)+\pi_{\mathbb{R}}(y^*_2a_{23}y_3)\\ = &\pi_{\mathbb{R}}\big(y^*_2(a_{12}^*y_1+a_{23}y_3)\big)+ \pi_{\mathbb{R}}(y^*_1a_{13}y_3). \end{aligned}$$ Now we proceed as in the first case to find out that $y^*_2(a_{12}^*y_1+a_{23}y_3)\in {\mathbb{R}}$. To prove that $y^*_2(a_{12}^*y_1+a_{23}y_3)\neq 0$ we let this time $a\equiv a_{23}+a_{12}^*\dfrac{a_{13}}{|a_{13}|}$. From (\[R-linearly\_independent\]) we have that $a_{13}\neq 0$ and $a\neq 0$. We will find a contradiction, in the same way as in the first case, assuming that $y^*_2(a_{12}^*y_1+a_{23}y_3)=0$. In this case it must be that $\pi_{{\mathbb{R}}}\big({\boldsymbol{y}}^*A{\boldsymbol{y}}\big)= \dfrac{|a_{13}|}{2}$, and we will find a ${\boldsymbol{t}} \in {\mathbb{S}}_{{\mathbb{H}}^3}$ with ${\boldsymbol{t}}^*A{\boldsymbol{t}}> \dfrac{|a_{13}|}{2}$. Such ${\boldsymbol{t}}$ has $t_1=\dfrac{a_{13}}{|a_{13}|}t_3$, $t_2=\beta_2$, $t_3=\beta_3 \dfrac{a^*}{|a|}$, with $\beta_3=\dfrac{\sqrt{2}}{2}-\epsilon$ and $\beta_2^2+2\beta^2_3=1$. The rest of the proof proceeds just like the first case. The proof for case $3$ mimics the previous two. Now we will state and prove a necessary and sufficient condition for a nilpotent $3\times 3$ matrix to have convex numerical range. \[NSC 3x3 convex\] Let $A\in {\mathcal{M}}_3({\mathbb{H}})$ be a nilpotent matrix. Then, $W(A)$ is convex if, and only if, $a^*_{13}a_{12}a_{23}\in{\mathbb{R}}$. First, consider that $a_{13}^*a_{12}a_{23} \in {\mathbb{R}}$. The case where $a_{13}^*a_{12}a_{23}=0$ was dealt with in theorem \[NR\_3x3\_disk\]: the numerical range is circular and therefore convex. In the remaining cases, the matrix $A$ is unitarily equivalent to a real matrix, *i.e.* there exists a unitary matrix $U \in \mathcal{M}_n({\mathbb{H}})$ such that $U^*AU \in \mathcal{M}_n({\mathbb{R}})$. By [@CDM theorem 3.6], we know that the numerical range of any real matrix is convex, thus $W(A)=W(U^*AU)$ is convex. For the unitary matrix $U$, take the diagonal matrix ${{diag}}(\rho, z_{12}^*\rho, z_{13}^*\rho)$, where $\rho \in {\mathbb{S}}_{{\mathbb{H}}}$ and $z_{ij} \in {\mathbb{S}}_{{\mathbb{H}}}$ are such that $a_{ij}=|a_{ij}|z_{ij}$. It is now a matter of simple calculations, using that $z_{13}^*z_{12}z_{23}=\pm1$, to check that $U^*AU \in \mathcal{M}_n({\mathbb{R}})$.
Now we consider the converse implication, that is, if $W(A)$ is convex then $a^*_{13}a_{12}a_{23}\in {\mathbb{R}}$. If $a_{12},\,a_{13}a_{23}^*$ are ${\mathbb{R}}$-linearly dependent we easily see that $a_{13}a_{23}^*a_{12}^* \in {\mathbb{R}}$ and, since $\pi_{{\mathbb{R}}}(ab)=\pi_{{\mathbb{R}}}(ba)$, then $a_{13}^*a_{12}a_{23} \in {\mathbb{R}}$. Therefore, we can assume that $a_{12},\,a_{13}a_{23}^*$ are ${\mathbb{R}}$-linearly independent and, by lemma \[prep\_for\_general\], conclude that $$\label{equation_y2_y_3} y^*_1(a_{12}y_2+a_{13}y_3), y^*_2(a_{12}^*y_1+a_{23}y_3), (y^*_1 a_{13}+y^*_2 a_{23})y_3\in {\mathbb{R}}\setminus\{0\} .$$ Hence, for some $\alpha_1, \alpha_2, \alpha_3\in {\mathbb{R}}\setminus\{0\}$ we can write $$\label{system} \left\{ \begin{array}{c} y_1= \alpha_1 (a_{12}y_2+a_{13}y_3) \\ y_2= \alpha_2 (a_{12}^*y_1+a_{23}y_3)\\ y_3= \alpha_3 (a_{13}^*y_1+a_{23}^*y_2) \end{array} \right.$$ Substituting $y_1$ in the second equation, we get $$\label{eq1} (1-\alpha_1\alpha_2|a_{12}|^2)y_2=\alpha_2(\alpha_1 a_{12}^*a_{13}+a_{23})y_3.$$ Suppose $1-\alpha_1\alpha_2|a_{12}|^2\neq 0$. We have $$y_2=r_2(\alpha_1 a_{12}^*a_{13}+a_{23})y_3, \quad \text{where} \quad r_2=\frac{\alpha_2}{1-\alpha_1\alpha_2|a_{12}|^2}.$$ Therefore, for ${\boldsymbol{y}}=(y_1, y_2, y_3)\in {\mathbb{S}}_{{\mathbb{H}}^3}$ such that $M=\pi_{\mathbb{R}}({\boldsymbol{y}}^*A{\boldsymbol{y}})$, $$\begin{aligned} {\boldsymbol{y}}^*A{\boldsymbol{y}} & ={y}^*_1(a_{12}{y}_2+a_{13}{y}_3)+{y}^*_2a_{23}{y}_3 \nonumber \\ & ={y}^*_1(a_{12}{y}_2+a_{13}{y}_3)+ r_2 |a_{23}|^2 |{y}_3|^2 + r_2\alpha_1 {y}_3^*a_{13}^*a_{12}a_{23}{y}_3. \label{eq2}\end{aligned}$$ Notice that the first two terms of (\[eq2\]) are real. Since $W(A)$ is convex, by [@Ye1 theorem 3], $ M= {\boldsymbol{y}}^*A{\boldsymbol{y}} \in W(A)\cap {\mathbb{R}}$. Thus, the term ${y}_3^*a_{13}^*a_{12}a_{23}{y}_3$ is also real. This only happens if $a_{13}^*a_{12}a_{23} \in {\mathbb{R}}$ (since ${y}_3^*a_{13}^*a_{12}a_{23}{y}_3=|y_3|^2\frac{{y}_3^*}{|y_3|}a_{13}^*a_{12}a_{23}\frac{{y}_3}{|y_3|}$ and $\frac{{y}_3^*}{|y_3|}a_{13}^*a_{12}a_{23}\frac{{y}_3}{|y_3|}\sim a_{13}^*a_{12}a_{23}$) or $y_3=0$. The case $y_3=0$ can be ruled out because then $(y^*_1 a_{13}+y^*_2 a_{23})y_3=0$ and this contradicts (\[equation\_y2\_y\_3\]). If $1-\alpha_1\alpha_2|a_{12}|^2=0$ and since $y_3\neq 0$, from (\[eq1\]) $\alpha_1a^*_{12}a_{13}+a_{23}=0$. It follows that $a^*_{13}a_{12}a_{23}\in{\mathbb{R}}$. We finish with a simple example. Let $A=\begin{pmatrix}0 & i & j \\ 0 & 0 & k \\ 0 & 0 & 0 \end{pmatrix}$. Since $(-j)ik=-1$, $W(A)$ is convex and noncircular. [99]{} Y. Au-Yeung , *On the convexity of the numerical range in quaternionic Hilbert space*, Linear and Multilinear Algebra, **16** (1984), 93–100. Y. Au-Yeung , *A short proof of a theorem on the numerical range of a normal quaternionic matrix*, Linear and Multilinear Algebra, **39:3** (1995), 279–284. C. Berge, *Topological Spaces*, Dover, 1997. L. Carvalho, C. Diogo, S. Mendes, *A bridge between quaternionic and complex numerical ranges*. (To appear in Linear Algebra and its Applications). M-T. Chien, B-S. Tam, *Circularity of the numerical range*, Linear Algebra and its Applications, **201** (1994), 113–133. R. Diestel, *Graph Theory*, 4th edition, Springer, 2010. C. Edwards, *Advanced Calculus of Several Variables*, Dover, 1994. K. Gustafson, D. Rao, *Numerical Range*, Springer-Verlag, New York, 1997. R. Kippenhahn, *On the numerical range of a matrix*, Translated from the German by Paul F. Zachlin and Michiel E. 
Hochstenbach. Linear Multilinear Algebra, **56:1-2** (2008), 185-225. V. Matache, M. Matache, *When is the numerical range of a nilpotent matrix circular?*, Applied Mathematics and Computation, **216(1)** (2010), 269–275. L. Rodman, *Topics in Quaternion Linear Algebra*, Princeton University Press, 2014. W. So, R. C. Thompson, *Convexity of the upper complex plane part of the numerical range of a quaternionic matrix*, Linear and Multilinear Algebra, **41** (1996), 303–365. W. So, R. C. Thompson, F. Zhang, *The numerical range of normal matrices with quaternion entries*, Linear and Multilinear Algebra, **37** (1994), 175–195. F. Zhang, *Quaternions and matrices of quaternions*, Linear Algebra and its Applications, **251** (1997), 21–57. [^1]: The second author was partially supported by FCT through project UID/MAT/04459/2013 and the third author was partially supported by FCT through CMA-UBI, project PEst-OE/MAT/UI0212/2013. [^2]: If $x\neq 0$, $z$ is uniquely defined. However, if $x = 0$ that is not the case and we take $z=1$.
--- abstract: 'We have investigated the current-induced spin transfer torque of a ferromagnet-insulator-ferromagnet tunnel junction by taking the spin-flip scatterings into account. It is found that the spin-flip scattering can induce an additional spin torque, enhancing the maximum of the spin torque and giving rise to an angular shift compared to the case when the spin-flip scatterings are neglected. The effects of the molecular fields of the left and right ferromagnets on the spin torque are also studied. It is found that $\tau ^{Rx}/I_{e}$ ($\tau ^{Rx}$ is the spin-transfer torque acting on the right ferromagnet and $I_{e}$ is the tunneling electrical current) does vary with the molecular fields. At two particular angles, $\tau ^{Rx}/I_{e}$ is independent of the molecular field of the right ferromagnet, resulting in two crossing points in the curve of $\tau ^{Rx}/I_{e}$ versus the relevant orientation for different molecular fields.' address: | Department of Physics, The Graduate School of the Chinese Academy of\ Sciences, P.O. Box 3908, Beijing 100039, China author: - 'Zhen-Gang Zhu, Gang Su$^{\ast }$, Biao Jin, and Qing-Rong Zheng' title: 'Spin-Flip Scattering Effect on the Current-Induced Spin Torque in Ferromagnet-Insulator-Ferromagnet Tunnel Junctions' --- The spin-polarized transport in multilayer structures exhibits new effects such as the giant magnetoresistance [@wolf] (GMR), the spin transfer effect [@slonczewski], and so on. How to use the spin degree of freedom of electrons in ferromagnetic materials to construct new devices is at present a focus in the field of spintronics. Spin-polarized electrons flowing from one ferromagnetic layer into another layer in which the molecular field deviates by an angle may transfer their angular momentum to the local angular momentum of the ferromagnetic layer, thereby exerting a torque on the magnetic moments (see e.g. Refs. [@slonczewski; @s2; @tsoi; @sun; @myers; @katine; @fnf; @waintal]). This phenomenon is usually called the spin transfer effect[@slonczewski]. The torques in the plane spanned by ${\bf s}_{1}$ and ${\bf s}_{2}$, where ${\bf s}_{1}$ and ${\bf s}_{2}$ are the spin moments in the left and right ferromagnets, are normally called the dynamic nonequilibrium spin torques[@waintal]. Spin transfer motion of ${\bf s}_{1}$ and ${\bf s}_{2}$ within their spanned plane is different from a spin precession like $\partial {\bf s}_{1}/\partial t=\hbar J{\bf s}_{1}\times {\bf s}_{2}$ out of the spanned plane, which describes the conventional exchange coupling [@waintal2; @erickson]. Therefore, the spin transfer effect causes new physical phenomena in magnetic multilayer structures. When the current is large enough, it could switch the magnetic states of the local angular momentum. Such a current-induced change of the magnetic state has been observed in several experiments (see e.g. Refs.[@tsoi; @sun; @myers; @katine]). As a result, the spin transfer effect may provide a mechanism for a current-controlled magnetic memory element. To deal with the spin transfer effect, it is useful to introduce concepts such as the spin current and the spin torque to describe the coupling between the conduction electrons and the magnetic moments of ferromagnetic materials. These concepts were first proposed by Slonczewski [@s2] based on a quantum-mechanical model for ferromagnet-insulator-ferromagnet (FM-I-FM) junctions.
Then, the concepts were extended to structures such as ferromagnet-normal metal-ferromagnet (FM-NM-FM) junctions [@fnf; @waintal], ferromagnet-superconductor-ferromagnet junctions [@fsf], and trilayer FM-NM-FM structures in contact with a normal metal lead or a superconductor lead[@waintal1; @waintal2], etc., showing that the investigation of spin torques in magnetic junctions has been receiving much attention. Many works concerning the spin torque have so far been presented for FM-NM-FM structures, whereas the work on FM-I-FM structures is still sparse. In particular, when electrons tunnel through the insulator barrier, spin-flip scattering may occur[@moodera; @vedyayev]. The spin-flip electrons feel a different torque with respect to the non-flip electrons, and could exert an additional torque on the ferromagnet. Consequently, this additional torque induced by the spin-flip electrons may play a role as a dynamic spin transfer torque. In this paper, we shall use the nonequilibrium Green function technique[@haug] to investigate the spin-flip scattering effect on the current-induced spin transfer torque in FM-I-FM tunnel junctions. The system of interest is composed of two ferromagnets, each extending to infinity, separated by a thin insulator, as illustrated in Fig.1. The molecular field in the left ferromagnet is assumed to align along the $z$ axis, which lies in the junction plane, while the orientation of the molecular field in the right ferromagnet, along the $z^{\prime }$ axis, deviates from the $z$ axis by an angle $\theta $. The electrons flow along the $x$ axis, which is perpendicular to the junction plane. The Hamiltonian of the system is $$H=H_{L}+H_{R}+H_{T}, \label{Thamiltonian}$$ with $$\begin{aligned} H_{L} &=&\sum_{k\sigma }\varepsilon _{k\sigma }^{L}a_{k\sigma }^{\dagger }a_{k\sigma }, \label{respectH} \\ H_{R} &=&\sum_{q\sigma }[(\varepsilon _{R}({\bf q})-\sigma M_{2}\cos \theta )c_{q\sigma }^{\dagger }c_{q\sigma }-M_{2}\sin \theta c_{q\sigma }^{\dagger }c_{q\overline{\sigma }}], \nonumber \\ H_{T} &=&\sum_{kq\sigma \sigma ^{\prime }}[T_{kq}^{\sigma \sigma ^{\prime }}a_{k\sigma }^{\dagger }c_{q\sigma ^{\prime }}+T_{kq}^{\sigma \sigma ^{\prime }}{}^{\ast }c_{q\sigma ^{\prime }}^{\dagger }a_{k\sigma }], \nonumber\end{aligned}$$ where $a_{k\sigma }$ and $c_{q\sigma }$ are annihilation operators of electrons with momentum $k$ ($q$) and spin $\sigma $ $(=\pm 1)$ in the left (right) ferromagnet, respectively, $\varepsilon _{k\sigma }^{L}=\varepsilon _{L}({\bf k})-eV-\sigma M_{1},$ $M_{1}=\frac{g\mu _{B}h_{L}}{2},$ $M_{2}=\frac{g\mu _{B}h_{R}}{2},$ $g$ is the Landé factor, $\mu _{B}$ is the Bohr magneton, $h_{L(R)}$ is the molecular field of the left (right) ferromagnet, $\varepsilon _{L(R)}({\bf k})$ is the single-particle dispersion of the left (right) FM electrode, $V$ is the applied bias voltage, and $T_{kq}^{\sigma \sigma ^{\prime }}$ denotes the spin- and momentum-dependent tunneling amplitude through the insulating barrier. Note that the spin-flip scattering is included in $H_{T}$ when $\sigma ^{\prime }=\bar{\sigma}=-\sigma $. It is this term that violates the spin conservation in the tunneling process. With the system defined above, let us now consider the spin torques exerted on the magnetic moments in the [*right*]{} FM electrode of this magnetic tunnel junction.
The spin torques, namely the time evolution rate of the total spin of the left or the right ferromagnet, can be obtained from $\frac{\partial }{\partial t}\langle {\bf s}_{1,2}(t)\rangle =\frac{i}{\hbar }\langle \lbrack H, {\bf s}_{1,2}(t)]\rangle .$ In Refs.[@waintal; @waintal1], the spin torques are defined by considering the momentum conservation $\partial {\bf s}_{1}/\partial t={\bf I}(-\infty )-{\bf I}(0)$[@slonczewski], where ${\bf I}$ is the spin current (whose definition can be found in Refs.[@waintal2; @fnf]). Because of the spin-dependent scatterings caused by the local exchange field inside the ferromagnets, the spin current is no longer conserved inside the ferromagnets. Since the total spin is conserved, the lost spin current is transferred to the local magnetic moments, thereby giving rise to a torque exerted on the local magnetic moments of the ferromagnets. So the nonconservation of the nonequilibrium spin current leads to a current-induced nonequilibrium torque [@waintal2]. We may consider this issue in another way, i.e., by investigating the evolution rate of the total spin of the ferromagnets. In doing so, one must be careful to identify the spin torques implied by the total Hamiltonian. In this model one may see that the right ferromagnet experiences two types of torques: one is the equilibrium torque caused by the spin-dependent potential (i.e., the magnetic exchange interaction), and the other comes from the electrons tunneling through the insulating barrier from the left side. The latter can be obtained from the tunneling term $H_{T}$, and is nothing but the current-induced spin transfer torque in the plane spanned by ${\bf h}_{L}$ and ${\bf h}_{R}$. When the applied bias is absent, the left and right ferromagnets will only undergo the spin torque caused by the spin-dependent potential, and the current-induced torques will not appear. Therefore, we can define the current-induced spin transfer torque as ${\bf \tau }=\frac{i}{\hbar }\langle \lbrack H_{T}, {\bf s}_{2}(t)]\rangle $. Note that a similar expression in a current matrix form is introduced in Ref.[@fsf]. The equilibrium torques caused by a spin-dependent potential will not be considered here. The total spin of the right ferromagnet is $${\bf s}_{2}(t)=\frac{\hbar }{2}\sum_{k\mu \nu }c_{k\mu }^{\dagger }c_{k\nu }({\bf R}^{-1}\chi _{\mu })^{\dagger }\stackrel{\wedge }{{\bf \sigma }}({\bf R}^{-1}\chi _{\nu }), \label{totalsp}$$ where ${\bf R}=\left( \begin{array}{cc} \cos \frac{\theta }{2} & -\sin \frac{\theta }{2} \\ \sin \frac{\theta }{2} & \cos \frac{\theta }{2} \end{array} \right) $, $\stackrel{\wedge }{{\bf \sigma }}$ is the vector of Pauli matrices, and $\chi _{\mu (\nu )}$ are spin states. Note that Eq. (\[totalsp\]) is written in the $xyz$ coordinate frame while the spins ${\bf s}_{2}$ are quantized in the $x^{\prime }y^{\prime }z^{\prime }$ frame. From ${\bf \dot{s}}_{1,2}\sim I_{e}\widehat{s}_{1,2}\times (\widehat{s}_{1}\times \widehat{s}_{2})$, with $I_{e}$ the electrical current, we can judge that the direction of the spin transfer torque is along the $x^{\prime }$ direction in the $x^{\prime }y^{\prime }z^{\prime }$ coordinate frame.
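The $\cos \theta $ and $-\sin \theta $ structure appearing in the next step is precisely the rotation identity $({\bf R}^{-1})^{\dagger }\sigma _{x}{\bf R}^{-1}=\sigma _{x}\cos \theta -\sigma _{z}\sin \theta $ applied to the spinor sandwich of Eq. (\[totalsp\]). A minimal symbolic check of this identity (our own sketch, not part of the original derivation) is:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
c, s = sp.cos(theta/2), sp.sin(theta/2)
R = sp.Matrix([[c, -s], [s, c]])              # rotation matrix R of Eq. (totalsp)
Rinv = R.T                                    # R is orthogonal, so R^{-1} = R^T
sigma_x = sp.Matrix([[0, 1], [1, 0]])
sigma_z = sp.Matrix([[1, 0], [0, -1]])

lhs = Rinv.T * sigma_x * Rinv                 # (R^{-1})^dagger  sigma_x  R^{-1}
rhs = sp.cos(theta)*sigma_x - sp.sin(theta)*sigma_z
print(sp.simplify(lhs - rhs))                 # -> Matrix([[0, 0], [0, 0]])
```

The off-diagonal entries of the right-hand side supply the $\cos \theta $ term and the diagonal entries the $-\sigma \sin \theta $ term of the expression quoted below.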
We can further write the spin transfer torque in Eq.(\[totalsp\]) as ${\bf s}_{2}(t)=\frac{\hbar }{2}\sum_{k\sigma }(c_{k\sigma }^{\dagger }c_{k\overline{\sigma }}\cos \theta -\sigma c_{k\sigma }^{\dagger }c_{k\sigma }\sin \theta )=s_{2x^{\prime }0}\cos \theta -s_{2z^{\prime }0}\sin \theta ,$ where $s_{2x^{\prime }0}$ and $s_{2z^{\prime }0}$ are the $x^{\prime }$- and $z^{\prime }$-components of the total spin in the $x^{\prime }y^{\prime }z^{\prime }$ coordinate frame in which the spins ${\bf s}_{2}$ are quantized. Hence, the current-induced spin transfer torque can be obtained: $$\tau ^{Rx^{\prime }}=-\cos \theta \mathop{\rm Re}\sum_{kq}\int \frac{d\varepsilon }{2\pi }Tr_{\sigma }[{\bf G}_{kq}^{<}(\varepsilon )\stackrel{\wedge }{\sigma }_{1}{\bf T}^{\dagger }]+\sin \theta \mathop{\rm Re}\sum_{kq}\int \frac{d\varepsilon }{2\pi }Tr_{\sigma }[{\bf G}_{kq}^{<}(\varepsilon )\stackrel{\wedge }{\sigma }_{3}{\bf T}^{\dagger }], \label{torquex}$$ where $\stackrel{\wedge }{\sigma }_{1}=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) ,$ $\stackrel{\wedge }{\sigma }_{3}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) $ are Pauli matrices, ${\bf T}=\left( \begin{array}{cc} T_{1} & T_{2} \\ T_{3} & T_{4} \end{array} \right) $ with the elements $T_{i}$ ($i=1,...,4$) being the tunneling amplitudes which are for simplicity assumed to be independent of $k$ and $q$ (namely, $T^{\uparrow \uparrow }$, $T^{\uparrow \downarrow }$, $T^{\downarrow \uparrow }$, $T^{\downarrow \downarrow }$, respectively), $Tr_{\sigma }$ stands for the trace of the matrix taken over the spin space, and ${\bf G}_{kq}^{<}(\varepsilon )$ is the lesser Green function in spin space defined as $${\bf G}_{kq}^{<}(\varepsilon )=\left( \begin{array}{cc} G_{kq}^{\uparrow \uparrow ,<}(\varepsilon ) & G_{kq}^{\downarrow \uparrow ,<}(\varepsilon ) \\ G_{kq}^{\uparrow \downarrow ,<}(\varepsilon ) & G_{kq}^{\downarrow \downarrow ,<}(\varepsilon ) \end{array} \right) , \label{lesserGF}$$ with $G_{kq}^{\sigma \sigma ^{\prime },<}(\varepsilon )=\int dt\,e^{i\varepsilon (t-t^{\prime })}G_{kq}^{\sigma \sigma ^{\prime },<}(t-t^{\prime }),$ and $G_{kq}^{\sigma \sigma ^{\prime },<}(t-t^{\prime })\equiv i\langle c_{q\sigma }^{\dagger }(t^{\prime })a_{k\sigma ^{\prime }}(t)\rangle .$ By using the nonequilibrium Green function technique[@zhu], we can obtain the torque, $\tau ^{Rx^{\prime }}$, to the first order in the Green function: $$\tau ^{Rx^{\prime }}=\pi \int d\varepsilon \lbrack f(\varepsilon +eV)-f(\varepsilon )]Tr_{\sigma }[{\bf \Lambda }(\stackrel{\wedge }{\sigma }_{1}\cos \theta -\stackrel{\wedge }{\sigma }_{3}\sin \theta )], \label{xt1}$$ where $f(x)$ is the Fermi function, the matrix ${\bf \Lambda }$ is defined as ${\bf \Lambda }={\bf T}^{\dagger }{\bf D}_{L}(\varepsilon +eV){\bf T}{\bf R}{\bf D}_{R}(\varepsilon ){\bf R}^{\dagger }=\left( \begin{array}{cc} \Lambda _{1} & \Lambda _{2} \\ \Lambda _{3} & \Lambda _{4} \end{array} \right) ,$ and ${\bf D}_{L(R)}=\left( \begin{array}{cc} D_{L(R)\uparrow } & 0 \\ 0 & D_{L(R)\downarrow } \end{array} \right) $ whose elements $D_{L(R)\uparrow (\downarrow )}(\varepsilon )=D_{L(R)}(\varepsilon \pm M_{1(2)})$ are the densities of states (DOS) of electrons with spin up and down in the left (right) ferromagnet, respectively. After some algebra, we have $$\tau ^{Rx^{\prime }}=\frac{\pi }{2}\int d\varepsilon \lbrack f(\varepsilon )-f(\varepsilon +eV)](D_{R\uparrow }+D_{R\downarrow })\Gamma _{1}^{L}(P_{1}\sin \theta -P_{3}\cos \theta ), \label{troque1}$$ where $P_{1}=\frac{D_{L\uparrow }(T_{1}^{2}-T_{2}^{2})-D_{L\downarrow }(T_{4}^{2}-T_{3}^{2})}{D_{L\uparrow }(T_{1}^{2}+T_{2}^{2})+D_{L\downarrow }(T_{3}^{2}+T_{4}^{2})},$ $P_{3}=\frac{2(D_{L\uparrow }T_{1}T_{2}+D_{L\downarrow }T_{3}T_{4})}{D_{L\uparrow }(T_{1}^{2}+T_{2}^{2})+D_{L\downarrow }(T_{3}^{2}+T_{4}^{2})},$ and $\Gamma _{1}^{L}=D_{L\uparrow }(T_{1}^{2}+T_{2}^{2})+D_{L\downarrow }(T_{3}^{2}+T_{4}^{2}).$ It is interesting to note from the above equation that the direction of the spin torque is closely related to whether the applied bias is positive or negative, namely, it depends strongly on the direction of the electrical current, in agreement with the previous observations [@slonczewski; @tsoi; @sun; @myers; @katine]. To compare the results with those reported in Refs. [@slonczewski; @waintal], we can also consider the spin torque per unit current, i.e. $\tau ^{Rx^{\prime }}/I_{e}=\left\langle \frac{1}{G}\frac{\partial \tau ^{Rx^{\prime }}}{\partial V}\right\rangle ,$ where $G$ is the tunneling conductance \[see Eq. (12) in Ref. [@zhu]\] and $V$ is the applied bias. Then we obtain $$\frac{\tau ^{Rx^{\prime }}}{I_{e}}=\frac{\hbar }{e}\frac{P_{1}\sin \theta -P_{3}\cos \theta }{1+P_{2}(P_{1}\cos \theta +P_{3}\sin \theta )}, \label{torqandcurr}$$ where $P_{2}=\frac{D_{R\uparrow }-D_{R\downarrow }}{D_{R\uparrow }+D_{R\downarrow }}$ is the polarization of the right ferromagnet, and the energy is taken at the Fermi level. To gain deeper insight into the effect of the spin-flip scatterings on the spin transfer torque, we ought to invoke numerical calculations. Before presenting the calculated results, we shall presume a parabolic dispersion for the band electrons, based on which the DOS of the conduction electrons is calculated. The Fermi energy and the molecular field will be taken as $E_{f}=1.295$ eV and $\left| {\bf h}_{1}\right| =\left| {\bf h}_{2}\right| =0.90$ eV, which are the values given in Ref. [@moodera1] for Fe. In addition, we may for convenience introduce two parameters $\gamma _{1}=T_{2}/T_{1}$ and $\gamma _{2}=T_{3}/T_{1}$, and assume $T_{1}=T_{4}$. Now let us first look at the case without the spin-flip scatterings. The leading contribution to the torque $\tau ^{Rx^{\prime }}$ comes from the first nonvanishing term. When the spin-flip scatterings are absent, i.e. $T_{2}=T_{3}=0,$ we find that $\frac{\partial \tau ^{Rx^{\prime }}}{\partial V}=\frac{e\pi }{2}(D_{R\uparrow }+D_{R\downarrow })(D_{L\uparrow }+D_{L\downarrow })T_{1}^{2}\overline{P}_{1}\sin \theta $ with $\overline{P}_{1}=(D_{L\uparrow }-D_{L\downarrow })/(D_{L\uparrow }+D_{L\downarrow }),$ which is consistent with that obtained in Refs.[@slonczewski; @s2]. This result shows clearly that $\tau ^{Rx^{\prime }}$ vanishes when the relative alignment of the magnetizations of the two ferromagnets is parallel ($\theta =0$) or antiparallel ($\theta =\pi $). It is similar to that for a FM-NM-FM trilayer system discussed in Ref.[@waintal], although the transport mechanisms are different.
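Equation (\[torqandcurr\]) is simple enough to evaluate directly. The Python sketch below is our own illustration rather than the authors' code: it assumes free-electron densities of states with exchange splitting $\pm \left| {\bf h}\right| $, i.e. $D_{\uparrow (\downarrow )}\propto \sqrt{E_{f}\pm 0.90\ \mathrm{eV}}$, evaluated at the Fermi level with the parameter values quoted above (the overall DOS and $T_{1}$ scales cancel in $\tau ^{Rx^{\prime }}/I_{e}$, so only the ratios matter).

```python
import numpy as np

# Parameter values quoted in the text (our free-electron reading of the DOS)
E_f, h = 1.295, 0.90                     # Fermi energy and molecular field, in eV
gamma1, gamma2 = 0.05, 0.05              # spin-flip ratios T2/T1 and T3/T1
T1 = T4 = 1.0                            # overall tunneling scale drops out of tau/I_e
T2, T3 = gamma1 * T1, gamma2 * T1

def dos(E):                              # parabolic-band density of states ~ sqrt(E)
    return np.sqrt(max(E, 0.0))

DLu, DLd = dos(E_f + h), dos(E_f - h)    # left electrode, spin up / spin down
DRu, DRd = dos(E_f + h), dos(E_f - h)    # right electrode (equal molecular fields)

Gamma1L = DLu*(T1**2 + T2**2) + DLd*(T3**2 + T4**2)
P1 = (DLu*(T1**2 - T2**2) - DLd*(T4**2 - T3**2)) / Gamma1L
P2 = (DRu - DRd) / (DRu + DRd)
P3 = 2.0*(DLu*T1*T2 + DLd*T3*T4) / Gamma1L

theta = np.linspace(-np.pi, np.pi, 721)
tau_per_Ie = (P1*np.sin(theta) - P3*np.cos(theta)) / \
             (1.0 + P2*(P1*np.cos(theta) + P3*np.sin(theta)))   # in units of hbar/e

theta_f = np.degrees(np.arctan2(P3, P1))     # angular shift, tan(theta_f) = P3/P1
print(f"P1 = {P1:.3f}, P2 = {P2:.3f}, P3 = {P3:.3f}, theta_f = {theta_f:.1f} deg")
```

With these assumptions, $\gamma _{1}=\gamma _{2}=0.05$ gives an angular shift of about $14^{\circ }$ and $\gamma _{1}=\gamma _{2}=0.15$ gives about $37^{\circ }$, close to the zero-crossing and crossing-point angles quoted below for Figs. 3 (a) and 4, while setting $\gamma _{1}=\gamma _{2}=0$ makes $P_{3}$ vanish, so the numerator reduces to the pure $\sin \theta $ form of the no-spin-flip limit.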
The present result can be easily understood because the spin-polarized electrons along the $z$ or $-z$ axis cannot feel the spin transfer torque, owing to the property of $\widehat{s}_{1,2}\times (\widehat{s}_{1}\times \widehat{s}_{2})$. In Fig. 2, we show the $\theta $ dependence of the spin torque $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$ in the absence of spin-flip scatterings for different polarizations. One may see that $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$ is a nonmonotonic function of $\theta $, and shows minima and maxima at certain relative alignments. The curve of $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$ versus $\theta $ is inversion-symmetric about $\theta =0$, i.e., it is an odd function of $\theta $. It is evident that the larger the polarization, the stronger the spin transfer torques. This observation is in good agreement with the finding in Ref.[@slonczewski], though the latter is obtained on the basis of a quite different method. The $\theta $-dependence of $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$ for $\gamma _{1}=\gamma _{2}$ is shown in Fig. 3 (a). It can be seen that the spin-flip scatterings can lead to a nonvanishing spin torque at $\theta =0$ or $\pi ,$ i.e. $\frac{\partial \tau ^{Rx^{\prime }}}{\partial V}=\mp e\pi (D_{R\uparrow }+D_{R\downarrow })T_{1}^{2}(\gamma _{1}D_{L\uparrow }+\gamma _{2}D_{L\downarrow }),$ which differs from the case when the spin-flip scatterings are neglected. However, in the present case with $\gamma _{1}=\gamma _{2}=0.05,$ at a particular angle, e.g. $\theta =14.04^{\circ }$, $\tau ^{Rx^{\prime }}$ becomes zero. This suggests that the spin-flip scattering can lead to an angular shift of the spin torque. In other words, the spin-flip scattering gives rise to an additional torque, which we may call the [*spin-flip induced spin torque*]{} henceforth. In addition, if $\gamma _{1}=\gamma _{2}=\gamma ,$ one may find that $\frac{\partial \tau ^{Rx^{\prime }}}{\partial V}$ is proportional to $\gamma $. When $\gamma _{1}\neq \gamma _{2}$, which means that the spin-flip scatterings from the spin up band to the spin down band are different from those from down to up, $\gamma _{1}$ and $\gamma _{2}$ have different effects on the $\theta $-dependence of the spin torque, as shown in Fig. 3 (b). It is observed that there are angular shifts for different $\gamma _{1}$ and $\gamma _{2}$, showing that the effects of $\gamma _{1}$ and $\gamma _{2}$ are different. For instance, the curve for $\gamma _{1}=0.2$ and $\gamma _{2}=0.1$ moves to the right-hand side in comparison with the curve for $\gamma _{1}=0.1$ and $\gamma _{2}=0.2$. It appears that a larger spin-flip scattering from spin down to spin up has a larger effect on the spin torque. The effect of the molecular fields of the ferromagnets on the spin torques is also investigated. The $\theta $-dependences of $\tau ^{Rx^{\prime }}$ are shown in Fig. 4 for different values of the parameter $\alpha =\left| {\bf h}_{R}\right| /\left| {\bf h}_{L}\right| $. One may find that $\tau ^{Rx^{\prime }}/I_{e}$ is also a nonmonotonic function of $\theta $, and shows peaks at certain values of $\theta $ for the different $\alpha $'s. It is interesting to note that two crossing points, at $\theta _{1}=37^{\circ }$ and $\theta _{2}=127^{\circ }$, are observed for the curves with different molecular fields.
When $\theta <\theta _{1}$, a larger $\alpha $ leads to a smaller magnitude of the spin torque (which is negative); when $\theta _{1}<\theta <\theta _{2}$, the larger $\alpha ,$ the smaller the spin torque; while for $\theta >\theta _{2}$, a larger $\alpha $ leads to a larger magnitude of the spin torque, showing that $\left| {\bf h}_{R}\right| $ and $\left| {\bf h}_{L}\right| $ have different effects on the spin torques. The two crossing points appearing in the curves of $\tau ^{Rx^{\prime }}/I_{e}$ versus $\theta $ for different molecular fields can be understood in the following way. From Eq. (\[torqandcurr\]), one can get the relation $$\frac{\tau ^{Rx^{\prime }}}{I_{e}}=\frac{\hbar }{e}\frac{\sqrt{P_{1}^{2}+P_{3}^{2}}\sin (\theta -\theta _{f})}{1+P_{2}\sqrt{P_{1}^{2}+P_{3}^{2}}\cos (\theta -\theta _{f})}, \label{anothert}$$ where the angular shift $\theta _{f}$ is defined by $\tan \theta _{f}=\frac{P_{3}}{P_{1}}$[@zhu]. The angle $\theta _{1}$ at which the first crossing point appears is nothing but $\theta _{f}$ (i.e. $\theta _{1}=\theta _{f}=37^{\circ }$). At $\theta _{1}=\theta _{f}$, $\frac{\tau ^{Rx^{\prime }}}{I_{e}}=0$, which is independent of the molecular field of the right ferromagnet. The second crossing point appears at $\theta _{2}-\theta _{f}=\frac{\pi }{2}$ (i.e. $\theta _{2}=127^{\circ }$). At this angle $\theta _{2}$, $\frac{\tau ^{Rx^{\prime }}}{I_{e}}=\frac{\hbar }{e}\sqrt{P_{1}^{2}+P_{3}^{2}}$, which depends only on the parameters of the left ferromagnet and is independent of the molecular field of the right ferromagnet. Therefore, at these two particular alignments, $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$ gives the same value for different values of $\alpha $, thereby leading to the two crossing points. When the spin-flip scattering disappears, the two crossing points occur at $\theta =0$ and $\theta =\frac{\pi }{2}$. We thus find that the spin-flip scattering may lead to an additional spin torque on the magnetic moments of the right ferromagnet. The present observation may be readily understood, because the spin-flip tunneling of electrons from the left ferromagnet into the right ferromagnet would give rise to a change of the DOS of the conduction electrons as well as of the effective polarization factor in the right ferromagnet, leading to an additional torque exerted on the magnetic moments and thereby to the observed angular shift of the spin torque. In summary, we have investigated the current-induced spin torque in FM-I-FM tunnel junctions with the inclusion of spin-flip scattering by using the nonequilibrium Green function method. In the absence of the spin-flip scattering, our results are consistent with the previous results found in Refs.[@slonczewski; @s2; @waintal]. When the spin-flip scattering, a factor that can exist in realistic spin-based electronic devices, is considered, we have found that an additional spin torque is induced. It is found that the spin-flip scattering can enhance the maximum of the current-induced spin transfer torque, giving rise to an angular shift. The effects of the molecular fields of the left and right ferromagnets on the spin torques are also studied. It can be observed that the spin torque per unit tunneling current exerted on the right ferromagnet is independent of the molecular field of the right ferromagnetic lead at $\theta =\theta _{f}$ and $\theta -\theta _{f}=\frac{\pi }{2}$.
The present study shows that the spin-flip scatterings during the tunneling process in magnetic tunnel junctions have indeed remarkable influences on the dynamics of the magnetic moments. Since the spin-transfer torque induced by the applied current can switch the magnetic domains between different orientations[@myers], people can invoke this property to fabricate a current-controlled magnetic memory element. As the previous studies on the spin torque usually ignore the effect of the spin-flip scatterings which could not be avoided in practice, our investigation might offer a supplement to the previous studies, namely, when people design a device based on a mechanism of the spin transfer torque, one should take the additional torque induced by the spin-flip scatterings into account, which could complement with the experiments. Acknowledgments {#acknowledgments .unnumbered} =============== This work is supported in part by the National Science Foundation of China (Grant No. 90103023, 10104015), the State Key Project for Fundamental Research in China, and by the Chinese Academy of Sciences. $^{\ast }$Corresponding author. E-mail: [email protected]. S. A. Wolf, et al, Science [**294,**]{} 1488 (2001). J. C. Slonczewski, J. Magn. Magn. Mater. [**159**]{}, L1 (1996). J. C. Slonczewski, Phys. Rev. B [**39,**]{} 6995 (1989). M. Tsoi et al, Phys. Rev. Lett. [**80,**]{} 4281 (1998). J. Z. Sun, J. Magn. Magn. Mater. [**202,**]{} 157 (1999). E. B. Myers et al, Science [**285**]{}, 867 (1999). J. A. Katine, F. J. Albert, R. A. Buhrman, E. B. Myers, and D. C. Ralph, Phys. Rev. Lett. [**84,**]{} 3149 (2000). K. B. Hathaway and J. R. Cullen, J. Magn. Magn. Mater. [**104-107,**]{} 1840 (1992); A. Brataas, Yu. V. Nazarov, and G. E. W. Bauer, Eur. Phys. J. B [**22,**]{} 99 (2001). X. Waintal, E. B. Myers, P. W. Brouwer, and D. C. Ralph, Phys. Rev. B [**62,** ]{}12317 (2000). Y. Tserkovnyak and A. Brataas, Phys. Rev. B [**65,** ]{}094517 (2002). X. Waintal and P. W. Brouwer, Phys. Rev. B [**63,** ]{}220407 (2001). X. Waintal and P. W. Brouwer, Phys. Rev. B [**65,** ]{}054407 (2002) . R. P. Erickson, K. B. Hathaway, and J. R. Cullen, Phys. Rev. B [**47,**]{} 2626 (1993); J. C. Slonczewski, J. Magn. Magn. Mater. [**126,**]{} 374 (1993). J. S. Moodera, J. Nassar and G. Mathon, Annu. Rev. Sci. [**29,**]{} 381 (1999). A. Vedyayev, R. Vlutters, N. Ryzhanova, J. C. Lodder, and B. Dieny, Eur. Phys. J. B [**25,**]{} 5 (2002). H. Haug, and A. -P. Jauho,[ ]{}[*Quantum Kinetics in Transport and Optics of Semiconductors*]{} (Springer-Verlag, Berlin, 1998). Zhen-Gang Zhu, Gang Su, Qing-Rong Zheng, and Biao Jin, Phys. Lett. A [**300,**]{} 658 (2002). J. S. Moodera, M. E. Taylor, and R. Meservey, Phys. Rev. B [**40,** ]{}11980 (1989). [**FIGURE CAPTIONS**]{} Fig. 1 A schematic illustration of the spin-transfer torque with spin-flip scatterings in a $FIF$ tunnel junction. Fig. 2 Spin transfer torque $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$versus angle $\theta $ in the absence of the spin-flip scatterings (i.e.$\gamma _{1}=\gamma _{2}=0$). Equal polarization factors of the magnets are assumed ($P_{1}=P_{2}$). Torque per unit current is measured in unit of $\hbar /e.$ Fig. 3 The $\theta $ dependence of $\frac{\tau ^{Rx^{\prime }}}{I_{e}}$ for (a) $\gamma _{1}=\gamma _{2}$ and (b) $\gamma _{1}\neq \gamma _{2}$, where the effective masses of the left and right ferromagnets are taken as unity, the molecular fields are assumed to be $0.9$ eV, the Fermi energy is taken as $1.295$ eV, and $T_{1}=T_{4}$ $=0.01$ eV. 
Torque per unit current is measured in unit of $\hbar /e.$ Fig. 4 $\tau ^{Rx^{\prime }}/I_{e}$ versus angle $\theta $ for different molecular fields. Here we take $\gamma _{1}=\gamma _{2}=0.15$, $\left| {\bf h}_{1}\right| =0.9$ eV, and the other parameters are taken the same as those in Fig. 3. Torque per unit current is measured in unit of $\hbar /e.$
--- abstract: | By using an elegant response function theory, which does not require matching of the messy boundary conditions, we investigate the surface plasmon excitations in the multicoaxial cylindrical cables made up of negative-index metamaterials. The multicoaxial cables with [*dispersive*]{} metamaterial components exhibit rather richer (and complex) plasmon spectrum with each interface supporting two modes: one TM and the other TE for (the integer order of the Bessel function) $m \ne 0$. The cables with [*nondispersive*]{} metamaterial components bear a different tale: they do not support simultaneously both TM and TE modes over the whole range of propagation vector. The computed local and total density of states enable us to substantiate spatial positions of the modes in the spectrum. Such quasi-one dimensional systems as studied here should prove to be the milestones of the emerging optoelectronics and telecommunications systems.\ [*OCIS codes:*]{} 160.3918, 240.6680, 250.5403, 350.3618. address: | $^{1}$Department of Physics and Astronomy, Rice University, P.O. Box 1892, Houston, TX 77251, USA\ $^{2}$IEMN, UMR-CNRS 8520, UFR de Physique, University of Science and Technology of Lille I, 59655 Villeneuve d’Ascq Cedex, France author: - 'M. S. Kushwaha$^{1}$ and B. Djafari-Rouhani$^{2}$' title: 'Low-frequency surface plasmon excitations in multicoaxial negative-index metamaterial cables' --- INTRODUCTION ============ The surface plasmon is a well-defined excitation that can exist on an interface that separates a surface-wave active medium \[with $\epsilon < 0$\] from a surface-wave inactive \[with $\epsilon > 0$\] medium. It is characterized by the electromagnetic fields that are localized at and decay exponentially away from the interfaces into the bounding media. In a conventional system, and in the simplest physical situation, an interface supports one and only one confined mode associated with either p-polarization or s-polarization. It may sound somewhat exaggeration, but it seems to be true that the plasmon excitation, in classical as well as in quantum structures and both theoretically and experimentally, is the most exploited and best understood quasi-particle in condensed matter physics \[1\]. The recent research interest in surface plasmon optics has been invigorated by an experiment performed on the transmission of light through subwavelength holes in metal films \[2\]. This experiment has spurred numerous theoretical \[3-6\] as well experimental \[7-11\] works on similar structured surfaces: either perforated with holes, slits, dimples, or decorated with grooves. It has been argued that resonant excitation of surface plasmons creates huge electric fields at the surface that force the light through the holes, yielding very high transmission coefficients. The idea of tailoring the topography of a perfect conductor to support the surface waves resembling the behavior of the surface plasmons at optical frequencies was discussed in the context of a surface with an array of two-dimensional holes \[6\]. The experimental verification of this proposal has recently been reported \[12-14\] on the structured metamaterial surfaces which support surface plasmons at microwave frequencies. Because of their mimicking characteristics, these geometry-controlled surface waves were named [*spoof*]{} surface plasmons. Talking of the negative-index metamaterials reminds us of another hot subject that has been drawing immense attention of many research groups world-wide for the past few years. 
Proposed some four decades ago by Veselago \[15\], advocated by Sir John Pendry in 2000 \[16\], and practically realized by Smith and coworkers in 2001 \[17\], an artificially designed negative-index metamaterial, exhibiting simultaneously negative values of electrical permittivity $\epsilon(\omega)$ and magnetic permeability $\mu(\omega)$ and hence negative refractive index $n$, seems to have extended many basic notions related with the electromagnetism. It forms a left-handed medium, with the energy flow ${\bf E\times {\bf H}}$ being opposite to the direction of propagation vector, for which it has been argued that such phenomena as Snell’s law, Doppler effect, Cherenkov radiation are inverted. Metamaterials are also lately becoming known as the basis of the proposals for designing cloaking device \[18\] and exotic fundamental phenomena such as anomalous refraction \[19\]. At the outset, it would be interesting to shed some light on how the plasma frequency is lowered in these metamaterials structured periodically with wire loops or coils. Some time ago, Pendry and coworkers \[20\] argued that any restoring force acting on the electrons will not only have to work against the rest mass of the electrons, but also against the self-inductance of such wire structures. This effect is of paramount importance in these wire structures. They went on arguing that the inductance of a thin wire diverges logarithmically with wire radius and confining the electrons to thin wires enhances their effective mass by orders of magnitude. In other simpler format \[7\] one can, from Ohm’s law ($j=\sigma E_{local}$), determine the effective conductivity for the inductive wire , and calculate an effective local dielectric function analogous to the Drude dielectric function, but with plasma frequency directly related to the inductance ($L$) of the unit cell (of length $l$) and wire spacing $d$ according to $\omega_p=\sqrt{l/(d^2 L \epsilon_0 )}$. Thus reducing the wire radius enhances the inductance which thereby lowers the plasma frequency of the system. Such estimates led them to predict the plasma frequency on the order of $\sim$ 7 to 8 GHz. In the present paper, we generalize our recent Green function (or response function) approach \[21\] to investigate the propagation characteristics of surface plasmons in multicoaxial cables made up of right-handed medium (RHM) \[with $\epsilon >0$, $\mu >0$\] and the left-handed medium (LHM) \[with $\epsilon(\omega) <0$, $\mu(\omega) <0$\] in alternate shells starting from the innermost cable. In other words, we visualize a cylindrical analogue of a one-dimensional planar superlattice structure bent round until two ends of each layer coincide to form a multicoaxial cylindrical geometry. We prefer to name such a resultant structure as multicoaxial cables. Such structures as conceived here may pave the way to some interesting effects in relation to, for example, the optical science exploiting the cylindrical symmetry of the coaxial waveguides that make it possible to perform all major functions of an optical fiber communication system in which the light is born, manipulated, and transmitted without ever leaving the fiber environment, with precise control over the polarization rotation and pulse broadening \[22\]. The cylindrical geometries are already known to have generated particular interest for their usefulness not just as electromagnetic waveguides, but also as atom guides, where the guiding mechanism is governed mainly by the excited cavity modes. 
It is envisioned that the understanding of atom guides at such a small scale would lead to much desirable advances in atom lithography, which in turn should facilitate atomic physics research \[23\]. The rest of the paper is organized as follows. In Sec. II, we briefly focus on the strategy of the formalism generalized to be applicable to the multicoaxial metamaterial cables. In Sec. III, we discuss several illustrative examples on the dispersion characteristics and the density of states of the relevant systems. Finally, we conclude our findings in Sec. IV. THEORETICAL FRAMEWORK ===================== Recently, we have embarked on a systematic investigation of the surface plasmon excitations in the cylindrical coaxial shells made up of negative-index metamaterials interlaced with right-handed media within the framework of an exact Green-function (or response function) theory \[21\]. The knowledge of such excitations is considered to be fundamental to the understanding of the basics of the plasmonic optics. We consider the cross-section of these coaxial cables to be much larger than the de Broglie wavelength, so as to neglect the quantum size effects. We include the retardation effects but neglect, in general, the damping effects and hence ignore the absorption. In the state-of-the-art high quality systems, this is deemed to be quite a reasonable approximation \[17\]. Therefore we study the plasmon excitations in a neat and clean system comprised of multicoaxial metamaterial cables (MCMC), schematically shown in Fig. 1. The formalism of the problem is a straightforward generalization of the theory presented in Ref. \[21\]. While it is always important to have a paper as much self-contained as possible, we think that reiterating all the necessary mathematical part from Ref. \[21\] would make it an undesirable repetition. The working strategy is systematically illustrated in Fig. 2, with all the necessary details. We call attention to the fact that Fig. 2 is a $2(n-1)\times 2(n-1)$ matrix, with all the elements outside the shaded regions being zero. Now ‘1’ refers to the first perturbation specified by Eq. (3.8), ‘n’ stands for the second perturbation specified by Eq. (3.15), and ‘2’, ‘3’, ‘4’, ....., ‘(n-2)’, ‘(n-1)’ correspond to the third perturbation specified by Eq. (3.23) in Ref. \[21\] for the respective shells. We would like to stress that our formalism is [*not*]{} a perturbative scheme, albeit we use the term ‘perturbation’ — the term ‘perturbation’, in fact, implies to the step-wise operation concerned with the problem. It is also noteworthy that this theoretical framework knows no bound with respect to the number of media involved in the system and/or their material characteristics. This implies that the general theory in Ref. \[21\] provides equal footing for the choice of all different LHM, all different RHM, or a combination thereof in the alternate shells. As such, an interested reader is only advised to focus on the final response functions given in Eqs. (8), (15), and (23) in Ref. \[21\] in order to build the matrix depicted in Fig. 2. 
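Although the interface response functions themselves are not reproduced here (they are Eqs. (8), (15), and (23) of Ref. \[21\]), the way they enter the calculation is simple: assemble the $2(n-1)\times 2(n-1)$ inverse response matrix of Fig. 2 and locate the zeros of its determinant. A schematic Python skeleton of this last step is sketched below; the callable `g_inv` is a placeholder standing in for the assembled matrix of Fig. 2, and the mode-location criterion (local minima of $|\det |$ below a tolerance) is our own numerical choice, not a prescription from Ref. \[21\].

```python
import numpy as np

def plasmon_modes(g_inv, k, omegas, tol=1e-8):
    """Locate surface-plasmon modes from det[g^{-1}(omega, k)] = 0.

    `g_inv(omega, k)` must return the 2(n-1) x 2(n-1) inverse response
    matrix assembled as in Fig. 2 from the interface response functions
    of Ref. [21]; it is a user-supplied placeholder in this sketch.
    Modes are reported where |det| has a local minimum below `tol`.
    """
    d = np.array([abs(np.linalg.det(g_inv(w, k))) for w in omegas])
    return [omegas[i] for i in range(1, len(omegas) - 1)
            if d[i] < tol and d[i] <= d[i - 1] and d[i] <= d[i + 1]]
```

For a lossless structure, only the nonradiative region below the light line of the outermost medium needs to be scanned for truly confined modes at each propagation vector `k`.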
ILLUSTRATIVE EXAMPLES ===================== For illustrative examples, we have focused on the dispersive negative-index metamaterials characterized by $\epsilon(\omega) = 1 - \omega_p^2/\omega^2$, where $\omega_p$ is the plasma frequency (usually in the GHz range), and $\mu(\omega) = 1 - F\omega^2/(\omega^2-\omega_0^2)$, where $F$ is, generally, chosen to be a constant factor ($<1$) and $\omega_0$ is the resonance frequency (also usually chosen to be in the GHz range). Since this is, to our knowledge, the first paper on such a complex system of multicoaxial metamaterial cables, we adhere to the backbone simplicity and think that the complexities, such as the choice of different metamaterials, different (conventional) dielectrics as spacers, different (irregular) radii, and geometrical defects would (and should) come later and hence are deferred to a future publication. As such, our system is considered to be made up of the (same) RHM and the (same) LHM in alternate shells starting from the innermost cable of radius $R_1$ and fix (the total number of media, including the outermost semi-infinite medium) $n=15$. ![Schematics of the multi-coaxial cables: the side view showing the alignments of the cylindrical cables of circular cross-sections of radii $R_{j+1} > R_j$, and $n$ media with $n-1$ interfaces. The innermost circle marked as 1 refers to the innermost cable of radius $R_1$ enclosed by the consecutive ($n-1$) shells assumed to be numbered as 2, 3, .... (n-2), (n-1) and cladded by an outermost semi-infinite medium $n$. Our exact general theory schematically outlined in Fig. 2 allows one to consider the resultant system to be made up of negative-index (dispersive or nondispersive) metamaterials interlaced with conventional dielectrics, metals, or semiconductors.[]{data-label="fig1"}](figm01.eps){width="7cm" height="6cm"} ![A graphic representation of the complete formalism for the total inverse response function $\tilde{g}^{-1}(...)$ for the resultant system shown in a desired compact form. Here $n$ refers to the total number of media comprising the MCMC system; with $m=n-1$ as the number of interfaces, $l=n-2$, and $k=n-3$ ..... etc. The plasma modes of the system are defined by $det [\tilde{g}^{-1}(...)]=0$.[]{data-label="fig2"}](figm02.eps){width="7.5cm" height="7cm"} Figure 3 illustrates the surface plasmon dispersion for a perfect multicoaxial cable system made up of a dispersive, negative-index metamaterials interlaced with conventional dielectrics (assumed to be vacuum) for $n=15$ and (the integer order of the Bessel function) $m=0$. The plots are rendered in terms of the dimensionless frequency $\xi=\omega/\omega_p$ and the propagation vector $\zeta=c k/\omega_p$. The dashed line and curve marked as LL1 and LL2 refer, respectively, to the light lines in the vacuum and the metamaterial. The shaded area represents the region within which both $\epsilon(\omega)$ and $\mu(\omega)$ are negative and disallows the existence of truly confined modes. The thick dark band of frequencies piled up near the resonance frequency $\omega_0=0.4 \omega_p$ is not unexpected. Since there are fourteen interfaces in the system, we logically expect fourteen branches each for the TM and TE modes in the system. As we see, this is exactly the case, except for the fact that the lower group of seven TM branches (which start from zero) have observed a resonance splitting due to the resonance frequency $\omega_0$ in the problem. The latter branches quickly become asymptotic to $\omega_0$. 
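Both the shaded double-negative region and the asymptotic limits discussed in the next paragraph follow directly from the signs of $\epsilon(\omega)$ and $\mu(\omega)$, so they are easy to check numerically. In the sketch below the value of $F$ and the vacuum spacer values $\epsilon_1=\mu_1=1$ are our illustrative assumptions; the text specifies only $\omega_0=0.4\,\omega_p$.

```python
import numpy as np

w0, F = 0.4, 0.56            # omega_0/omega_p from the text; F is an assumed value (< 1)
eps1, mu1 = 1.0, 1.0         # vacuum spacer shells (assumption)

xi = np.linspace(1e-3, 1.2, 120001)          # xi = omega/omega_p
eps = 1.0 - 1.0/xi**2                        # eps(omega) = 1 - omega_p^2/omega^2
mu = 1.0 - F*xi**2/(xi**2 - w0**2)           # mu(omega) = 1 - F omega^2/(omega^2 - omega_0^2)

window = xi[(eps < 0) & (mu < 0)]
print(f"both eps and mu negative for {window.min():.3f} < omega/omega_p < {window.max():.3f}")

# surface-mode asymptotes: eps(omega) = -eps_1 (TM) and mu(omega) = -mu_1 (TE)
xi_TM = 1.0/np.sqrt(eps1 + 1.0)              # = omega_p/sqrt(eps_1 + 1), in units of omega_p
xi_TE = w0/np.sqrt(1.0 - F/(mu1 + 1.0))      # = omega_0/sqrt(1 - F/(mu_1 + 1))
print(f"TM asymptote: {xi_TM:.3f} omega_p,  TE asymptote: {xi_TE:.3f} omega_p")
```

The first line of output delimits the frequency band where the metamaterial is double negative, while the last two numbers reproduce, for a vacuum spacer, the large-wave-vector TM and TE limits $\omega_p/\sqrt{2}$ and $\omega_0/\sqrt{1-F/2}$ quoted in the next paragraph.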
All the TM and TE confined modes above $\omega_0$ have their well-defined asymptotic limits exactly dictated, respectively, by $\omega=\omega_p/\sqrt{\epsilon_1 +1}$ and $\omega=\omega_0/\sqrt{1-\frac{F}{\mu_1+1}}$. A word of warning about the simultaneous existence of TM and TE modes: if we search the zeros of the determinant (see Fig. 2), as it is required, for [*any*]{} value of $n$ and $m$, we always obtain the simultaneous existence of all the TM and TE modes along with the resonance splittings as stated above. This is a [*rule*]{} as long as $m \ne 0$. The only exception to this is the case of $m=0$ (and very small $n$). For instance, for $n=3$ and $m=0$, one has a $4 \times 4$ determinant and it is possible to separate analytically the TM and TE modes. \[We recall the well-known facts from the electrodynamics: the electrostatics (magnetostatics) claim ownership of the p-polarized (TM) (s-polarized (TE)) fields.\] However, even for these values of $n$ and $m$, if we search the zeros of the full determinant, without analytically decoupling the modes, we obtain both TM and TE modes together. The interesting question that remains to be answered is: what is the advantage of investigating multicoaxial cables over a few coaxial cables? The answer lies in comparing not only the growth mechanism but also the optimum response. As to the fabrication, we understand that, for the reason of sensitivity, the growth conditions are more favorable for the multicoaxial cables than for the single (or, a few coaxial) cables. This growth aspect is deemed to be just similar to the growth mechanism of multiwalled carbon nanotubes. As regards the characteristic response of the multicoaxial cables, one can notice several differences as compared to that of a single (or a few coaxial) cable: (i) the excitation spectrum becomes richer and complex, (ii) several waves separated by energy gaps can exist at a given propagation vector, (iii) a single frequency allows several guided waves localized at different interfaces, (iv) it should become feasible to describe the shells with varying effective parameters as required and realized in cloaking devices, and, most importantly, (v) the structure allows the coexistence of p-polarized and s-polarized modes because the multicoaxial system is made up of metamaterials. We believe that these characteristics provide a suitable platform for devising useful devices based on the surface plasmonic waves in the multicoaxial cables. ![Plasmon dispersion for a perfect multicoaxial cable system made up of a dispersive negative-index metamaterial interlaced with conventional dielectrics for $N=15$ and $m=0$. The dimensionless plasma frequency used in the computation is specified by $\omega_pR_1/c=\sqrt{3.5}$ and the dimensionless thicknesses of the shells are defined by $\Delta r_{j,j+1}=0.35$. Dashed line and curve marked as LL1 and LL2 refer, respectively, to the light lines in the vacuum and the metamaterial. The (dark) thick band of frequencies is piled up at the characteristic resonance frequency ($\omega_0$). We call attention to the resonance splitting of the lower TM confined modes due to the resonance frequency ($\omega_0$) in the problem. The shaded area represents the region within which both $\epsilon(\omega)$ and $\mu(\omega)$ are negative and disallows the existence of truly confined modes. 
The system as a whole is represented by RLR...RLR design.[]{data-label="fig3"}](figm03.eps){width="7.5cm" height="8cm"} Figure 4 shows the local density of states (LDOS) as a function of reduced frequency $\xi$ for the multicoaxial cable system discussed in Fig. 3, for the propagation vector $\zeta=1.0$. The rest of the parameters used are the same as those in Fig. 3. Notice that each of these interfaces is seen to share most of the peaks supposed to exist and reproduce most of the discernible modes at $\zeta =1.0$ (in Fig. 3). Of course, one has to take into consideration the degeneracy and the hodgepodge that persists near the resonance frequency $\xi=0.4$ in Fig. 3. Let us, for instance, look at the top panel: the highest, second highest, third highest, fourth highest, fifth highest, sixth highest, seventh highest, and eighth highest peaks lie, respectively, at $\xi=0.9490$, 0.9428, 0.9299, 0.9061, 0.8667, 0.8046, 0.7298, and 0.6232. The highest peak (at $\xi=0.9490$) remains indiscernible at this scale. Similarly, the lowest, second lowest, third lowest, fourth lowest, fifth lowest, sixth lowest, and seventh lowest peaks (below the resonance frequency) are seen to lie, respectively, at $\xi=0.2877$, 0.2999, 0.3197, 0.3452, 0.3663, 0.3763, and 0.3967. All these peak positions exactly substantiate the modes at $\zeta=1.0$ in the spectrum in Fig. 3. One has to notice that, as the name LDOS suggests, every interface has its own choice (with respect to the geometry and/or the material parameters) and there does not seem to be a rule that may dictate the modes’ counting. This is, in a sense, different from the total DOS where one obtains exactly the same number of peaks as the modes in the spectrum for a given value of $\zeta$. Notice that the shorter-wavelength modes do not interact much with the neighboring ones and remain spatially confined to the immediate vicinity of the respective interfaces. Such strongly localized modes are thus easier to be observed in the experiments than their longer-wavelength counterparts. ![Local density of states for the system discussed in Fig. 3 and for $n=15$, $m=0$, and $\zeta=1.0$. The bottom, middle, and top panel refer, respectively, to LDOS at interface 1, interface 7, and interface 14 in the system. The rest of the parameters used are the same as in Fig. 3. The arrows in the panels indicate the relatively smaller (in height) peaks.[]{data-label="fig4"}](figm04.eps){width="7.5cm" height="8cm"} Figure 5 depicts the surface plasmon dispersion (right panel) and total density of states (left panel) for a multicoaxial cable system made up of a nondispersive negative-index metamaterials interlaced with conventional dielectrics for $n=9$ and $m=0$. The plots are rendered in terms of the dimensionless frequency $\omega R_1/c$ and the propagation vector $k R_1$. The dimensionless thicknesses of the shells are defined by $\Delta r_{j,j+1}=0.25$. The material parameters are as listed inside the left panel. Right panel: The dashed line refers to the light line in the vacuum. As expected, there are eight TM modes — four of them starting from the nonzero $k$ and the other four emerge from the light line. The shaded area is the radiative region which encompasses radiative modes (not shown) towards the left of the light line. We notice in passing that the slope of these TM modes in the asymptotic limit is defined by $\omega/c k=0.7817$. An important issue remains to be answered: Why do we obtain only TM modes up to the asymptotic limit? 
The answer is clearly provided by the analytical diagnosis. In order to answer this question, one has to look carefully at the analytical diagnoses presented in Sec. III.G in Ref. 21. To be brief, the answer lies in the fact that, for the material parameters chosen here, while Eq. (3.43) is fully satisfied, Eq. (3.47) or (3.48) is not. The former condition justifies the existence of the TM modes and the latter rules out the occurrence of TE modes. One remaining curiosity: What do these (almost) vertical lines, hanging downwards from the light line, indicated by arrows refer to? The succinct answer is that these are the ill-behaved TE modes which exist only in the long wavelength limit (LWL). It is interesting to note that if we interchange the values of $\epsilon_L$ and $\mu_L$, (i.e., the parameters that define the nondispersive LHM), we obtain well-behaved TE modes and the ill-behaved TM modes. The reason is simply that the aforesaid conditions that govern the nature of the modes in the asymptotic limit are then reversed. Left panel: The computation of the total density of states \[plotted as a function of reduced frequency\] shows clearly eight peaks for the given value of $kR_1=21.5$. Starting from the lowest frequency, we observe that these peaks lie at $\omega R_1/c=14.69$, 14.99, 15.40, 16.20, 17.99, 18.53, 19.37, and 20.18. These peak positions exactly substantiate the frequencies of the TM modes in the right panel at $kR_1=21.5$. The research efforts during the past few years reaffirm that the metamaterials are predominantly dispersive and lossy materials. And yet, a considerable amount of research effort has focussed to explore interesting physical phenomena in the nondispersive metamaterials, particularly, in the context of photonic crystals in the recent years \[24\]. This leads us to infer that the results in Fig. 5 remain at least of fundamental interest. ![Right panel: Plasmon dispersion for a perfect MCMC cable system made up of a non-dispersive negative-index metamaterial interlaced with conventional dielectrics for $n=9$ and $m=0$. There are well-defined ($n-1$) TM modes in the system: upper four starting from the light-line and the lower four from the nonzero propagation vector. The dimensionless thicknesses of the shells are defined by $\Delta r_{j,j+1}=0.25$. The shaded region stands for the purely radiative modes (not shown). We call attention to the sharply downward modes, indicated by arrows, which are the ill-behaved TE modes in the LWL. Left panel: the total density of states as a function of reduced frequency $\omega R_1/c$ for the propagation wave vector $kR_1=21.5$.[]{data-label="fig5"}](figm05.eps){width="7.5cm" height="8cm"} CONCLUDING REMARKS ================== To conclude with, we estimate that if $\nu_p=\omega_p/2\pi=10$ GHz, the radius of the innermost cable is defined as $R_1=8.93$ mm for the parameter $\omega_pR_1/c=1.87$ used for the dispersive negative-index metamaterials (see Figs. 3 and 4). It is interesting to notice that this size scale is almost the same as the dimensions of the sample (the lattice spacing $d=9.53$ mm and inner size of the tubes $a=6.96$ mm) used in the experiment in Ref. 12, which verified the prediction of Pendry and coworkers \[6\] that, if textured on a subwavelength scale, even perfect conductors can support the surface plasmon modes. It is noteworthy that the frequency range of GHz is the most explored regime for the metamaterials realized from split ring resonators and other similar geometries so far. 
Nevertheless, the recent fabrication processes can (and do) allow to go to much higher frequency ranges approaching THz \[10\]. However, the appraisal of physical validity of the effective parameters involved in the growth process is not so straightforward. The surface plasmon modes predicted here should be observable in the inelastic electron (or light) scattering experiments. The EELS is already becoming known to be a powerful probe for studying the plasmon excitations in multiwalled carbon nanotubes. We trust that our methodology, as sketched in Fig. 2, will prove to be a powerful theoretical framework for studying further such plasma excitations in similar cable geometries such as multiwalled carbon nanotubes. M.S.K. gratefully acknowledges the hospitality of the UFR de Physique of the University of Science and Technology of Lille 1, France, during the short visit in 2009. We sincerely thank Leonard Dobrzynski for many very fruitful discussions. For an extensive review of electronic, optical, and transport properties of surfaces/interfaces, thin-films, inversion layers, and systems of reduced dimensions such as quantum wells, wires, dots, and (electrically and/or magnetically) modulated 2D systems, see M. S. Kushwaha, “Plasmons and magnetoplasmons in semiconductor heterostructures", Surf. Sci. Rep. [**41**]{}, 1-416 (2001). T.W. Ebbesen, H.J. Lezec, H. Ghaemi, T. Thio, and P.A. Wolf, “Extraordinary optical transmission through sub-wavelength hole arrays", Nature [**391**]{}, 667-669 (1998). J.A. Porto, F.J. Garcia-Vidal, and J.B. Pendry, “Transmission resonances on metallic gratings with very narrow slits", Phys. Rev. Lett. [**83**]{}, 2845-2848 (1999). L. Martin-Moreno, F.J. Garcia-Vidal, H.J. Lezec, K.M. Pellerin, T.Thio, and J.B. Pendry, and T.W. Ebbesen, “Theory of extraordinary optical transmission through subwavelength hole arrays", Phys. Rev. Lett. [**86**]{}, 1114-1117 (2001). Y.Takakura, “Optical resonance in a narrow slit in a thick metallic screen", Phys. Rev. Lett. [**86**]{}, 5601-5603 (2001). J.B. Pendry, L. Martin-Moreno, and F.J. Garcia-Vidal, “Mimicking surface plasmons with structured surfaces", Science [**305**]{}, 847-848 (2004.) D.R. Smith, D.C. Vier, W. Padilla, S.C. Nemat-Nasser, and S. Schultz, “Loop-wire medium for investigating plasmons at microwave frequencies", Appl. Phys. Lett. [**75**]{}, 1425-1427 (1999). F. Yang and J.R. Sambles, “Resonant transmission of microwaves through a narrow metallic slit", Phys. Rev. Lett. [**89**]{}, 063901 (2002). J.R. Suckling, A.P. Hibbins, M.J. Lockyear, T.W. Preist, J.R. Sambles, and C.R. Lawrence, “Finite conductance governs the resonance transmission of thin metal slits at microwave frequencies", Phys. Rev. Lett. [**92**]{}, 147401 (2004). S.A. Maier, S.R. Andrews, L. Martin-Moreno, and F.J. Garcia-Vidal, “Terahertz surface plasmon-polariton propagation and focusing on periodically corrugated metal wires", Phys. Rev. Lett. [**97**]{}, 176805 (2006). Z. Chen, I.R. Hooper, and J.R. Sambles, “Strongly coupled surface plasmons on thin shallow metallic gratings", Phys. Rev. B [**77**]{}, 161405 (2008). A.P. Hibbins, M.J. Lockyear, I.R. Hooper, and J.R. Sambles, “Waveguide arrays as plasmonic metamaterials: transmission below cutoff", Phys. Rev. Lett. [**96**]{}, 073904 (2006). A.P. Hibbins, M.J. Lockyear, and J.R. Sambles, “Coupled surface-plasmon-like modes between metamaterial", Phys. Rev. B [**76**]{}, 165431 (2007). M.J. Lockyear, A.P. Hibbins, and J.R. 
Sambles, “Microwave surface-plasmon-like modes on thin metamaterials", Phys. Rev. Lett. [**102**]{}, 073901 (2009). V.G. Veselago, “The electrodynamics of substances with simultaneously negative values of $\epsilon$ and $\mu$", Sov. Phys. Usp. [**10**]{}, 509-514 (1968). J.B. Pendry, “Negative refraction makes a perfect lens", Phys. Rev. Lett. [**85**]{}, 3966-3969 (2000). R.A. Shelby, D.R. Smith, and S. Schultz, “Experimental verification of a negative index of refraction", Science [**292**]{}, 77-79 (2001). J. Li and J.B. Pendry, “Hiding under the carpet: a new strategy for cloaking", Phys. Rev. Lett. [**101**]{}, 203901 (2008). M.G. Silveirinha, “Anomalous refraction of light colors by a metamaterial prism", Phys. Rev. Lett. [**102**]{}, 193903 (2009). J.B. Pendry, A.J. Holden, W.J. Stewart, and I. Youngs, “Extremely low frequency plasmons in metallic mesostructures", Phys. Rev. Lett. [**76**]{}, 4773-4776 (1996). M.S. Kushwaha and B. Djafari-Rouhani, “Theory of confined plasmonic waves in coaxial cylindrical cables fabricated of metamaterials", J. Opt. Soc. Am. B [**27**]{}, 148-167 (2010). M. Ibanescu, Y. Fink, S. Fan, E. L. Thomas, and J. D. Joannopoulos, “An all-dielectric coaxial waveguide", Science [**289**]{}, 415-419 (2000). G. D. Banyard, C. R. Bennett, and M. Babiker, “Enhancement of energy relaxation rates near metal-coated dielectric cylinders", Opt. Commun. [**207**]{}, 195-200 (2002). I.V. Shadrivov, A.A. Sukhorukov, and Y.S. Kivshar, “Complete band gaps in one-dimensional left-handed periodic structures", Phys. Rev. Lett. [**95**]{}, 193903 (2005).
--- abstract: 'The effect of a quantizing magnetic field on the electron transport is investigated in a two dimensional topological insulator (2D TI) based on an 8 nm (013) HgTe quantum well (QW). The local resistance behavior is indicative of a metal-insulator transition at $B\approx 6$ T. On the whole the experimental data agrees with the theory according to which the helical edge state transport in a 2D TI persists from zero up to a critical magnetic field $B_c$ after which a gap opens up in the 2D TI spectrum.' author: - 'E. B. Olshanetsky,$^1$ Z. D. Kvon,$^{1,2}$ G. M. Gusev,$^3$ N. N. Mikhailov,$^1$ and S. A. Dvoretsky,$^{1}$' title: Two dimensional topological insulator in quantizing magnetic fields --- Introduction {#introduction .unnumbered} ============ A 2D TI is characterized by the absence of bulk conductivity and the presence of two gapless edge current states with a linear dispersion and opposite spin polarizations that counter-propagate along the sample perimeter [@Kane; @Bernevig]. Such edge current states are called helical as opposed to the chiral edge states of the quantum Hall regime that circulate in the same direction independent of spin polarization. The described property of the 2D TI results from the energy spectrum inversion caused by a strong spin-orbit interaction. To date the presence of the 2D TI state has been established in HgTe QWs with an inverted energy spectrum [@Konig; @Gusev1]. The observation of a 2D TI has also been reported in InAs/GaSb heterostructures [@Du]. The latter, however, requires further verification since edge transport has also been reported in InAs/GaSb heterostructures with a non-inverted spectrum [@Marcus]. The effect of a perpendicular magnetic field on the properties of a 2D TI has two distinct and important aspects. On the one hand, even a weak magnetic field breaks down the time reversal symmetry protection of the topological edge states against backscattering. This effect is expected to manifest itself as a positive magnetoresistance (PMR) of a 2D TI in the vicinity of B=0. Such PMR has indeed been observed experimentally in diffusive and quasiballistic samples of 2D TI based on HgTe QWs [@Konig; @Japan; @Gusev2; @Olsh; @CM] and is found to be in qualitative agreement with the existing theoretical models [@Maciejko; @Richter]. The other aspect of a perpendicular magnetic field is related to the transformation of the edge current states spectrum under the influence of quantizing magnetic fields and, eventually, to the transition of the 2D TI system to the quantum Hall effect regime. The goal of the present work is an experimental investigation of the effect of a strong quantizing magnetic field on the transport properties of a quasiballistic sample of a 2D TI. We begin with a few words about the existing theoretical and experimental results related to this problem. Theoretically this problem has been investigated in [@Tkachov; @Chen; @Fabian; @Tarasenko] but the conclusions at which the authors of these works arrive are quite contradictory. Indeed, Tkachov et al [@Tkachov] come to the conclusion that the gapless helical edge states of a 2D TI persist in strong quantizing magnetic fields but are no longer characterized by a linear energy spectrum. Similarly, Chen et al [@Chen] suggest that the gapless helical states of a 2D TI survive up to 10 T, but there will also emerge several new phases with unusual edge state properties.
By varying the Fermi energy one should be able to observe transitions between these phases accompanied by plateaux in the longitudinal and Hall resistivity. The results of the work [@Fabian] by Scharf et al also attest to a certain robustness of the helical edge states with respect to the quantizing magnetic fields. However, according to [@Fabian] the edge states persist only up to a critical field $B_c$ while at higher fields a gap proportional to $B$ opens up in the energy spectrum. Finally, a mention should be made of the results obtained by Durnev et al in [@Tarasenko] that strongly differ from those cited above. The authors of [@Tarasenko] consider the effect of a perpendicular magnetic field on the properties of a 2D TI taking into account the strong interface inversion asymmetry inherent in HgTe QW. The key conclusion of this study is that the spectrum of the 2D TI helical edge states becomes gapped at arbitrarily small magnetic fields. The size of this gap depends on the width of the gap separating the bulk energy bands and grows monotonically with magnetic field, reaching, on average, a noticeable value of several meV already in fields of the order of $0.5$ T. As for the experimental investigation of the effect of quantizing magnetic fields on the 2D TI, direct transport measurements in the most interesting quasiballistic transport regime are at present lacking. In [@Orlita; @Zholudev] far-infrared magnetospectroscopy has been used to probe the behavior of two peculiar “zero” Landau levels that split from the conduction and valence bands in an inverted HgTe QW and approach each other with increasing magnetic field. Instead of the anticipated crossing of these levels the authors have established that these levels anticross, which is equivalent to the existence of a gap in the spectrum. In [@ImpSpec] microwave impedance microscopy has been employed to visualize the edge states in a 2D TI sample. The authors come to the conclusion that there is no noticeable change in the character of the edge states up to 9 T. Samples and Experimental procedures {#samples-and-experimental-procedures .unnumbered} =================================== In the present work we study the effect of quantizing magnetic fields on the transport properties of quasiballistic samples of a 2D TI fabricated on the basis of an 8 nm Cd$_{0.65}$Hg$_{0.35}$Te/HgTe/Cd$_{0.65}$Hg$_{0.35}$Te QW with the surface orientation (013). Detailed description of the structure is given in [@samples1; @samples2]. The samples were shaped as six-terminal Hall bridges (two current and four voltage probes) with the lithographic size $\approx 3\times3$ $\mu$m. The ohmic contacts to the two-dimensional gas were formed by burning-in indium. To prepare the gate, a dielectric layer containing $100$ nm $SiO_2$ and $200$ nm $Si_3N_4$ was first grown on the structure using the plasmochemical method. Then, an $8\times14$ $\mu$m $TiAu$ gate was deposited on top. The density variation with gate voltage was $1.09\times 10^{15}$ m$^{-2}$V$^{-1}$. The electron density at $V_g=0$ V, when the Fermi level lies in the bulk conduction band, is $N_s=3.85\times10^{11}$ cm$^{-2}$. The magnetotransport measurements in the described structures were performed in the temperature range 0.2-10 K and in magnetic fields up to 10 T using a standard four point circuit with a $3-13$ Hz ac current of 0.1-1 nA through the sample, which is sufficiently low to avoid overheating effects. Several samples from the same wafer have been studied.
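As a rough consistency check (our estimate, not part of the original characterization), the quoted gate efficiency and zero-gate density locate the charge neutrality point at a gate voltage of roughly $$V_{CNP}\approx-\frac{N_s}{dN_s/dV_g}=-\frac{3.85\times10^{15}\ \mathrm{m}^{-2}}{1.09\times10^{15}\ \mathrm{m}^{-2}\,\mathrm{V}^{-1}}\approx-3.5\ \mathrm{V},$$ assuming the capacitive gating stays linear down to the CNP and neglecting the small density of states inside the gap.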
![Fig.1 The gate voltage dependences of the local (a) and nonlocal (b) resistance of the 2D TI sample at $B=0$. The insert to Fig.1a shows a photographic image of the experimental sample with a scale of $20$ $\mu$m for comparison. The purple color indicates the mesa contours, the gold yellow in the middle is the gate. This and the following figures contain schematic representations of the configurations used to measure the corresponding transport parameters. The arrows mark the gate voltage corresponding to the charge neutrality point $V_g=V_{CNP}$.](Fig1n.eps){width="0.8\linewidth"} Results and Discussion {#results-and-discussion .unnumbered} ====================== Fig.1 shows the gate voltage dependences of the local (a) and nonlocal (b) resistance of the experimental sample in zero magnetic field. Both dependences have a maximum that corresponds to the passage of the Fermi level across the charge neutrality point (CNP) in the middle of the bulk energy gap. In the vicinity of the CNP and at low temperatures the charge transfer is realized predominantly by the helical edge states. The coincidence of the curves in Fig.1a taken from the opposite sides of the sample proves the sample homogeneity. As the temperature is lowered, both the local and nonlocal CNP resistance values increase (Fig.1b) due to the reduction of the bulk contribution to transport. In the case of purely ballistic helical edge state transport and with the bulk contribution taken to be zero, the calculation yields the following CNP resistance values for the local and nonlocal measurement configurations shown in Fig.1: $h/2e^2\approx 12.9$ k$\Omega$ (experimental value - $\approx20$ k$\Omega$) for local resistance and $2h/3e^2\approx17.2$ k$\Omega$ (experimental value $\approx11$ k$\Omega$) for nonlocal resistance. The discrepancy between the calculated and the experimental values is supposedly the result of the following two factors: the backscattering of the edge states, the nature of which is not yet quite clear, and the contribution of the bulk states. Nevertheless, the closeness of the calculated and experimental resistance values allows us to characterize the transport in our samples as quasiballistic. ![Fig.2 Local (a) and nonlocal (b) 2D TI resistance at the CNP in classically weak magnetic fields $B\leq0.5$ T.](Fig2.eps){width="0.8\linewidth"} Fig.2 shows the local (a) and nonlocal (b) 2D TI resistance at the CNP as a function of classically weak magnetic fields ($\leq0.5$ T). In both cases the dependences reveal well pronounced mesoscopic fluctuations. The presence of such fluctuations is typical in small ($\sim1$ $\mu$m) 2D TI samples (the fluctuations are absent in larger $\approx100$ $\mu$m samples fabricated from the same wafer). The observation of these fluctuations at the CNP is additional evidence that the helical edge states experience backscattering. Further, in the field interval $|B|\leq0.1$ T in Fig.2 one can see a characteristic positive magnetoresistance (PMR) analogous to that studied previously in larger diffusive 2D TI samples based on an 8 nm HgTe QW [@Gusev2] and also in macro and microscopic 2D TI samples based on a 14 nm HgTe QW [@Olsh; @CM]. Much as in samples studied previously, this PMR most likely results from the magnetic-field-induced breakdown of the topological protection of the edge states against backscattering.
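For the reader's convenience we recall how the ballistic values quoted above arise in the Landauer-Büttiker picture; the probe numbering used below is an assumption on our part, since the actual measurement configurations are only indicated schematically in the figures. With six ideal contacts joined cyclically by a single helical Kramers pair, each edge segment between neighbouring contacts acts as a resistance $R_0=h/e^2$. Feeding the current through two opposite contacts ($1\rightarrow4$) splits it equally between the two three-segment paths, while feeding it through contacts separated by two segments ($1\rightarrow3$) sends one third of it around the four-segment path, so that $$R_{14,23}=\frac{R_0}{2}=\frac{h}{2e^2}\approx12.9\ \mathrm{k}\Omega,\qquad R_{13,46}=\frac{2R_0}{3}=\frac{2h}{3e^2}\approx17.2\ \mathrm{k}\Omega,$$ reproducing the local and nonlocal values used in the comparison with experiment.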
However, compared to larger samples, the PMR in the quasiballistic samples has some specific features: a different temperature dependence of the PMR amplitude and the presence of a fine structure in the PMR in nonlocal measurements (see, for example, the MR features at $B=0$ in Fig.2b). These features require further investigation and their discussion is outside the scope of the present paper. ![Fig.3 The local (a) and nonlocal (b) sample resistance in the intermediate field range $\leq6$ T at different temperatures.](Fig3.eps){width="0.8\linewidth"} Fig.3a shows the temperature dependence of the 2D TI local resistance in the intermediate field range $\leq6$ T. The monotonic decrease of the local resistance with lowering the temperature from 6.2 to 1 K (metallic T-dependence) observed in the interval $B\approx2-5$ T is not predicted by any of the theories cited in the introduction. Moreover the general run of the curves in Fig.3a excludes the expected, according to [@Tarasenko], opening of a gap in the edge current state spectrum at low magnetic fields. On the whole the local resistance behavior is reminiscent of the behavior of $\rho_{xx}(B)$ in a low-mobility 2D electron system in the vicinity of the quantum Hall liquid-quantum Hall insulator transition near the filling factor $\nu=1$ of the quantum Hall effect regime (see, for example [@QH; @liquid]). It should be mentioned that such a similarity has been observed earlier in diffusive macroscopic 2D TI samples [@Gusev2]. Finally, starting from $B\approx5$ T, the local resistance begins to increase sharply with magnetic field. It is instructive to compare the described behavior of the local resistance with that of the nonlocal resistance in the same magnetic field range $B\leq6$ T, Fig.3b. As one can see in Fig.3b, increasing the temperature is accompanied by a modification of the signal behavior in the PMR region and by a suppression of the mesoscopic fluctuation amplitude. At the same time, however, in contrast to the local resistance in Fig.3a, the general run of the nonlocal resistance with magnetic field has no noticeable temperature dependence. Thus, while showing no sign of temperature dependence, the average value of the nonlocal resistance decreases monotonically with magnetic field up to $B\approx6$ T, including in the interval $4.5 \leq B \leq 6$ T, where the local resistance first displays a metallic behavior and then starts to grow sharply. It is worth noting that in the case of a gap opening up in the spectrum at the Fermi level one would expect the following behavior of the local and nonlocal resistance: $R_{LOC}\equiv R_{xx}\to\infty$ and $R_{NL}\to 0$. ![Fig.4 (a) - the local resistance at the CNP in the magnetic field range $|B| \leq 8.5$ T and in the temperature interval $1.5-10$ K, (b) - the temperature dependence of the local resistance at the CNP for magnetic fields $8$ and $8.4$ T. Insert: the same data presented versus 1/T and on a logarithmic scale.](Fig4.eps){width="0.8\linewidth"} Fig.4a presents the local CNP resistance dependence on magnetic field $B\leq8.5$ T in the temperature range $1.5-10$ K. As one can see, the sharp increase of the local resistance mentioned at the end of the previous paragraph persists in higher magnetic fields, leading to an increase of the resistance by more than two orders of magnitude: from $\approx10$ k$\Omega$ at $B=5$ T to $\approx2$ M$\Omega$ at $B=8.5$ T.
To exclude possible heating effects resulting from such a rapid resistance growth, the measurements were carried out at a current level of $0.1$ nA. The temperature dependence of the local resistance in the region of its intensive growth ($B\geq5$ T) has a pronounced insulating character that supersedes the metallic behavior observed in the vicinity of $B\approx4$ T. The transition between the metallic and the insulating behavior occurs at $B\approx6$ T, which is probably an indication of a gap opening up in the energy spectrum at this particular magnetic field. Fig.4b shows the temperature dependence of the local CNP resistance for $8$ and $8.4$ T. The analysis of these curves shows that in the temperature range investigated the system behavior cannot be described by a simple activation law that one would expect in the case of a gap present in the energy spectrum (see Insert to Fig.4b). A possible explanation is that the temperature dependences in Fig.4b may result from a combination of predominantly activated transport at higher temperatures and hopping conductivity at lower temperatures [@Thoules; @Ando]. Conclusion {#conclusion .unnumbered} ========== To conclude, in the present work we have investigated the effect of a quantizing magnetic field on the transport properties of a quasiballistic 2D TI sample based on an 8 nm HgTe QW with the surface orientation (013). The behavior of the local resistance is indicative of a metal-insulator transition that occurs at $B\approx6$ T. The insulating state on the high-B side of the transition is characterized by a strong resistance increase as the temperature is lowered, which, however, is not described by a simple activation law. On the whole the obtained results seem to be in better agreement with the theoretical prediction [@Fabian], according to which there should be a critical magnetic field $B_c$ that separates the transport via the gapless helical edge states at low fields from the activated transport due to a gap emerging in the spectrum at higher fields. The work was supported by the RFBI Grant No. N15-02-00217-a, and by FAPESP CNPq (Brazilian agencies). C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005). B.A. Bernevig, T.L. Hughes, and S.-C. Zhang, Science 314, 1757 (2006). M. Konig, S. Wiedmann, C. Brune, A. Roth, H. Buhmann, L.W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Science 318, 766 (2007). Gusev G M et al, Phys.Rev.B84, 121302(R) (2011). I. Knez, R.-R. Du, and G. Sullivan, Phys. Rev. Lett. 107, 136603 (2011). Fabrizio Nichele et al, New J.Phys.18, 083005 (2016). Konig M., Buhmann H., Molenkamp L. W., Hughes T., Liu C.-X., Qi X.-L. and Zhang S.-C., J. Phys. Soc. Japan 77, 031007 (2008). Gusev G. M., Olshanetsky E. B., Kvon Z. D., Mikhailov N. N. and Dvoretsky S. A., Phys. Rev. B87, 081311 (2013). E. B. Olshanetsky, Z. D. Kvon, G. M. Gusev, N. N. Mikhailov and S. A. Dvoretsky, J. Phys.: Condens. Matter 28, 345801 (2016). Maciejko J., Qi X. L. and Zhang S.-C., Phys. Rev. B82, 155310 (2010). Essert S. and Richter K., 2D Mater. 2, 024005 (2015). Tkachov G. and Hankiewicz E.M., Phys.Rev.Lett., 104, 166803 (2010). Chen Jiang-chai, Wang Jian, and Sun Qing-feng, Phys.Rev.B85, 125401 (2012). Scharf Benedikt, Matos-Abiague Alex, and Fabian Jaroslav, Phys.Rev.B86, 075418 (2012). Durnev M. V. and Tarasenko S. A., Phys.Rev.B93, 075434 (2016). Orlita, M. et al, Phys. Rev. B83, 115307 (2011). Zholudev, M. et al, Phys. Rev. B86, 205420 (2012). Eric Yue Ma et al, Nature Comm. DOI: 10.1038/ncomms8252 (2015). Z. D. Kvon, E. B.
Olshanetsky, D. A. Kozlov, N. N. Mikhailov, and S. A. Dvoretsky, Pis’ma Zh. Eksp. Teor. Fiz. 87, 588 (2008) \[JETP Lett. 87, 502 (2008)\]. G. M. Gusev, E. B. Olshanetsky, Z. D. Kvon, N. N. Mikhailov, S. A. Dvoretsky, and J. C. Portal, Phys. Rev. Lett. 104, 166401 (2010). R.J.F. Hughes et al, J.Phys.:Condens. Matter 6, 4763-4770 (1994). Q.Li, D.J. Thoules, Phys.Rev.B40, 9738 (1989). T. Ando, Phys.Rev. B40, 9965 (1989).
--- abstract: 'At the LHC Multiple Parton Interactions will represent an important feature of the minimum bias and of the underlying event and will give important contributions in many channels of interest for the search of new physics. Different numbers of multiple collisions may contribute to the production of a given final state and one should expect important interference effects in the regime where different contributions have similar rates. We show, on the contrary, that, once multiple parton interactions are identified by their different topologies, terms with different numbers of multiple parton interactions do not interfere in the final cross section.' author: - 'G. Calucci' - 'D. Treleani' title: Incoherence and Multiple Parton Interactions --- Introduction ============ The growing flux of partons at high energy will increase considerably the chances of having inelastic events where more than a single pair of partons interact with large momentum exchange at the LHC[@HERA-LHC][@Perugia][@Kulesza:1999zh][@Acosta:2004wqa][@Acosta:2006bp][@Hussein:2007gj][@Maina:2009vx][@Domdey:2009bg]. The phenomenon originates from the increasingly large flux of partons at small fractional momenta. Once the final state is fixed the parton flux is maximized in the channel where the hard component of the interaction is maximally disconnected[@Paver:1984ux]. In the resulting picture of the process, the dominant contribution at high energy is hence given by a set of Multiple Parton Interactions (MPI) where different pairs of partons collide independently in different points inside the overlap volume of the two interacting hadrons[@Sjostrand:1987su][@Ametller:1987ru][@Rogers:2008ua]. On the other hand, although the contribution with the largest number of initial state partons dominates at large hadron-hadron center of mass energies, a given final state may be generated by various competing processes, characterized by different numbers of partonic collisions[@Maina:2009vx]. Interactions with different numbers of partonic collisions populate in fact the final state phase space in a different way and one may always find kinematical regions where terms with different numbers of collisions give similar contributions to the cross section. In those kinematical regions, important interference effects between different production mechanisms should be expected. The problem of interferences between terms with different numbers of collisions was, on the other hand, never discussed in the literature, while all theoretical estimates have always assumed incoherence between contributions with different numbers of parton collisions, obtaining results not in contradiction with the available experimental evidence[@Akesson:1986iv][@Abe:1997bp][@Abe:1997xk][@Abazov:2009wy]. The purpose of the present note is to gain some understanding of the problem by looking at the kinematics of the different terms. After reminding the reader of the kinematical argument, which leads to the geometrical picture of MPI processes, we will analyze an interference diagram. The comparison between diagonal and off-diagonal terms in the cross section will allow us to draw some general conclusions. MPI diagonal scattering diagram =============================== In Fig.1 we show the cut diagram representing the contribution to the forward amplitude of a process with $n$ partonic collisions.
![Unitarity diagram for the multi parton scattering cross section[]{data-label="fig:MPI"}](Fig1.pdf){width="13cm"} To study the kinematics we simplify the problem by limiting our discussion to the scalar case[@Paver:1982yp]. We consider moreover all partons as identical particles. The soft vertices $\phi$ are assumed to be characterized by a non-perturbative scale of the order of the hadron radius $R$, which represents the (energy independent) scale of the transverse momenta and virtualities of the attached lines. To be definite, we consider the specific multi-parton interaction process where each elementary interaction $T_i$, represented by the squares in the figure, generates two large $p_t$ partons with momenta $p_i$ and $\bar p_i$. Momentum conservation in the vertices limits the number of four dimensional variables to $3n-1$. Defining $$\begin{aligned} \delta_i=\frac{1}{2}(a_i-a_{i+1}),\qquad \delta_i'=\frac{1}{2}(a_i'-a_{i+1}'),\qquad P_i=p_i+\bar p_i,\end{aligned}$$ in such a way that $a_i+b_i=P_i=a_i'+b_i'$, one may choose as independent variables: $$\begin{aligned} \begin{cases} P_i,&n-\text{variables},\\ \delta_i,&(n-1)-\text{variables},\\ \delta_i',&(n-1)-\text{variables},\\ k\quad\text{or}\quad l, \end{cases}\end{aligned}$$ since the overall momentum conservation is $\sum_iP_i+k+l=P_A+P_B$. Sometimes the auxiliary dependent variables $$\begin{aligned} \bar\delta_i=\frac{1}{2}(b_i-b_{i+1})=\frac{1}{2}(P_i-P_{i+1})-\delta_i\end{aligned}$$ will be used. The four momentum variables will be represented in the light-cone form and the longitudinal and transverse components will be studied separately. The vertices $\phi$ are “soft” and represent the non-perturbative partonic content of the hadron. In the hadron c.m. one has $$\begin{aligned} &&a_{\perp}\lesssim\frac{1}{R}\nonumber\\ &&a_z\lesssim\frac{1}{R}\\ &&a^2\lesssim\frac{1}{R^2}\to a_0\lesssim\frac{1}{R},\nonumber\end{aligned}$$ where $R$ is the hadron radius. By performing a boost to reach the c.m. frame of the two interacting hadrons, one obtains $$\begin{aligned} \begin{cases} a_{\perp},\ b_{\perp}\lesssim\frac{1}{R}\\ a_{+},\ b_{-}\lesssim\frac{\sqrt {\cal S}}{2M}\frac{1}{R}\\ a_{-},\ b_{+}\lesssim\frac{2M}{\sqrt {\cal S}}\frac{1}{R}\\ P_i^2=(a_i+b_i)^2\lesssim\frac{{\cal S}}{4M^2}\frac{1}{R^2}, \end{cases}\end{aligned}$$ where ${\cal S}=(P_A+P_B)^2$ is the hadron-hadron c.m. energy and $M$ is a scale of the order of the hadron mass. When looking at $\delta_{i-}$ and $\bar\delta_{i+}$ one hence finds that both variables become smaller and smaller at large c.m. energies: $$\begin{aligned} \begin{cases} &\delta_{i-}\lesssim\frac{2M}{\sqrt {\cal S}}\frac{1}{R}\\ &\bar\delta_{i+}\lesssim\frac{2M}{\sqrt {\cal S}}\frac{1}{R}. \end{cases}\end{aligned}$$ The variables $\delta_{i-}$ are thus relevant only for the vertex $\phi_A$ and for the propagators of the lines with momenta $a_i$, since the kinematical range of $a_{i-}$ is of ${\cal O}(2M/R\sqrt {\cal S})$. At the leading order, one needs in fact to take into account only the kinematical variables which grow as $\sqrt {\cal S}$ in the hard interaction vertices, while the lower vertex $\phi_B$ and the propagators of the lines with momenta $b_i$ will remain practically constant for variations of $\delta_-$ of ${\cal O}(2M/R\sqrt {\cal S})$, as the kinematical range of $b_{i-}$ is of ${\cal O}(\sqrt {\cal S})$.
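To give an idea of the scales involved (an illustrative estimate of ours, not taken from the original text), with LHC-like values $\sqrt{\cal S}\approx 14$ TeV, $M\approx 1$ GeV and $1/R\approx 0.2$ GeV one finds $$\delta_{i-}\lesssim\frac{2M}{\sqrt{\cal S}}\,\frac{1}{R}\approx\frac{2\times 1\ {\rm GeV}}{1.4\times10^{4}\ {\rm GeV}}\times0.2\ {\rm GeV}\approx3\times10^{-5}\ {\rm GeV},$$ which is indeed negligible with respect to the scales $1/R$ and $\sqrt{\cal S}$ over which the soft vertices and the $b_i$ propagators vary.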
Conversely the variables $\bar\delta_{i+}$ are relevant only for the vertex $\phi_B$ and for the propagators of the lines with momenta $b_i$, where all “+” components have a kinematical range of ${\cal O}(2M/R\sqrt {\cal S})$. Similar considerations hold for the variables $a_i'$, $b_i'$, $\delta_{i-}'$, $\bar\delta_{i+}'$ and for the vertices $\phi_A'$, $\phi_B'$. One may hence define: $$\begin{aligned} &&\psi_A\bigl(a_{1+}\dots a_{n+};a_{1\perp}\dots a_{n\perp};k_-;j \bigr)\equiv\int\frac{\phi_A(a_1\dots a_n,k;j)}{\prod_i^n a_i^2}\prod_i^{n-1} \frac{d\delta_{i-}}{2\pi}\Big|_{\bar\delta_{i+}=0}\nonumber\\ &&\psi_B\bigl(b_{1-}\dots b_{n-};b_{1\perp}\dots b_{n\perp};l_+;j'\bigr)\equiv\int\frac{\phi_B(b_1\dots b_n,l;j')}{\prod_i^n b_i^2}\prod_i^{n-1} \frac{d\bar\delta_{i+}}{2\pi}\Big|_{\delta_{i-}=0}\end{aligned}$$ where $\ k_+,\ k_{\perp}$ are given in terms of $a_{i+},\ a_{i\perp}$ and $\ l_-,\ l_{\perp}$ are given in terms of $b_{i-},\ b_{i\perp}$, while the value of $k_-$ is determined by the values of $k_+,\ k_{\perp}$ and by the value of the invariant mass of the remnants of the hadron $A$, $(P_A-k)^2$. $l_+$ is similarly determined by $l_-,\ l_{\perp}$ and by the value of the invariant mass of the remnants of the hadron $B$, $(P_B-l)^2$. All other variables which characterize the remnants of $A$ and $B$ are labeled by the indices $j$ and $j'$ respectively. One has: $$\begin{aligned} &&a_i+b_i=P_i=a_i'+b_i'\nonumber\\ &&a_{i+}+b_{i+}=a_{i+}'+b_{i+}'\simeq a_{i+}\simeq a_{i+}'=P_{i+}\\ &&a_{i-}+b_{i-}=a_{i-}'+b_{i-}'\simeq b_{i-}\simeq b_{i-}'=P_{i-}.\nonumber\end{aligned}$$ The “+” and “-” components $\frac{a_{i+}}{\sqrt {\cal S}},\ \frac{b_{i-}}{\sqrt {\cal S}}$ are the “+” or “-” fractional momenta $x_i^A,\ x_i^B$ and are given by the final state observable quantities $P_{i+}$ and $P_{i-}$. All longitudinal variables are hence either integrated or determined by the final state observables. In particular, taking into account only the terms which grow with $\sqrt {\cal S}$, one has that $x_i^A=x_i'^A$, $x_i^B=x_i'^B$. As we are particularly interested in the structure of the interaction in the transverse plane, we express the elementary interaction amplitude as a two-dimensional Fourier transform with respect to the relative transverse distance $r_{i}$: $$\begin{aligned} T(\lambda_i, t_{i\perp})=\frac{1}{2\pi}\int {\, e^{\,\,\textstyle {i t_{i\perp}\cdot r_i}}}\tilde T(\lambda_i,r_i)d^2 r_i\end{aligned}$$ where $t_i=\frac{1}{2}(a_i-p_i-b_i+\bar p_i)$ is the momentum transfer and all longitudinal variables are summarized by $\lambda_i$. The multiparton cross section may hence be obtained by performing the two-dimensional integrations on $r_i,\ r'_i$ and in the following variables: $$\begin{aligned} \begin{cases} &P_{i\perp}=a_{i{\perp}}+b_{i{\perp}}=a_{i{\perp}}'+b_{i{\perp}}',\qquad n\ \text{variables},\\ &q_{i{\perp}}=(a_{i{\perp}}-b_{i{\perp}})/2,\qquad n\ \text{variables},\\ &q_{i{\perp}}'=(a_{i{\perp}}'-b_{i{\perp}}')/2,\qquad n\ \text{variables}. \end{cases}\end{aligned}$$ Momentum conservation imposes however a constraint on the integration variables. As the incoming hadrons have vanishing transverse momenta one has $$\begin{aligned} k_{\perp}+\sum_ia_{i{\perp}}=k_{\perp}+\sum_ia_{i{\perp}}'=0,\qquad l_{\perp}+\sum_ib_{i{\perp}}=l_{\perp}+\sum_ib_{i{\perp}}'=0,\end{aligned}$$ which imply $$\begin{aligned} \sum_i(q_{i{\perp}}-q_{i{\perp}}')=0\end{aligned}$$ in such a way that the cross section is given by $5n-1$ independent two-dimensional integrations.
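As a quick bookkeeping check (ours), for $n=2$ the counting reads $$\underbrace{2n}_{r_i,\ r_i'}\;+\;\underbrace{3n}_{P_{i\perp},\ q_{i\perp},\ q_{i\perp}'}\;-\;\underbrace{1}_{\sum_i(q_{i\perp}-q_{i\perp}')=0}\;=\;5n-1\;=\;9$$ independent two-dimensional integrations.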
The integrals on the transverse components are suitably performed by representing the $\psi$ functions as two-dimensional Fourier transforms with respect to the transverse parton coordinates $s_i,\bar s_i, s_i', \bar s_i'$: $$\begin{aligned} \psi_A\bigl(x_1^A\dots x_n^A;\ a_{1\perp}&\dots& a_{n\perp};(P_A-k)^2;j\bigr)\nonumber\\&=&\int\tilde\psi_A\bigl(x_1^A\dots x_n^A;\ s_1\dots s_n;(P_A-k)^2;j\bigr)\prod_{i=1}^n{\, e^{\,\,\textstyle {i a_{i\perp}\cdot s_i}}}\frac{d^2s_i}{2\pi}\nonumber\\ \psi_A^*\bigl(x_1^A\dots x_n^A;\ a_{1\perp}'&\dots& a_{n\perp}';(P_A-k)^2;j\bigr)\nonumber\\&=&\int\tilde\psi_A^*\bigr(x_1^A\dots x_n^A;\ s_1'\dots s_n';(P_A-k)^2;j\bigr)\prod_{i=1}^n{\, e^{\,\,\textstyle {-i a_{i\perp}'\cdot s_i'}}}\frac{d^2s_i'}{2\pi}\nonumber\\ \psi_B\bigl(x_1^B\dots x_n^B;\ b_{1\perp}&\dots& b_{n\perp};(P_B-l)^2;j'\bigr)\nonumber\\&=&\int\tilde\psi_B(x_1^B\dots x_n^B;\ \bar s_1\dots\bar s_n;(P_B-l)^2;j'\bigr)\prod_{i=1}^n{\, e^{\,\,\textstyle {i b_{i\perp}\cdot \bar s_i}}}\frac{d^2\bar s_i}{2\pi}\nonumber\\ \psi_B^*\bigl(x_1^B\dots x_n^B;\ b_{1\perp}'&\dots& b_{n\perp}';(P_B-l)^2;j'\bigr)\nonumber\\&=&\int\tilde\psi_B^*\bigl(x_1^B\dots x_n^B;\ \bar s_1'\dots\bar s_n';(P_B-l)^2;j'\bigr)\prod_{i=1}^n{\, e^{\,\,\textstyle {-i b_{i\perp}'\cdot \bar s_i'}}}\frac{d^2\bar s_i'}{2\pi},\nonumber\\\end{aligned}$$ where the longitudinal components are expressed through the fractional momenta $x_i^A,\ x_i^B$. The constraint in Eq.(12) is imposed by introducing the further integration $$\begin{aligned} \frac{1}{(2\pi)^2}\int {\, e^{\,\,\textstyle {i\beta\sum(q_{i{\perp}}-q_{i{\perp}}')}}}d^2\beta\end{aligned}$$ where $\beta$ is the hadronic impact parameter. One may integrate on $\prod dP_{\perp},\ \prod dq_{\perp},\ \prod dq_{\perp}'$ by using the relations $$\begin{aligned} a_{i\perp}&=&\frac{1}{2}P_{i\perp}+q_{i\perp},\qquad b_{i\perp}=\frac{1}{2}P_{i\perp}-q_{i\perp}\nonumber\\ a_{i\perp}'&=&\frac{1}{2}P_{i\perp}+q_{i\perp}',\qquad b_{i\perp}'=\frac{1}{2}P_{i\perp}-q_{i\perp}'\\ t_{i\perp}&=& q_{i\perp}-\frac{1}{2}(p_{i\perp}-\bar p_{i\perp}),\qquad t_{i\perp}'=q_{i\perp}'-\frac{1}{2}(p_{i\perp}-\bar p_{i\perp}).\nonumber\end{aligned}$$ One obtains $$\begin{aligned} \int \frac{dP_{i\perp}}{(2\pi)^2}&\to& \delta\big((s_i+\bar s_i-s'_i-\bar s'_i)/2\big)\nonumber\\ \int \frac{dq_{i\perp}}{(2\pi)^2}&\to& \delta(s_i-\bar s_i+r_i+\beta)\nonumber\\ \int \frac{dq_{i\perp}'}{(2\pi)^2}&\to& \delta(s_i'-\bar s_i'+r_i'+\beta)\end{aligned}$$ which is equivalent to $\delta\bigr(s_i-s'_i-\frac{1}{2}(r_i'-r_i)\bigl)\delta (s_i-s'_i+\bar s_i-\bar s'_i)\delta(s_i-\bar s_i+r_i+\beta)$. The integrations on $r_i$ and $r_i'$ involve the factor $$\begin{aligned} \frac{1}{(2\pi)^2}\int {\, e^{\,\,\textstyle {i \frac{r_i-r_i'}{2}\cdot(t_{i\perp}-t_{i\perp}')}}}\tilde T(\lambda_i,r_i)\tilde T^*(\lambda_i,r_i')d^2 r_id^2 r_i'.\end{aligned}$$ The difference $\frac{1}{2}(r_i'-r_i)$ hence represents a measure of the localization of the interaction. As it appears from eq.(9), typical values of $r_i$ and $r'_i$ are of the order of $1/t_{i\perp}$, which for large transverse momenta is equal to $1/p_{i\perp}$. When the momentum exchanged in the elementary interaction is large, $r_i$ becomes much smaller as compared with the hadron size and it may be neglected, as well as $r'_i$, everywhere except in the expression in Eq.17. The dominant contribution at large c.m. 
energy and at large exchanged momenta is thus obtained by making the following replacements in the evaluation of the discontinuity of the diagram in Fig.1: $$\begin{aligned} \delta\bigr(s_i-s'_i-\frac{1}{2}(r_i'-r_i)\bigl)\delta (s_i-s'_i+\bar s_i-\bar s'_i)\delta(&s_i&-\bar s_i+r_i+\beta)\nonumber\\ \Longrightarrow&&\delta\bigr(s_i-s'_i\bigl)\delta\bigl(\bar s_i-\bar s'_i\bigl)\delta(s_i-\bar s_i+\beta)\nonumber\\ \frac{1}{(2\pi)^2}\int {\, e^{\,\,\textstyle {i \frac{r_i-r_i'}{2}\cdot(t_{i\perp}-t_{i\perp}')}}}\tilde T(\lambda_i,r_i)\tilde T^*(&\lambda_i&,r_i')d^2 r_id^2 r_i'\nonumber\\ \Longrightarrow &&|T(x_i,x'_i|p_i,\bar p_i)|^2,\end{aligned}$$ where only the kinematical components which grow with ${\cal S}$ are taken into account in the evaluation of $T$. One is hence left with the integrations on $\int d(P_A-k)_-d(P_B-l)_+d\beta\prod ds_i$, while the inclusive cross section depends on $|\tilde\psi_A|^2$ and $|\tilde\psi_B|^2$. Explicitly: $$\begin{aligned} \sigma_n=\frac{1}{2{\cal S}n!}\int&& \sum_j\big|\tilde\psi_A\bigl(x_1^A\dots x_n^A;s_1\dots s_n;(P_A-k)^2;j\bigr)\big|^2\nonumber\\ &\times&\sum_{j'}\big|\tilde\psi_B\bigl(x_1^B\dots x_n^B;s_1-\beta\dots s_n-\beta;(P_B-l)^2;j'\bigr)\big|^2\nonumber\\ &\times& d(P_A-k)_-d(P_B-l)_+\frac{d^2\beta}{(2\pi)^2}\Bigl[\frac{n}{2^{n-1}}\Bigr]^4\prod_i^n\frac{d^2s_i}{2(2\pi)^2} \nonumber\\ & \times&\big|T(x_i^A,x_i^B|p_i,\bar p_i)\big|^2d\Phi_i \;\Bigl(\frac{\cal S}{2}\Bigr)^n \;dx_1^A\dots dx_n^A\;dx_1^B\dots dx_n^B\end{aligned}$$ where $\Phi_i$ is the invariant adimensional final state phase space of the elementary interaction $a_i+b_i\to p_i+\bar p_i$ and all elementary interactions are considered as indistinguishable. The factors $2$, $\pi$ etc. originate from the Jacobian which leads to the variables in Eq.19. After multiplying and dividing by the flux factors $2{\cal S}x_i^Ax_i^B$, one may introduce the elementary partonic cross sections $$\begin{aligned} \hat\sigma(x_i^A,x_i^B;p_{cut})=\frac{1}{2{\cal S}x_i^Ax_i^B}\int_{p_{i\perp}>p_{cut}}\big|T(x_i^A,x_i^B|p_i,\bar p_i)\big|^2d\Phi_i\end{aligned}$$ where $p_{cut}$ is a cutoff in the transverse momenta of final state partons, introduced to allow to compute the cross section in perturbative QCD. The multi-parton densities $\Gamma(x_i;s_i)$ are hence defined as $$\begin{aligned} \Gamma(x_1\dots x_n;s_1\dots s_n)=\frac{1}{(2\pi)^{n+1}}&&\frac{n^2}{4^{n-1}}\frac{{\cal S}^{n-1}\prod_i^nx_i}{\sqrt 2(1-\sum_i^nx_i)}\nonumber\\\times&&\int\sum_j \big|\tilde\psi\bigl(x_1\dots x_n;s_1\dots s_n;(P-k)^2;j\bigr)\big|^2 d(P-k)^2\end{aligned}$$ and the cross section is finally expressed by $$\begin{aligned} \sigma_n=\frac{1}{n!}\int&&\Gamma_A(x_1^A\dots x_n^A;s_1\dots s_n)\nonumber\\ &\times&\prod_i\hat\sigma(x_i^A,x_i^B;p_{cut})\Gamma_B(x_1^B\dots x_n^B;s_1-\beta\dots s_n-\beta)d^2\beta d^2s_idx_i^Adx_i^B,\end{aligned}$$ which represents the superposition of $n$ elementary collisions localized in regions with transverse size of the order of $1/p_{cut}$, much smaller as compared with the hadron size, and with the mean value of the transverse coordinates $s_i$. Interference terms ================== There are two different ways of producing interference terms. A possibility is to have a different number of hard collisions on the left and on the right hand side of the cut. Another possibility is to have the same number of hard collisions on both sides of the cut, in which case interferences are produced by reshuffling the final states of the hard collision terms. 
We will first consider a case of interference between a term with $n$ and a term with $n-1$ collisions; the corresponding unitarity diagram is shown in Fig.2. ![Interference term[]{data-label="fig:interference"}](Fig2.pdf){width="13cm"} In this case the independent variables are $3n-2$: $$\begin{aligned} \begin{cases} P_i&n-\text{variables},\\ \delta_i&(n-2)-\text{variables},\\ \delta_i'&(n-1)-\text{variables},\\ k\quad\text{or}\quad l. \end{cases}\end{aligned}$$ The longitudinal variables may be discussed as in the previous case. The obvious difference is in the resulting expression, which is no longer a modulus squared. The main point for the present considerations concerns the integrations on the transverse variables. Analogously to the diagonal case, the integrations on the transverse variables are conveniently discussed taking the Fourier transforms of the functions $\psi$: $$\begin{aligned} \psi_{A,n-1}\bigl(x_0^A,x_3^A&\dots& x_n^A; a_{0\perp},a_{3\perp}\dots a_{n\perp};(P_A-k)^2;j\bigr)\nonumber\\&=&\int\tilde\psi_{A,n-1}\bigl(x_0^A,x_3^A\dots x_n^A;\ s_0,s_3\dots s_n;(P_A-k)^2;j\bigr)\prod_{i=1}^n{\, e^{\,\,\textstyle {i a_{i\perp}\cdot s_i}}}\frac{d^2s_i}{2\pi}\nonumber\\ \psi_{B,n-1}\bigl(x_0^B,x_3^B&\dots& x_n^B;\ b_{0\perp},b_{3\perp}\dots b_{n\perp};(P_B-l)^2;j\bigr)\nonumber\\&=&\int\tilde\psi_{B,n-1}(x_0^B,x_3^B\dots x_n^B;\ \bar s_0,\bar s_3\dots\bar s_n;(P_B-l)^2;j\bigr)\prod_{i=1}^n{\, e^{\,\,\textstyle {i b_{i\perp}\cdot \bar s_i}}}\frac{d^2\bar s_i}{2\pi},\end{aligned}$$ where a label representing the number of interacting parton lines ($n-1$ in this case) has been introduced. The complex conjugate functions $\tilde\psi^*_n$ are the same as in the diagonal case. As discussed in the diagonal case, the large transverse momenta exchanged in each elementary collision localize the interactions in transverse space regions much smaller as compared to the hadron radius. While the discussion of the transverse variables may be done following the lines of the diagonal case, the treatment may be simplified by neglecting from the start the distances between the interacting partons $r_i,\ r_i'$ in comparison with the parton coordinates $s_i,\ s_i',\ \bar s_i,\ \bar s_i'$. Introducing the variable $q_0=a_0-b_0$ one has $$\begin{aligned} \begin{cases} a_0=\frac{1}{2}(P_1+P_2+q_0)\\ b_0=\frac{1}{2}(P_1+P_2-q_0). \end{cases}\end{aligned}$$ The integration on $q_{0\perp}$ gives $$\begin{aligned} \int dq_{0\perp}&\to&\delta(s_0-\bar s_0+\beta),\end{aligned}$$ the integrations on $q_1',\ q_2'$ (previously defined) give $$\begin{aligned} \int dq_{1\perp}'&\to&\delta(s_1'-\bar s_1'+\beta)\nonumber\\ \int dq_{2\perp}'&\to&\delta(s_2'-\bar s_2'+\beta)\end{aligned}$$ and all other integrations on $q_i,\ q_i'$ are the same as in the diagonal case. At this stage the functions $\tilde\psi$ depend on the transverse variables as follows (the dependence on the fractional momenta and on the invariant mass of the remnants of the hadron is implicit) $$\begin{aligned} \tilde\psi_{A,n-1}(s_0,s_3\dots)\tilde\psi_{A,n}^*(s_1',s_2',s_3'\dots)\tilde\psi_{B,n-1}(s_0-\beta,s_3-\beta\dots)\tilde\psi_{B,n}^*(s_1'-\beta,s_2'-\beta,s_3'-\beta\dots).\nonumber\\\end{aligned}$$ One may now perform the integrations on $P_{1\perp},\ P_{2\perp}$. The result is $$\begin{aligned} \int dP_{1\perp}&\to&\delta(s_0+\bar s_0-s_1'-\bar s_1')\nonumber\\ \int dP_{2\perp}&\to&\delta(s_0+\bar s_0-s_2'-\bar s_2'),\nonumber\\\end{aligned}$$ which is equivalent to $\delta(s_0-s_1')\delta(s_0-s_2')$.
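For completeness we spell out the elementary algebra behind the last statement (our addition): combining the constraints obtained above, $\bar s_0=s_0+\beta$ and $\bar s_1'=s_1'+\beta$, one has $$s_0+\bar s_0-s_1'-\bar s_1'=2(s_0-s_1'),\qquad\text{so that}\qquad \delta(s_0+\bar s_0-s_1'-\bar s_1')\propto\delta(s_0-s_1'),$$ and the same argument applied to the second integration gives $\delta(s_0-s_2')$.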
All other integrations on $P_{i\perp}$ give the same result as in the diagonal case. One hence obtains that two transverse variables coincide in $\tilde\psi_{A,n}^*$ and $\tilde\psi_{B,n}^*$ and the cross section is proportional to the integral $$\begin{aligned} \int&&\tilde\psi_{A,n-1}(s_0,s_3\dots s_n)\tilde\psi_{A,n}^*(s_0,s_0,s_3\dots s_n)\nonumber\\ \times&&\tilde\psi_{B,n-1}(s_0-\beta,s_3-\beta\dots s_n-\beta)\tilde\psi_{B,n}^*(s_0-\beta,s_0-\beta,s_3-\beta\dots s_n-\beta)ds_0d\beta\prod_{i=3}^nds_i\nonumber\\\end{aligned}$$ Analogously to the diagonal case, $s_0,s_3\dots s_n$ represent the transverse coordinates of the positions of the interaction regions. One may hence conclude that, in the interference term between an $n$- and an $(n-1)$-collision amplitude, the hard component of the interaction is localized in $n-1$ points in transverse space. When the number of hard collisions is the same on both sides, a non-diagonal contribution may be obtained by linking, across the cut in Fig.1, two different collision amplitudes through the produced large $p_t$ final states. In such a case one obtains that the positions of the two hard interactions are localized within the same region, with size of ${\cal O} (r_{\perp}^2)$, which implies that the number of integrations in the transverse coordinates $s_{i\perp}$ is reduced by one unit with respect to the diagonal case. Also in this case the interference term hence corresponds to a case where the hard interaction is localized in $n-1$ points in transverse space. Rather obviously, further crossings of the hard ($p,\ \bar p$) lines would further reduce the number of transverse integrations and hence the number of points in transverse space where the hard component of the interaction is localized. Concluding discussion ===================== A given multi-parton final state may be produced by interactions involving different numbers of partons in the initial state and the cross section, resulting from the coherent sum of all different terms, is expressed by a sum of diagonal and off diagonal contributions. As shown in Sec.2, the diagonal contribution, corresponding to a term with $n$ partons in the initial state, is given by the incoherent superposition of $n$ disconnected parton interactions, localized in $n$ different points in transverse space. As shown in Sec.3, the hard component of the interaction corresponding to off diagonal contributions is disconnected and localized in no more than $n-1$ points in transverse space. One may hence argue that interference terms do not represent corrections to the $n$-parton scattering inclusive cross section. They rather correct the $(n-1)$-parton (or lower) scattering inclusive cross section. Partons are in fact localized in the hadron by the momenta exchanged in the interaction. When partons are localized inside [*non overlapping regions*]{}, much smaller as compared to the hadron size, they are connected to one another only through soft exchanges and the picture of independent parallel collisions described in section 2 is a meaningful one. If, on the contrary, partons are localized by the interaction inside [*overlapping regions*]{}, much smaller as compared to the hadron size, they are allowed to interact by exchanging momenta of the size of their virtuality, which implies that the evaluation of the interference term, as discussed in section 3, is no longer adequate.
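A simple order-of-magnitude illustration of this separation of scales (our estimate): for transverse momenta $p_{i\perp}\sim10$ GeV the interaction regions have size $$r_i\sim\frac{1}{p_{i\perp}}\approx\frac{0.2\ {\rm GeV\,fm}}{10\ {\rm GeV}}\approx2\times10^{-2}\ {\rm fm}\;\ll\;R\sim1\ {\rm fm},$$ so that hard interactions taking place at generic transverse separations of order $R$ occupy non-overlapping regions and the incoherent picture applies, while the overlapping configuration is confined to a tiny fraction of the transverse phase space.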
In Fig.3 we show an interference diagram between a single collision amplitude, where two partons interact at tree level producing four large $p_t$ partons, and an amplitude, where two partons, generated by the same short distance quantum fluctuation in the hadron $B$, interact with two partons of the hadron $A$, producing two pairs of large $p_t$ partons. ![A particular case of interference diagram[]{data-label="fig:interference between terms at different orders in g"}](Fig3.pdf){width="13cm"} As it appears looking at Fig.3, because of the localization of the hard component of the interaction, the problem of interference is strictly linked to the problem of evaluating the single scattering amplitude at higher orders in the coupling constant and including higher twists in the hadron structure. One may hence conclude that a convenient way to distinguish the different terms in an MPI process is by their different topologies: As a consequence of the different scales involved in the interaction, namely the hadron size and the large momenta exchanged, the structure of the hard component may be disconnected, with the different hard parts linked only through soft exchanges. The disconnected parts of the hard interaction are localized in different regions in transverse space, inside the overlap of the matter distribution of the two interacting hadrons. The different MPI terms are to be understood as the contributions to the final state due to the different disconnected parts of the hard component of the interaction. In the instance of a single scattering the hard component is wholly connected and hence localized inside a single region in transverse space. In the simplest case, a single interaction is well described by the simple QCD-parton model recipe, with the parton interaction cross section evaluated at the lowest order in the coupling constant and convoluted with the parton distributions. In other cases, one may need to take into account higher order terms in the coupling constant to evaluate the partonic cross section and/or to include higher twist terms in the parton distributions. The effects of higher order terms and of higher twists may be more important when dealing with final states with several large $p_t$ partons. A multiple interaction, on the other hand, has to be understood as a process where the hard part of the interaction is disconnected and localized in a number of different regions in transverse space. In the simplest case, in each different region the interaction may be evaluated at the lowest order in the coupling constant. The main observation in the present note is that when MPI are understood in the topological sense described above, different MPI terms, corresponding to different localizations in transverse space, namely to different topologies, do not interfere and the final cross section is obtained simply by the superposition of the cross sections due to the contributions of the different topologies of the hard component of the interaction. The topological feature, on the other hand, also represents the property which allows one to recognize the contribution to the final state due to MPI. In each single parton collision all transverse momenta need to balance, and an MPI process contributes to the cross section by generating different groups of final state partons where the large transverse momenta are compensated separately.
The compensation of transverse momenta within different subsets of large $p_t$ final state partons was in fact the feature which allowed the experimental identification and study of double parton collisions\[13-16\]. Interestingly, in a recent study of MPI at LHC energies it was shown that, requiring the compensation of transverse momenta within different subsets of large $p_t$ final state partons, one may expect to be able to isolate contributions due to triple parton collisions also in channels with relatively low cross sections and with relatively large transverse momenta[@Maina:2009vx]. [9]{} http://www.desy.de/ heralhc/ http://www.pg.infn.it/mpi08/ A. Kulesza and W. J. Stirling, Phys. Lett.  B [**475**]{}, 168 (2000) \[arXiv:hep-ph/9912232\]. D. E. Acosta [*et al.*]{} \[CDF Collaboration\], Phys. Rev.  D [**70**]{}, 072002 (2004) \[arXiv:hep-ex/0404004\]. D. Acosta, F. Ambroglini, P. Bartalini, A. De Roeck, L. Fano, R. Field and K. Kotov, “The underlying event at the LHC,” CERN-CMS-NOTE-2006-067. M. Y. Hussein, arXiv:0710.0203 \[hep-ph\]. E. Maina, JHEP [**0904**]{}, 098 (2009) \[arXiv:0904.2682 \[hep-ph\]\]. S. Domdey, H. J. Pirner and U. A. Wiedemann, arXiv:0906.4335 \[hep-ph\]. N. Paver and D. Treleani, Z. Phys.  C [**28**]{}, 187 (1985). T. Sjostrand and M. van Zijl, Phys. Rev.  D [**36**]{}, 2019 (1987). L. Ametller and D. Treleani, Int. J. Mod. Phys.  A [**3**]{}, 521 (1988). T. C. Rogers, A. M. Stasto and M. I. Strikman, Phys. Rev.  D [**77**]{}, 114009 (2008) \[arXiv:0801.0303 \[hep-ph\]\]. T. Akesson [*et al.*]{} \[Axial Field Spectrometer Collaboration\], Z. Phys.  C [**34**]{}, 163 (1987). F. Abe [*et al.*]{} \[CDF Collaboration\], Phys. Rev. Lett.  [**79**]{}, 584 (1997). F. Abe [*et al.*]{} \[CDF Collaboration\], Phys. Rev.  D [**56**]{}, 3811 (1997). V. M. Abazov [*et al.*]{} \[D0 Collaboration\], arXiv:0906.5326 \[hep-ex\]. N. Paver and D. Treleani, Nuovo Cim.  A [**70**]{}, 215 (1982).
--- abstract: 'We enumerate the number of complex irreducible representations of each degree of general unitary groups of degree $4$ over principal ideal local rings of length two.' author: - | Matthew Levy\ \ Bielefeld University bibliography: - 'refs.bib' title: Enumerating representations of general unitary groups over principal ideal rings of length $2$ --- Introduction ============ Let $F$ be a non-Archimedean local field with ring of integers $\mathfrak{o}$ and let $\mathfrak{p}$ be the unique maximal ideal of $\mathfrak{o}$. Assume that the residue field $\bold{k}=\mathfrak{o}/\mathfrak{p}$ is finite of order $q$ and characteristic $p$. This paper concerns groups of the form $\operatorname{\bold{G}}=\operatorname{\bold{G}}(\mathfrak{o})$ where $\operatorname{\bold{G}}$ is one of the $\mathfrak{o}$-group schemes of type $\operatorname{A}_{3}$, i.e. $\operatorname{GL}_4$ or $\operatorname{GU}_4$. Here, the groups $\operatorname{GU}_n(\mathfrak{o})$ are defined over $\mathfrak{o}$ using the non-trivial Galois automorphism of an unramified quadratic extension of $\mathfrak{o}$. For $l\in\mathbb{N}$ we denote by $\mathfrak{o}_l$ the reduction of $\mathfrak{o}$ modulo $\mathfrak{p}^l$, i.e. $\mathfrak{o}_l=\mathfrak{o}/\mathfrak{p}^l$. We will simply write $\operatorname{\bold{G}}^{\epsilon}_n$ where $\epsilon\in\{\pm 1\}$ to denote $\operatorname{\bold{G}}^{1}_n = \operatorname{GL}_n$ and $\operatorname{\bold{G}}^{-1}_n=\operatorname{GU}_n$. We define the representation zeta function of a group $G$ to be the sum $$\zeta_G(s):=\sum_{\chi\in\operatorname{Irr}(G)}\chi(1)^{-s},$$ where we sum over the complex irreducible characters of $G$, $\operatorname{Irr}(G)$, and $s$ is a complex variable. The groups $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o})$ play an important role in the representation theory of the groups $\operatorname{\bold{G}}^{\epsilon}_n(F)$, being maximal compact subgroups. Furthermore, every continuous representation of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o})$ factors through one of the natural homomorphisms $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o})\rightarrow\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$. This brings the study of representations of the groups $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$ to the forefront. The study of representations of groups of type $\operatorname{A}_{n-1}$ has attracted much attention. In 1955 Green [@Green] described the characters of the complex irreducible representations of general linear groups over finite fields, i.e. groups of the form $\operatorname{GL}_n(\mathbb{F}_q)$. In the 1960s Ennola [@Ennola1; @Ennola2] gave a description of the characters of the general unitary groups over finite fields and made a curious observation regarding the relationship between characters of general linear and general unitary groups. This ‘Ennola Duality’ is discussed in more detail in Section \[ennola\]. Recently, Avni, Onn, Klopsch & Voll [@AKOV3] have developed explicit formulae for the representation zeta functions of the groups $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_l)$. Singla [@Pooja] has described the representation zeta function of the groups $\operatorname{GL}_4(\mathfrak{o}_2)$, general linear groups over principal ideal rings of length two. In Theorem \[main\], the main result of this paper, we give a uniform description of the representation zeta function of the groups $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$ for $\epsilon\in\{\pm 1\}$.
We impose no restriction on the residue characteristic of $\mathfrak{o}$. Ennola Duality {#ennola} -------------- In the 1960s Ennola (see [@Ennola1; @Ennola2]) observed a duality between the character tables of the groups $\operatorname{GL}_n(\mathbb{F}_q)$ and $\operatorname{GU}_n(\mathbb{F}_q)$. In particular, he noted that there exists a finite index set $I=I(n)$ and polynomials $g_i\in\mathbb{Z}[t]$, $i\in I$ such that $$\operatorname{cd}(\operatorname{GL}_n(\mathbb{F}_q)) = \{g_i(q):i\in I\}\mbox{ and }\operatorname{cd}(\operatorname{GU}_n(\mathbb{F}_q)) = \{(-1)^{\deg(g_i)}g_i(-q):i\in I\},$$ where $\operatorname{cd}(G)=\{\chi(1)\,:\,\chi\in\operatorname{Irr}(G)\}$ denotes the set of character degrees of a group $G$. This phenomenon, known as ‘Ennola Duality’, was later explained by Kawanaka [@Kawanaka]. In [@AKOV3 Theorem H], Avni, Onn, Klopsch & Voll have observed an analogous form of Ennola duality for the groups $\operatorname{GL}_3(\mathfrak{o}_l)$ and $\operatorname{GU}_3(\mathfrak{o}_l)$. In particular, they observed that, for all $g(t)\in\mathbb{Z}[t]$ and $l\in\mathbb{N}$, $$g(q)\in\operatorname{cd}(\operatorname{GL}_3(\mathfrak{o}_l))\mbox{ if and only if }(-1)^{\deg g}g(-q)\in\operatorname{cd}(\operatorname{GU}_3(\mathfrak{o}_l)).$$ For any prime $p$, we write $\operatorname{cd}(G)_{p'}=\{\chi(1)_{p'}\,:\,\chi\in\operatorname{Irr}(G)\}$ for the prime-to-$p$ parts of the irreducible character degrees of a group $G$. From Theorem \[main\] we deduce that an analogue of Ennola duality also holds for the groups $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$. More specifically, we observe that the prime-to-$p$ parts of the character degrees satisfy Ennola Duality: \[corp’\] There are $20$ character degrees ($p'$-part) of the groups $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$: $\operatorname{cd}(\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2))_{p'}= \operatorname{cd}(\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1))_{p'}\,\cup\,$ $\{ $ $(q+\epsilon)(q^2+1)$, $(q^3-\epsilon)(q^4-1)$, $(q+\epsilon)(q^3-\epsilon)(q^4-1)$,\ $(q-\epsilon)(q^3-\epsilon)(q^4-1)$, $(q^2-1)(q^3-\epsilon)(q^4-1)$, $(q+\epsilon)(q^2+\epsilon q + 1)(q^4-1)$,\ $(q^3+\epsilon q^2+q+\epsilon)^2\}$\ where\ $\operatorname{cd}(\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1))_{p'}= $ $\{ 1,$ $q^2+1$, $q^2+\epsilon q+1$, $(q+\epsilon)^2(q^2+1)$, $(q+\epsilon)^2(q^2+1)(q^2+\epsilon q+1)$, $(q^2+\epsilon q+1)(q^4-1)$,\ $(q^2-1)(q^4-1)$, $(q^2+1)(q^2+\epsilon q+1)$, $(q+\epsilon)(q^3+\epsilon)$, $(q^2+1)(q^3-\epsilon)$,\ $(q-\epsilon)(q^2+1)(q^3-\epsilon)$, $(q-\epsilon)(q^2-1)(q^3-\epsilon),$ $(q-\epsilon)(q^3-\epsilon)\}$.\ Moreover, for $l=1,2$, $$g(q)\in\operatorname{cd}(\operatorname{GL}_4(\mathfrak{o}_l))_{p'}\iff (-1)^{\deg(g)}g(-q)\in\operatorname{cd}(\operatorname{GU}_4(\mathfrak{o}_l))_{p'}.$$ In [@AKOV3 Theorem H] the authors note that for $n=3$ there is a case distinction between $l=1$ and $l\geq 2$ for the prime-to-$p$ parts of the set of character degrees, $\operatorname{cd}(\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_l))_{p'}$. Corollary \[corp’\] shows that there is also a case distinction for the prime-to-$p$ parts of the set of character degrees between $l=1$ and $l=2$ for $n=4$ but we are unable to say anything for $l\geq 3$.
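As a minimal illustration of the duality at the level of the residue field (a standard example recalled here for orientation, not needed in the sequel), for $n=2$ and $l=1$ one has $$\operatorname{cd}(\operatorname{GL}_2(\mathbb{F}_q))=\{1,\,q,\,q-1,\,q+1\},\qquad \operatorname{cd}(\operatorname{GU}_2(\mathbb{F}_q))=\{1,\,q,\,q+1,\,q-1\},$$ and the substitution $g(q)\mapsto(-1)^{\deg g}g(-q)$ indeed fixes $1$ and $q$ while exchanging $q-1$ and $q+1$.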
We make the following more general conjecture: For all $l\in\mathbb{N}$ $$g(q)\in\operatorname{cd}(\operatorname{GL}_n(\mathfrak{o}_l))_{p'}\iff (-1)^{\deg(g)}g(-q)\in\operatorname{cd}(\operatorname{GU}_n(\mathfrak{o}_l))_{p'}.$$ Symmetric matrices and the Frobenius-Schur indicator ---------------------------------------------------- In [@AKOV3 Remark 1.3] the authors note that the special value of the zeta function $\zeta_{\operatorname{GU}_3(\mathfrak{o}_l)}(s)$ at $s = -1$ (i.e. the sum of character degrees) is equal to the number of symmetric matrices in $\operatorname{GU}_3(\mathfrak{o}_l)$, that is $$\zeta_{\operatorname{GU}_3(\mathfrak{o}_l)}(-1) = (1+q^{-1})(1+q^{-3})q^{6l} = \mbox{number of symmetric matrices in $\operatorname{GU}_3(\mathfrak{o}_l)$}.$$ The corresponding assertion for $\operatorname{GL}_3(\mathfrak{o}_l)$ holds only for $l=1$. This phenomenon, that the sum of character degrees is equal to the number of symmetric matrices, was observed by Gow & Klyachko for the groups $\operatorname{GL}_n(\mathbb{F}_q)$ and by Thiem & Vinroot for the groups $\operatorname{GU}_n(\mathbb{F}_q)$ (see [@ThiemVinroot]). This phenomenon fails in the cases $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$. Let $\operatorname{sym}^{\epsilon}_n(\mathfrak{o}_l)$ denote the number of symmetric matrices in $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$. For $n=4$ we have $$\operatorname{sym}^{\epsilon}_4(\mathfrak{o}_l)=(1-\epsilon q^{-1})(1-\epsilon q^{-3})q^{20}.$$ We have the following corollary of Theorem \[main\]. \[cormain\] The representation zeta functions of the groups $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$ evaluated at $s=-1$ are given by $$\begin{aligned} \zeta_{\operatorname{GU}_4(\mathfrak{o}_2)}(-1) &=& q^2(q^2-q+1)(q^{14}+q^7-2q^6-q^5+2q^4-q^3+2q^2+q-2)(q+1)^2;\\ \zeta_{\operatorname{GL}_4(\mathfrak{o}_2)}(-1) &=& q(q^2+q+1)(q^{15}+2q^{10}-2q^8+2q^6-2q^4-4q^2+4)(q-1)^2\end{aligned}$$ where $q$ is the cardinality of $\mathfrak{o}_1$. Moreover, $$\begin{aligned} \zeta_{\operatorname{GU}_4(\mathfrak{o}_2)}(-1)-\operatorname{sym}^{-1}_4(\mathfrak{o}_2) &=& q^2(q-2)(q^2+1)(q^2-q+1)(q-1)^2(q+1)^4;\\ \zeta_{\operatorname{GL}_4(\mathfrak{o}_2)}(-1)-\operatorname{sym}^{1}_4(\mathfrak{o}_2) &=& 2q(q^2+q+1)(q^4+2)(q^2+1)(q+1)^2(q-1)^4;\\ \frac{\zeta_{\operatorname{GU}_4(\mathfrak{o}_2)}(-1)}{\operatorname{sym}^{-1}_4(\mathfrak{o}_2)} &= &\frac{q^{14}+q^7-2q^6-q^5+2q^4-q^3+2q^2+q-2}{q^{14}};\\ \frac{\zeta_{\operatorname{GL}_4(\mathfrak{o}_2)}(-1)}{\operatorname{sym}^{1}_4(\mathfrak{o}_2)}&=&\frac{q^{15}+2q^{10}-2q^8+2q^6-2q^4-4q^2+4}{q^{15}}.\end{aligned}$$ In particular, $$\lim_{q\rightarrow\infty}\frac{\zeta_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)}(-1)}{\operatorname{sym}^{\epsilon}_4(\mathfrak{o}_2)}=1.$$ The reason for the failure of the sum of character degrees to equal the number of symmetric matrices can be expressed in terms of word maps with automorphisms and generalized Frobenius-Schur indicators. It is known, see [@BumpGinzburg], that for a finite group $G$ with an automorphism $\tau$ of order $2$ we have, for each $g\in G$: $$\begin{aligned} \label{fs} \sum_{\chi\in\operatorname{Irr}(G)}\mathfrak{i}_{\tau}(\chi)\chi(g) = |\{h\in G:h^{\tau}h = g\}|,\end{aligned}$$ where $\mathfrak{i}_{\tau}(\chi) = \frac{1}{|G|}\sum_{g\in G}\chi(g^{\tau}g)$ (analogous to the classic Frobenius-Schur indicator when $\tau$ is trivial).
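To see why the weights $\mathfrak{i}_{\tau}(\chi)$ matter, it may help to recall the classical case $\tau=\operatorname{id}$ for a small group (an aside added here): for the quaternion group $Q_8$ the four linear characters have indicator $+1$ while the two-dimensional character has indicator $-1$, so $$\sum_{\chi\in\operatorname{Irr}(Q_8)}\mathfrak{i}_{\operatorname{id}}(\chi)\chi(1)=4\cdot1+(-1)\cdot2=2=|\{h\in Q_8:h^2=1\}|,$$ whereas the plain sum of the character degrees is $6$; a non-trivial indicator is exactly what makes a weighted sum of degrees differ from $\zeta_G(-1)$.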
We can think of the right hand side of (\[fs\]) as the number of solutions to the word map given by $h^{\tau}h = g$, where we solve for $h\in G$ for a given element $g\in G$. When $G = \operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$, $g=1$ and $\tau$ is the transpose-inverse automorphism, equation (\[fs\]) becomes $$\sum_{\chi\in\operatorname{Irr}(G)}\mathfrak{i}_{\tau}(\chi)\chi(1) = |\{h\in G:h\mbox{ is symmetric}\}|,$$ a weighted sum of character degrees. We conclude from Corollary \[cormain\] that, for $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$, some representations have non-trivial generalized Frobenius-Schur indicator. This contrasts with the field case, where all irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_n(\mathbb{F}_q)$ have generalized Frobenius-Schur indicator equal to $1$. It would be interesting to see to what extent the final statement of Corollary \[cormain\] holds for the groups $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$ with $n\geq 4$ and $l\geq 2$.

The representation zeta function of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$
==========================================================================================

Let $F$ be a non-Archimedean local field with ring of integers $\mathfrak{o}$ and let $\mathfrak{p}$ be the unique maximal ideal of $\mathfrak{o}$. Assume that the residue field $\bold{k}=\mathfrak{o}/\mathfrak{p}$ is finite of order $q$ and characteristic $p$. We also fix a uniformiser $\pi$ of $\mathfrak{o}$. A typical example of such a field $F$ is $\mathbb{Q}_p$ (the $p$-adic numbers) with ring of integers $\mathbb{Z}_p$ (the $p$-adic integers), unique maximal ideal $p\mathbb{Z}_p$ and residue field $\mathbb{F}_p$. Let $\mathfrak{O}$ be an unramified quadratic extension of $\mathfrak{o}$, with valuation ideal $\mathfrak{P}$ and residue field $\bold{k}_2$, a quadratic extension of $\bold{k}$. Then $\mathfrak{O}=\mathfrak{o}[\delta]$, where $\delta=\sqrt{\rho}$ for an element $\rho\in\mathfrak{o}$ whose reduction modulo $\mathfrak{p}$ is a non-square in $\bold{k}$, and $\mathfrak{P}=\pi\mathfrak{O}$. Let $\mathfrak{I}$ denote the integral closure of $\mathfrak{O}$ in some fixed algebraic closure of its fraction field, and choose an $\mathfrak{o}$-automorphism $\circ$ of $\mathfrak{I}$ restricting to the non-trivial Galois automorphism of the quadratic extension $\mathfrak{O} | \mathfrak{o}$. Let $n\in\mathbb{N}$. For a matrix $A = (a_{ij})\in\operatorname{M}_n(\mathfrak{O})$ write $A^{\circ} = (a_{ij}^{\circ})^{\operatorname{tr}}$ for the conjugate transpose. A matrix $A$ is *hermitian* if $A^\circ=A$ and *anti-hermitian* if $A^\circ = -A$. The *standard unitary group* over $\mathfrak{o}$ is the group $$\operatorname{GU}_n(\mathfrak{o}) = \{A\in\operatorname{GL}_n(\mathfrak{O}):A^\circ A = \operatorname{I}_n\}.$$ We also define the corresponding *standard unitary $\mathfrak{o}$-Lie lattice* to be $$\mathfrak{gu}_n(\mathfrak{o}) = \{A\in\mathfrak{gl}_n(\mathfrak{O}):A^\circ+A=0\}.$$ For $l\in\mathbb{N}$ we denote by $\mathfrak{o}_l$ the reduction of $\mathfrak{o}$ modulo $\mathfrak{p}^l$, i.e. $\mathfrak{o}_l=\mathfrak{o}/\mathfrak{p}^l$, and analogously $\mathfrak{O}_l=\mathfrak{O}/\mathfrak{P}^l$. A matrix $A\in\operatorname{GU}_n(\mathfrak{o}_l)$ is called *hermitian*, respectively *anti-hermitian*, if it is the image of a hermitian, respectively anti-hermitian, matrix modulo $\mathfrak{P}^l$.
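To make the definition of the standard unitary group concrete over the residue field, the following brute-force sketch models $\mathbb{F}_9=\mathbb{F}_3[\delta]$ with $\delta^2=\rho=2$ (a non-square modulo $3$; the choice $p=3$ is only an illustration) and counts the matrices $A\in\operatorname{M}_2(\mathbb{F}_9)$ with $A^{\circ}A=\operatorname{I}_2$. The count agrees with the standard order $|\operatorname{GU}_2(\mathbb{F}_q)|=(q^2-1)(q^2+q)$.

```python
from itertools import product

p, rho = 3, 2   # F_9 = F_3[delta] with delta^2 = rho = 2, a non-square mod 3

def mul(u, v):
    # multiply u = a + b*delta and v = c + d*delta in F_9
    (a, b), (c, d) = u, v
    return ((a*c + rho*b*d) % p, (a*d + b*c) % p)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def conj(u):
    # non-trivial Galois automorphism a + b*delta -> a - b*delta (the Frobenius x -> x^p)
    return (u[0], (-u[1]) % p)

F9 = [(a, b) for a in range(p) for b in range(p)]
one, zero = (1, 0), (0, 0)

count = 0
for a11, a12, a21, a22 in product(F9, repeat=4):
    m = [[a11, a12], [a21, a22]]
    c = [[conj(a11), conj(a21)], [conj(a12), conj(a22)]]   # conjugate transpose A^o
    prod_ = [[add(mul(c[i][0], m[0][j]), mul(c[i][1], m[1][j])) for j in range(2)]
             for i in range(2)]
    if prod_ == [[one, zero], [zero, one]]:                # A^o A = I_2
        count += 1

q = p
assert count == (q**2 - 1)*(q**2 + q)   # = 96 for q = 3
print("|GU_2(F_3)| =", count)
```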
Recall that we will simply write $\operatorname{\bold{G}}^{\epsilon}_n$ where $\epsilon\in\{\pm 1\}$ to denote $\operatorname{\bold{G}}^{1}_n = \operatorname{GL}_n$ and $\operatorname{\bold{G}}^{-1}_n=\operatorname{GU}_n$. Our overall aim, reached in Theorem \[main\], is to compute a uniform formula for the representation zeta function of the groups of the form $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$, $\epsilon\in\{\pm 1\}$. First we consider general $n\in\mathbb{N}$, specialising to $n=4$ later on. Now we describe a bijection between the irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)$ and the union of the irreducible representations of centralisers of certain matrices over the field $\mathfrak{o}_1$. Details can be found in [@Pooja2]. Let $\operatorname{\bold{K}}^{\epsilon}_n$ denote the kernel of the map $\kappa:\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)\rightarrow\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)$. Note that $\operatorname{\bold{K}}^{\epsilon}_n$ is a finite abelian group, and let $\widehat{\operatorname{\bold{K}}^{\epsilon}_n}$ denote the set of characters of $\operatorname{\bold{K}}^{\epsilon}_n$. The group $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)$ acts on $\widehat{\operatorname{\bold{K}}^{\epsilon}_n}$ by conjugation: if $g\in\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)$ and $\phi\in\widehat{\operatorname{\bold{K}}^{\epsilon}_n}$ then $\phi^g(x) = \phi(x^g)$ for $x\in\operatorname{\bold{K}}^{\epsilon}_n$. For any $\phi\in\widehat{\operatorname{\bold{K}}^{\epsilon}_n}$, write $\operatorname{T}^{\epsilon}_n(\phi)=\{g\in\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2):\phi^g=\phi\}$. Let $\mathfrak{C}^{\epsilon}_n$ denote the set of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)$-orbits in $\widehat{\operatorname{\bold{K}}^{\epsilon}_n}$. It is shown in [@Pooja2] that for a character $\phi\in\widehat{\operatorname{\bold{K}}^{\epsilon}_n}$ there exists a canonical extension $\chi_{\phi}$ to $\operatorname{T}^{\epsilon}_n(\phi)$ so that $\chi_{\phi}|_{\operatorname{\bold{K}}^{\epsilon}_n}=\phi$. By Clifford Theory and [@Pooja2] there exists a bijection between the sets $$\amalg_{\phi\in\mathfrak{C}^{\epsilon}_n}\{\operatorname{Irr}(\operatorname{T}^{\epsilon}_n(\phi)/\operatorname{\bold{K}}^{\epsilon}_n)\} \longleftrightarrow\operatorname{Irr}(\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2))$$ given by $$\delta\mapsto\operatorname{Ind}_{\operatorname{T}^{\epsilon}_n(\phi)}^{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)}(\chi_{\phi}\otimes\delta).$$ Fix a non-trivial additive character $\psi:\mathfrak{o}_1\rightarrow\mathbb{C}^*$. Define $\operatorname{\mathfrak{g}}^1_n(\mathfrak{o}_1):=\mathfrak{gl}_n(\mathfrak{o}_1)$. For each matrix $A\in\operatorname{\mathfrak{g}}^1_n(\mathfrak{o}_1)$ define the character $\psi_A:\operatorname{\bold{K}}_n^{1}\rightarrow\mathbb{C}^*$ by $$\psi_A(\operatorname{I}_n + \pi X) = \psi(\operatorname{Tr}(AX)).$$ The assignment $A\mapsto\psi_A$ defines an isomorphism $\operatorname{\mathfrak{g}}^1_n(\mathfrak{o}_1)\cong\widehat{\operatorname{\bold{K}}_n^{1}}$. Define $\operatorname{\mathfrak{g}}^{-1}_n(\mathfrak{o}_1)$ to be the subgroup of $\operatorname{\mathfrak{g}}^1_n(\mathfrak{O}_1)$ such that $X\mapsto I_n + \pi X$ defines an isomorphism $\operatorname{\mathfrak{g}}^{-1}_n(\mathfrak{o}_1)\cong\widehat{\operatorname{\bold{K}}_n^{-1}}$.
Thus $\operatorname{\mathfrak{g}}^{-1}_n(\mathfrak{o}_1) = \{X\in\operatorname{\mathfrak{g}}^1_n(\mathfrak{O}_1):X+X^\circ = 0\}=\mathfrak{gu}_n(\mathfrak{o}_1)$. Let $\mathfrak{S}^{\epsilon}_n$ denote the set of orbits of $\operatorname{\mathfrak{g}}^{\epsilon}_n(\mathfrak{o}_1)$ under the action of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)$. Since $\operatorname{T}^{\epsilon}_n(\phi)/\operatorname{\bold{K}}^{\epsilon}_n\cong Z_{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)}(A)$ when $\phi$ corresponds to $A$ under the isomorphisms above, we have the following bijection $$\begin{aligned} \label{eqnsim} \amalg_{A\in\mathfrak{S}^{\epsilon}_n}\{\operatorname{Irr}(Z_{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)}(A))\} \longleftrightarrow\operatorname{Irr}(\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)).\end{aligned}$$ It follows that to find the complex irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)$ one can simply write down representatives for the similarity classes $\mathfrak{S}^{\epsilon}_n$ and induce representations of their centralisers. The case $\epsilon = 1$, $n=4$ has been described by Singla [@Pooja]. In Theorem \[main\] we give formulae for the number of irreducible representations of each degree of the group $\operatorname{GU}_4(\mathfrak{o}_2)$. This is achieved by writing down representatives $A$ of similarity classes in $\operatorname{\mathfrak{g}}^{-1}_4(\mathfrak{o}_1)$ and studying the representations of their centralisers $Z_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)}(A)$. Let $f(t) = t^d-a_{d-1}t^{d-1}-\cdots-a_0$ be a monic polynomial of degree $d$ over the field $\mathbb{F}_q$. Define matrices $$U(f) = U_1(f) = \begin{pmatrix} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1\\ a_0 & a_1 & a_2 & \cdots & a_{d-1}\end{pmatrix}$$ and $$U_m(f) = \begin{pmatrix} U(f) & \operatorname{I}_d & & \\ & U(f) & \ddots & \\ & & \ddots & \operatorname{I}_d\\ & & & U(f)\end{pmatrix}$$ with $m$ diagonal blocks $U(f)$ and where $\operatorname{I}_d$ is the $d\times d$ identity matrix. For a partition $\lambda = \{l_1,l_2,\ldots,l_p\}$ of a positive integer $k$ with $l_1\geq l_2\geq\ldots\geq l_p>0$ write $$U_{\lambda}(f) = \begin{pmatrix} U_{l_1}(f) & & & \\ & U_{l_2}(f) & & \\ & & \ddots & \\ & & & U_{l_p}(f)\end{pmatrix}.$$ In [@Green] Green shows that there is a one-to-one correspondence between similarity classes in $\operatorname{\mathfrak{g}}^{1}_n(\mathfrak{o}_1)$ and collections of irreducible polynomials with associated partitions satisfying certain conditions. More specifically, suppose that the characteristic polynomial of a similarity class $C$ is $f_1^{k_1}\cdots f_N^{k_N}$, where the $f_i$ are distinct irreducible polynomials over $\mathfrak{o}_1$, $k_i\geq 1$, and if the respective degrees of the $f_i$ are $d_i$ then $\sum_{i=1}^{N}k_id_i=n$. Then $C$ is similar to the diagonal block matrix $$\operatorname{diag}\{U_{\nu_C(f_1)}(f_1), U_{\nu_C(f_2)}(f_2),\ldots,U_{\nu_C(f_N)}(f_N)\}$$ where each $\nu_C(f_i)$ is a certain partition of $k_i$ depending on $C$. We may therefore represent the similarity class $C$ by the symbol $$\{f_1^{\nu_C(f_1)},\ldots,f_N^{\nu_C(f_N)}\}.$$ Now let $C= \{\ldots,f_i^{\nu_C(f_i)},\ldots\}$ for some irreducible polynomials $f_i$. For a natural number $d\geq 1$ and a partition $\nu$ other than $0$ write $r_C(d,\nu)$ for the number of irreducible polynomials of degree $d$ appearing in the characteristic polynomial of $C$ with partition $\nu_C(f)=\nu$.
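The matrices $U(f)$, $U_m(f)$ and $U_{\lambda}(f)$ are easy to build explicitly. The following small numpy sketch (illustrative only, using the coefficient convention $f(t)=t^d-a_{d-1}t^{d-1}-\cdots-a_0$ from above and working over the integers, with reduction modulo $p$ left aside) constructs them for an arbitrary coefficient list and partition.

```python
import numpy as np

def companion(f):
    """U(f) for f(t) = t^d - a_{d-1} t^{d-1} - ... - a_0, given as [a_0, ..., a_{d-1}]."""
    d = len(f)
    U = np.zeros((d, d), dtype=int)
    U[:-1, 1:] = np.eye(d - 1, dtype=int)   # 1's on the superdiagonal
    U[-1, :] = f                            # last row carries a_0, ..., a_{d-1}
    return U

def U_m(f, m):
    """U_m(f): m diagonal blocks U(f), with identity blocks I_d just above the diagonal."""
    d = len(f)
    B = np.kron(np.eye(m, dtype=int), companion(f))
    for i in range(m - 1):
        B[i*d:(i + 1)*d, (i + 1)*d:(i + 2)*d] = np.eye(d, dtype=int)
    return B

def U_lambda(f, partition):
    """Block-diagonal U_lambda(f) for a partition (l_1 >= l_2 >= ... >= l_p > 0)."""
    blocks = [U_m(f, m) for m in partition]
    n = sum(b.shape[0] for b in blocks)
    A = np.zeros((n, n), dtype=int)
    pos = 0
    for b in blocks:
        k = b.shape[0]
        A[pos:pos + k, pos:pos + k] = b
        pos += k
    return A

# Example: f(t) = t^2 - t - 1 (so a_0 = a_1 = 1) with the partition (2, 1)
print(U_lambda([1, 1], (2, 1)))
```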
Let $\rho_C(\nu)$ be the partition $$(n^{r_C(n,\nu)},(n-1)^{r_C(n-1,\nu)},\ldots).$$ We say that two similarity classes, $A$ and $B$, are of the same *type* if and only if $\rho_A(\nu)=\rho_B(\nu)$ for every non-zero partition $\nu$. We will also say that two matrices are of the same type if their respective similarity classes are of the same type. Now let $\rho_\nu$ be a partition-valued function on the non-zero partitions $\nu$ (we allow $\rho_\nu$ to take the value zero). The function $\rho_\nu$ describes a type in $\operatorname{\mathfrak{g}}^{1}_n(\mathfrak{o}_1)$ if and only if $$\begin{aligned} \label{eqntype} \sum_{\nu}|\rho_\nu||\nu| = n.\end{aligned}$$ We analogously define the *type* of a similarity class in $\operatorname{\mathfrak{g}}^{-1}_n(\mathfrak{o}_1)$. The following lemmas, from [@AKOV3], allow us to describe the types of matrices that occur in $\operatorname{\mathfrak{g}}^{\epsilon}_n(\mathfrak{o}_1)$:

Let $A,B\in\operatorname{\mathfrak{g}}^{-1}_n(\mathfrak{o})$ be similar, i.e. $\operatorname{GL}_n(\mathfrak{O})$-conjugate. Then $A,B$ are already $\operatorname{GU}_n(\mathfrak{o})$-conjugate.

Let $A\in\operatorname{\mathfrak{g}}^{1}_n(\mathfrak{O}_l)$ with characteristic polynomial $f_A=t^n+\sum_{i=0}^{n-1}c_it^i\in\mathfrak{O}_l[t]$. If $A$ is $\operatorname{GL}_n(\mathfrak{O})$-conjugate to an anti-hermitian matrix, then $c_i^{\circ}=(-1)^{n-i}c_i$ for $0\leq i<n$.

The following lemma highlights the importance of type in the context of computing representation zeta functions:

If matrices $A$ and $B$ in $\operatorname{\mathfrak{g}}^\epsilon_n(\mathfrak{o}_1)$ are of the same type, then their centralisers are isomorphic.

Since $A$ and $B$ are of the same type, there exist irreducible polynomials $f_1,\ldots,f_N$ and $g_1,\ldots,g_N$ such that $\deg(f_i)=\deg(g_i)=d_i$ and positive integers $k_i$ such that the characteristic polynomial of $A$ is $f_1^{k_1}\cdots f_N^{k_N}$ and the characteristic polynomial of $B$ is $g_1^{k_1}\cdots g_N^{k_N}$. Moreover, there exist partitions $\nu_i$ of the $k_i$ such that $A$ is similar to $$\operatorname{diag}\{U_{\nu_1}(f_1),\ldots,U_{\nu_N}(f_N)\}$$ and $B$ is similar to $$\operatorname{diag}\{U_{\nu_1}(g_1),\ldots,U_{\nu_N}(g_N)\}.$$ From this it is clear that $A$ and $B$ have isomorphic centralisers.

For any group $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)$, using equation (\[eqntype\]) we may write down a complete and irredundant list of representatives of types that are characterised by sets of irreducible polynomials and their associated partitions. This list does not depend on the underlying field $\mathfrak{o}_1$; however, each of these representatives may be parameterised by coefficients that do depend on the underlying field. Let $\mathbb{T}^{\epsilon}_n$ denote the set of representatives of types in $\operatorname{\mathfrak{g}}^{\epsilon}_n(\mathfrak{o}_1)$ and for each $A\in\mathbb{T}^{\epsilon}_n$, let $n_A$ be the total number of similarity classes of type $A$. We will also write $Z_{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)}(A)$ for the centraliser of a matrix of type $A$.
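Condition (\[eqntype\]) also makes it easy to count types. Writing $c(w)$ for the number of pairs $(d,\nu)$ with $d\,|\nu|=w$, the types in $\operatorname{\mathfrak{g}}^{1}_n(\mathfrak{o}_1)$ correspond to multisets of such pairs of total weight $n$, so their number is the coefficient of $x^n$ in $\prod_{w\geq 1}(1-x^w)^{-c(w)}$. The sketch below (a consistency check, not taken from [@Green] or [@AKOV3]) recovers the counts $4$, $8$ and $22$, matching the number of rows of Tables \[table22\], \[table32\] and \[table42\].

```python
import sympy as sp

x = sp.symbols('x')
N = 4

def c(w):
    # pairs (d, nu): d a divisor of w, nu a partition of w // d
    return sum(sp.npartitions(w // d) for d in range(1, w + 1) if w % d == 0)

# generating function prod_w (1 - x^w)^(-c(w)); its x^n coefficient counts the
# partition-valued functions rho_nu with sum_nu |rho_nu| |nu| = n
gf = sp.prod([(1 - x**w)**(-c(w)) for w in range(1, N + 1)])
series = sp.series(gf, x, 0, N + 1).removeO()
counts = [series.coeff(x, n) for n in range(N + 1)]

print(counts)                      # [1, 1, 4, 8, 22]
assert counts[2:] == [4, 8, 22]    # rows of Tables [table22], [table32], [table42]
```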
Then $$\begin{aligned} \label{eqnzeta} \zeta_{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_2)}(s)=\sum_{A\in\mathbb{T}^{\epsilon}_n}n_{A}\zeta_{Z_{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)}(A)}(s)|\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1):Z_{\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_1)}|^{-s}.\end{aligned}$$ Representations of $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_l)$ ------------------------------------------------------------------------- Before proceeding with the proof of Theorem \[main\] we summarise what is already known about the number of complex irreducible representations of each degree of the groups $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_l)$ for $\epsilon\in\{\pm 1\}$ and $l = 1, 2$. For $(\epsilon, n, l) = (1,2,1)$ see Steinberg [@Steinberg2]. For $(\epsilon, n, l) = (-1,2,1)$ see Ennola [@Ennola2]. The number of irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ is given in Table \[table21\]. Details on the irreducible representations for $(\epsilon, n, l) = (1,2,2)$ are described by Nagornyi [@Nag] and Onn [@Onn]. For $(\epsilon, n, l) = (-1,2,2)$ see [@AKOV3]. Table \[table22\] is a complete and irredundant list of representatives of similarity class types of $\operatorname{\mathfrak{g}}^{\epsilon}_2(\mathfrak{o}_1)$ under the action by $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_2)$. Using equation (\[eqnzeta\]) one can obtain the representation zeta function of the groups $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_2)$. \[table21\] Number of irreducible representations Degree ----------------------------------------- -------------- $q-\epsilon$ $1$ $q-\epsilon$ $q$ $\frac{1}{2}(q-\epsilon-1)(q-\epsilon)$ $q+\epsilon$ $\frac{1}{2}(q+\epsilon-1)(q-\epsilon)$ $q-\epsilon$ : Representations of $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ \[table22\] -------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------- ---------------------- ------------------------------------------------------------------------------------------------------------------ ----------------------------------------------------------- Type $A\in\mathbb{T}^{\epsilon}_2$ Parameter Number of similarity Isomorphism type Index of $Z$ classes, $n_A$ $Z$ of $Z_{\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)}(A)$ in $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ $\parbox{4.5cm}{$\{(t-\alpha)^{(1,1)}\}$}$ $\parbox{4.2cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ $1$ $\parbox{4.5cm}{$\{(t-\alpha)^{(2)}\}$}$ $\parbox{4.2cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_2)$ $q^2-1$ $\parbox{4.5cm}{$\{(t-\alpha_1)^{(1)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{4.2cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0$\\$\alpha_1\neq\alpha_2$}$ $\frac{1}{2}q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$ $q(q+\epsilon)$ $\parbox{4.5cm}{$\epsilon = 1: \{f^{(1)}\}$\\$\epsilon=-1:\{(t-\alpha_1)^{(1)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{4.2cm}{$\epsilon=1:f$ irreducible quadratic\\$\epsilon=-1:\alpha_1=-\alpha_2^\circ$ distinct}$ 
$\frac{1}{2}q(q-1)$ $\mathbb{F}_{q^2}^*$ $q(q-\epsilon)$ -------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------- ---------------------- ------------------------------------------------------------------------------------------------------------------ ----------------------------------------------------------- : Representatives of similarity classes in $\operatorname{\mathfrak{g}}^{\epsilon}_2(\mathfrak{o}_1)$ under $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_2)$ Representations of $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_l)$ ------------------------------------------------------------------------- We summarise what is already known about the number of complex irreducible representations of each degree of the groups $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_l)$ for $\epsilon\in\{\pm 1\}$ and $l = 1, 2$. For $(\epsilon, n, l) = (1,3,1)$ see Steinberg [@Steinberg2]. For $(\epsilon, n, l) = (-1,3,1)$ see Ennola [@Ennola2]. The number of irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_1)$ is given in Table \[table31\]. \[table31\] Number of irreducible representations Degree ------------------------------------------------------- ----------------------------------- $q-\epsilon$ $1$ $q-\epsilon$ $q(q+\epsilon)$ $q-\epsilon$ $q^3$ $(q-\epsilon-1)(q-\epsilon)$ $q^2+\epsilon q+1$ $(q-\epsilon-1)(q-\epsilon)$ $q(q^2+\epsilon q+1)$ $\frac{1}{6}(q-\epsilon-2)(q-\epsilon-1)(q-\epsilon)$ $(q+\epsilon)(q^2+\epsilon q+ 1)$ $\frac{1}{2}(q+\epsilon-1)(q-\epsilon)^2$ $q^3-\epsilon$ $\frac{1}{3}q(q^2-1)$ $(q^2-1)(q-\epsilon)$ : Representations of $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_1)$ For $l\geq 2$ we define $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$ to be the group $H^{\epsilon}\rtimes D_l^{\epsilon}$, where $$\begin{aligned} H^{1}&:=&\left\{\begin{pmatrix}1 & \alpha & \gamma \\ 0 & 1 & \beta \\ 0 & 0 & 1\end{pmatrix}:\alpha,\beta,\gamma\in\mathbb{F}_{q}\right\};\\ H^{-1}&:=&\left\{\begin{pmatrix}1 & \alpha & \gamma \\ 0 & 1 & \bar{\alpha} \\ 0 & 0 & 1\end{pmatrix}:\alpha,\gamma\in\mathbb{F}_{q^2}, \alpha\bar{\alpha} = \gamma+\bar{\gamma}\right\};\\ D_l^{\epsilon}&:=&\left\{\begin{pmatrix}a & 0 & 0\\0 & b & 0\\0 & 0 & a\end{pmatrix}: a\in \operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_{l-1}), b\in \operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_{1})\right\}.\end{aligned}$$ Before describing the irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_2)$ we first describe the irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$. The complex irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$ for $l\geq 2$ are given in the Table 4. Hence, its zeta function is given by $$\zeta_{\operatorname{\bold{G}}^\epsilon_{(l,1)}}(s)=q^{l-2}(q-\epsilon)((q-\epsilon)+(q+\epsilon)(q-\epsilon)^{-s}+(q-1)(q-\epsilon)q^{-s}).$$ This follows by adapting the proof of [@AKOV3 Proposition 6.9] or [@Onn Theorem 4.1]. We will provide a sketch proof for completion. The group $H^{\epsilon}$ has $q-1$ irreducible representations of degree $q$ that correspond to the non-trivial characters of the centre, and $q^2$ linear characters factoring through its abelianisation by its centre $Z^{\epsilon} = Z(H^{\epsilon})$. 
Write $Q^{\epsilon} := H^{\epsilon}/Z^{\epsilon}\cong\mathbb{F}_q\times\mathbb{F}_q$. For each of the $q-1$ non-trivial characters of the centre, $\chi$, there is a unique irreducible representation $\rho_{\chi, H^\epsilon}$ of $H^{\epsilon}$ of dimension $q$. The remaining representations of $H^\epsilon$ correspond to the trivial character of the centre and hence factor through the quotient $Q^{\epsilon}$. Each of the representations $\rho_{\chi, H^\epsilon}$ is stabilised by $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$. Let $T^\epsilon=Z^{\epsilon}D_l^{\epsilon}\cong\operatorname{\bold{G}}^\epsilon_1(\mathfrak{o}_l)\times\operatorname{\bold{G}}^\epsilon_1(\mathfrak{o}_1)$ and let $H^\epsilon_1$ be a maximal abelian subgroup of $H^{\epsilon}$. Then $\chi$ can be extended from $Z^{\epsilon}=T^{\epsilon}\cap H^\epsilon_1$ to $T^\epsilon H^\epsilon_{1}$. Inducing the extension from $T^\epsilon H^\epsilon_1$ to $\operatorname{\bold{G}}^\epsilon_{(l,1)}$ gives a $q$-dimensional representation which must extend $\rho_{\chi, H^\epsilon}$. This yields $|\operatorname{\bold{G}}^{\epsilon}_{(l,1)}/H^{\epsilon}| = |T^\epsilon/Z^{\epsilon}|=q^{l-2}(q-\epsilon)^2$ different extensions of $\rho_{\chi,H^\epsilon}$, and so $q^{l-2}(q-\epsilon)^2(q-1)$ representations of $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$. The remaining irreducible characters of $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$ factor through its quotient by $Z^\epsilon$. The case $\epsilon = 1$ is done in [@Onn]. If $\epsilon = -1$, identify $Q^{-1}$ and its dual $Q^{-1\vee}$ with the additive group $\mathbb{F}_{q^2}$. The action of $\mbox{diag}(a,b,a)\in D_l^{-1}$ on $Q^{-1\vee}$ is given by $\mathbb{F}_{q^2}\ni u\mapsto(a^{-1}bu)$. The orbits of $D_l^{-1}$ on $Q^{-1\vee}$ are:

\[tab:table1\]

  Orbit   Parameter                                                    Stabiliser in $D_l^{-1}$
  ------- ------------------------------------------------------------ ------------------------------------------------------------------------------------
  $[0]$   $-$                                                          $\operatorname{GU}_1(\mathfrak{o}_{l-1})\times\operatorname{GU}_1(\mathfrak{o}_1)$
  $[s]$   $s\in\mathbb{F}_{q^2}^*/\operatorname{GU}_1(\mathbb{F}_q)$   $\operatorname{GU}_1(\mathfrak{o}_{l-1})$

By Mackey’s method for semi-direct products (see [@JPS], Section 8.2) this yields $|\operatorname{GU}_1(\mathfrak{o}_{l-1})\times\operatorname{GU}_1(\mathfrak{o}_1)| = q^{l-2}(q+1)^2$ linear characters and $|\mathbb{F}_{q^2}^*/\operatorname{GU}_1(\mathfrak{o}_{1})|\,|\operatorname{GU}_1(\mathfrak{o}_{l-1})| = q^{l-2}(q^2-1)$ irreducible characters of degree $|\operatorname{GU}_1(\mathfrak{o}_1)| = (q+1)$ of $\operatorname{\bold{G}}_{(l,1)}^{-1}$. Putting this all together yields the required result.

\[tab:table1\]

  Number of irreducible representations   Degree
  --------------------------------------- --------------
  $q^{l-2}(q-\epsilon)^2$                 $1$
  $q^{l-2}(q^2-1)$                        $q-\epsilon$
  $q^{l-2}(q-\epsilon)^2(q-1)$            $q$

  : Representations of $\operatorname{\bold{G}}^{\epsilon}_{(l,1)}$

The irreducible representations of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$ for $(\epsilon, n, l) = (1,3,2), (-1,3,2)$ can be found in [@AKOV3]. Table \[table32\] is a complete and irredundant list of representatives of similarity class types of $\operatorname{\mathfrak{g}}^{\epsilon}_3(\mathfrak{o}_1)$ under the action of $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_2)$. Using equation (\[eqnzeta\]) one can obtain the representation zeta function of the groups $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_2)$.
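For $n=2$ the computation indicated by equation (\[eqnzeta\]) is small enough to carry out in full. The following sympy sketch (illustrative only) assembles $\zeta_{\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_2)}(s)$ from the four rows of Table \[table22\], using $\zeta_{\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)}(s)$ from Table \[table21\] and the fact that the remaining centralisers are abelian, and then checks the standard identity $\zeta_G(-2)=|G|$ for $\epsilon=\pm 1$.

```python
import sympy as sp

q, s, e = sp.symbols('q s e')        # e plays the role of epsilon in {+1, -1}

# zeta function of G^e_2(o_1), read off from Table [table21]
zeta_G2_o1 = ((q - e)
              + (q - e)*q**(-s)
              + sp.Rational(1, 2)*(q - e - 1)*(q - e)*(q + e)**(-s)
              + sp.Rational(1, 2)*(q + e - 1)*(q - e)*(q - e)**(-s))

# equation (eqnzeta) with the rows of Table [table22]:
# (n_A, zeta of the centraliser, index of the centraliser)
rows = [
    (q,                            zeta_G2_o1,   1),          # {(t-a)^(1,1)}
    (q,                            q*(q - e),    q**2 - 1),   # {(t-a)^(2)}, abelian G^e_1(o_2)
    (sp.Rational(1, 2)*q*(q - 1),  (q - e)**2,   q*(q + e)),  # two distinct eigenvalues
    (sp.Rational(1, 2)*q*(q - 1),  q**2 - 1,     q*(q - e)),  # irreducible quadratic / F_{q^2}^*
]
zeta_G2_o2 = sum(n_A*z*idx**(-s) for n_A, z, idx in rows)

# sanity check: zeta(-2) equals |G^e_2(o_2)| = q^4 (q^2 - 1)(q^2 - e q)
for eps in (1, -1):
    order = q**4*(q**2 - 1)*(q**2 - eps*q)
    assert sp.simplify(zeta_G2_o2.subs({s: -2, e: eps}) - order) == 0
print("zeta(-2) = |G| holds for GL_2(o_2) and GU_2(o_2)")
```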
\[table32\] ------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------- ------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------- Type $A\in\mathbb{T}^{\epsilon}_3$ Parameter Number of similarity Isomorphism type $Z$ of Index of $Z$ in classes, $n_A$ $Z_{\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_1)}(A)$ $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_1)$ $\parbox{6cm}{$\{(t-\alpha)^{(1,1,1)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_1) $ $1$ $\parbox{6cm}{$\{(t-\alpha)^{(2,1)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_{(2,1)}$ $(q^3-\epsilon)(q+\epsilon)$ $\parbox{6cm}{$\{(t-\alpha)^{(3)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_3)$ $q(q^2-1)(q^3-\epsilon)$ $\parbox{6cm}{$\{(t-\alpha_1)^{(1,1)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\,\alpha_1\neq\alpha_2$}$ $q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times \operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ $q^2(q+\epsilon)(q^3-\epsilon)$ $\parbox{6cm}{$\{(t-\alpha_1)^{(1,1)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\,\alpha_1\neq\alpha_2$}$ $q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_2)$ $q^2(q+\epsilon)(q^3-\epsilon)$ $\parbox{6cm}{$\{(t-\alpha_1)^{(1)},(t-\alpha_2)^{(1)},(t-\alpha_3)^{(1)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha_i\in\mathbb{F}_q$ distinct\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0$ distinct}$ $\frac{1}{6}q(q-1)(q-2)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1) $q^3(q+\epsilon)(q^2+\epsilon q+ 1)$ \times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$ $\parbox{6cm}{$\epsilon = 1: \{(t-\alpha)^{(1)},f^{(1)}\}$\\$\epsilon=-1:\{(t-\alpha_1)^{(1)},(t-\alpha_2)^{(1)},(t-\alpha_3)^{(1)}\}$}$ $\parbox{5.7cm}{$\epsilon=1:\alpha\in\mathbb{F}_q,\, f$ irreducible quadratic\\$\epsilon=-1:\alpha_1+\alpha_1^{\circ} = 0,\, \alpha_2=-\alpha_3^\circ$ distinct}$ $\frac{1}{2}q^2(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times\mathbb{F}_{q^2}^*$ $q^3(q^3-\epsilon)$ $\parbox{6cm}{$\{f^{(1)}\}$}$ $\parbox{5.7cm}{$f$ irreducible cubic *}$ $\frac{1}{3}q(q^2-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathbb{F}_{q^3})$ $q^3(q^2-1)(q-\epsilon)$ ------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------- ------------------------------------------------------------------------------------------------------------------- 
-------------------------------------------------------- : Representatives of similarity classes in $\operatorname{\mathfrak{g}}^{\epsilon}_3(\mathfrak{o}_1)$ under $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_2)$ If $\epsilon=-1$ we require that $f=t^3+\sum_{i=0}^{2}c_it^i\in\mathbb{F}_{q^2}[t]$ where $c_i^{\circ}=(-1)^{i+1}c_i$ for $0\leq i<3$. Representations of $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_l)$ ------------------------------------------------------------------------- The irreducible representations of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$ for $(\epsilon, n, l) = (1,4,1)$ can be found in Steinberg [@Steinberg2] and are displayed in Table \[table41\]. The case $(\epsilon, n, l) = (-1,4,1)$ is described by Nozawa [@Nozawa]. \[table41\] Number of irreducible representations Degree ---------------------------------------------------------------------- -------------------------------------------- $q-\epsilon$ $1$ $q-\epsilon$ $q(q^2+\epsilon q+1)$ $q-\epsilon$ $q^2(q^2+1)$ $q-\epsilon$ $q^3(q^2+\epsilon q+1)$ $q-\epsilon$ $q^6$ $(q-\epsilon-1)(q-\epsilon)$ $(q+\epsilon)(q^3+\epsilon)$ $(q-\epsilon-1)(q-\epsilon)$ $q(q^2+1)(q+\epsilon)^2$ $(q-\epsilon-1)(q-\epsilon)$ $q^3(q^3+\epsilon)(q+\epsilon)$ $\frac{1}{2}(q-\epsilon-1)(q-\epsilon)$ $(q^2+1)(q^2+\epsilon q+1)$ $(q-\epsilon-1)(q-\epsilon)$ $q(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{2}(q-\epsilon-1)(q-\epsilon)$ $q^2(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{2}(q-\epsilon-2)(q-\epsilon-1)(q-\epsilon)$ $(q+\epsilon)(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{2}(q-\epsilon-2)(q-\epsilon-1)(q-\epsilon)$ $q(q+\epsilon)(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{24}(q-\epsilon-3)(q-\epsilon-2)(q-\epsilon-1)(q-\epsilon)$ $(q^2+1)(q+\epsilon)^2(q^2+\epsilon q+ 1)$ $\frac{1}{2}(q+\epsilon-1)(q-\epsilon)^2$ $(q-\epsilon)(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{2}(q+\epsilon-1)(q-\epsilon)^2$ $q(q-\epsilon)(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{4}q(q-2)(q-\epsilon)^2$ $(q^4-1)(q^2+\epsilon q+1)$ $\frac{1}{2}(q+\epsilon-1)(q-\epsilon)$ $q^2(q-\epsilon)^2(q^2+\epsilon q + 1)$ $\frac{1}{2}(q+\epsilon-1)(q-\epsilon)$ $(q-\epsilon)^2(q^2+\epsilon q + 1)$ $\frac{1}{8}(q^2-q-2)(q^2-q-2+2\epsilon)$ $(q-\epsilon)^2(q^2+1)(q^2+\epsilon q+1)$ $\frac{1}{3}q(q^2-1)(q-\epsilon)$ $(q^4-1)(q^2-1)$ $\frac{1}{4}q^2(q^2-1)$ $(q-\epsilon)^2(q^2-1)(q^2+\epsilon q+1)$ : Representations of $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)$ We define $\operatorname{\bold{G}}^{\epsilon}_{(2,1,1)}$ to be the group $E^{\epsilon}\rtimes M^{\epsilon}$ where $$\begin{aligned} E^{1}&:=&\left\{\begin{pmatrix}1 & \alpha & \beta & \gamma \\ 0 & 1 & 0 & \delta \\ 0 & 0 & 1 &\eta\\ 0 & 0 & 0 & 1\end{pmatrix}:\alpha,\beta,\gamma, \delta,\eta\in\mathbb{F}_{q}\right\};\\ E^{-1}&:=&\left\{\begin{pmatrix}1 & \alpha & \beta & \gamma \\ 0 & 1 & 0 & \bar{\alpha} \\ 0 & 0 & 1 & \bar{\beta} \\ 0 & 0 & 0 & 1\end{pmatrix}:\alpha,\beta,\gamma\in\mathbb{F}_{q^2}; \alpha\bar{\alpha}+\beta\bar{\beta}=\gamma+\bar{\gamma}\right\};\\ M^{\epsilon}&:=&\left\{\begin{pmatrix}a & 0 & 0 & 0\\0 & w & z & 0\\0 & y & x & 0\\ 0 & 0 & 0 & a\end{pmatrix}: a\in \operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1), \begin{pmatrix}w & z\\y & x\end{pmatrix}\in\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)\right\}.\end{aligned}$$ Before describing the irreducible representations of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$ for $(\epsilon, n, l) = (1,4,2), (-1,4,2)$ we must first describe the irreducible representations of the groups $\operatorname{\bold{G}}^{\epsilon}_{(2,1,1)}$. 
The representation zeta function of the group $\operatorname{\bold{G}}^{\epsilon}_{(2,1,1)}$ is given by $$\zeta_{\operatorname{\bold{G}}^\epsilon_{(2,1,1)}}(s)=(q-1)(q-\epsilon)\zeta_{\operatorname{\bold{G}}^\epsilon_2(\mathfrak{o}_1)}(s)q^{-2s} + \zeta_{\operatorname{\bold{K}}^\epsilon}(s)$$ where $$\operatorname{\bold{K}}^{\epsilon} := (E^{\epsilon}/Z(E^{\epsilon}))\rtimes M^{\epsilon}$$ and $$\begin{aligned} \zeta_{\operatorname{\bold{K}}^1}(s)&=&(q-1)\zeta_{\operatorname{GL}_2(\mathfrak{o}_1)}(s)+2(q-1)^2(q^2-1)^{-s}+(q-1)(q+2)((q^2-1)(q-1))^{-s}+\\&&(q-1)^3(q(q^2-1))^{-s};\\ \zeta_{\operatorname{\bold{K}}^{-1}}(s)&=&(q+1)\zeta_{\operatorname{GU}_2(\mathfrak{o}_1)}(s)+(q^2-1)(q+1)(q(q^2-1))^{-s}+q(q^2-1)(q-1)(q+1)^{-2s}.\end{aligned}$$

The case $\epsilon = 1$ can be found in [@Pooja]. We proceed with the proof for $\epsilon = -1$, which is similar to the proof for $\epsilon = 1$. For simplicity, write $E:=E^{-1}$ and $M:=M^{-1}$, and let $H = E\rtimes M$. The group $E\cong H_4(\mathbb{F}_q)$, the Heisenberg group of degree $4$, has $q-1$ irreducible representations of dimension $q^2$ which lie above the non-trivial linear representations of the centre $Z=Z(E)\cong\mathbb{F}_q$. The group $M$ acts trivially on $Z$ and hence stabilises all the $q^2$-dimensional irreducible characters of $E$. Each of these representations extends to $H$, and tensoring the extensions with the irreducible representations of $M$ contributes $(q-1)\zeta_M(s)q^{-2s}=(q-1)(q+1)q^{-2s}\zeta_{\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)}(s)$ to the zeta function of $H$. We now deal with the remaining representations. These correspond to representations of $E$ whose central representation is trivial and factor through $Q = E/Z$. Consider $Q\rtimes M$ and identify $Q$ and its dual $Q^{\vee}$ with the additive group $\mathbb{F}_{q^2}\times\mathbb{F}_{q^2}$. The action of $m\in M$ on $Q^{\vee}$ is given by $\mathbb{F}_{q^2}\times\mathbb{F}_{q^2}\ni(u,v)\mapsto(a(u\bar{x}+v\bar{y}), a(-u\bar{D}y+vx\bar{D}))$ where we write an element $m\in M$ as $$m = \begin{pmatrix} a & 0 & 0 & 0\\0 & x & y & 0\\0 & -\bar{y}D & \bar{x}D & 0\\ 0 & 0 & 0 & a\end{pmatrix}$$ for some $a,x,y,D\in\mathbb{F}_{q^2}$ satisfying $a\bar{a} = 1$, $x\bar{x}+y\bar{y} = 1$ and $D\bar{D} = 1$. We now use Mackey’s method for semi-direct products (see [@JPS], Section 8.2). The orbits of $M$ on $Q^{\vee}$ are:

\[tab:table1\]

  Orbit     Parameter                                                   Stabiliser in $M$
  --------- ----------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------
  $[0,0]$   $-$                                                         $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$
  $[s,0]$   $s\in\mathbb{F}_{q^2}/\operatorname{GU}_1(\mathbb{F}_q)$    $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times \operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$
  $[s,1]$   $s\in\mathbb{F}_{q^2}/\operatorname{GU}_1(\mathbb{F}_q)$    $T$

where $$T=\left\{\begin{pmatrix}x & y\\ y & x\end{pmatrix}:x,y\in\mathbb{F}_{q^2}, x\bar{x}+y\bar{y}=1\right\}.$$ This completes the proof.

The irreducible representations of $\operatorname{\bold{G}}^{\epsilon}_n(\mathfrak{o}_l)$ for $(\epsilon, n, l) = (1,4,2)$ can be found in [@Pooja]. We have now found every irreducible representation for $(\epsilon, n, l) = (-1,4,2)$ and we summarise this in Table \[table42\].
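As a consistency check on the proposition above (again only a sketch; $\zeta_{\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)}(s)$ is read off from Table \[table21\], and $|E^{\epsilon}|=q^5$, $|M^{\epsilon}|=(q-\epsilon)\,|\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)|$ from the definitions), one can verify symbolically that the stated formula satisfies $\zeta_{\operatorname{\bold{G}}^{\epsilon}_{(2,1,1)}}(-2)=|E^{\epsilon}|\,|M^{\epsilon}|=q^{6}(q-\epsilon)^{3}(q+\epsilon)$ for both values of $\epsilon$.

```python
import sympy as sp

q, s = sp.symbols('q s')

def zeta_G2_o1(e):
    # Table [table21]
    return ((q - e) + (q - e)*q**(-s)
            + sp.Rational(1, 2)*(q - e - 1)*(q - e)*(q + e)**(-s)
            + sp.Rational(1, 2)*(q + e - 1)*(q - e)*(q - e)**(-s))

# zeta functions of K^eps as in the proposition
zeta_K = {
    +1: ((q - 1)*zeta_G2_o1(+1) + 2*(q - 1)**2*(q**2 - 1)**(-s)
         + (q - 1)*(q + 2)*((q**2 - 1)*(q - 1))**(-s)
         + (q - 1)**3*(q*(q**2 - 1))**(-s)),
    -1: ((q + 1)*zeta_G2_o1(-1) + (q**2 - 1)*(q + 1)*(q*(q**2 - 1))**(-s)
         + q*(q**2 - 1)*(q - 1)*(q + 1)**(-2*s)),
}

for e in (+1, -1):
    zeta = (q - 1)*(q - e)*zeta_G2_o1(e)*q**(-2*s) + zeta_K[e]
    order = q**6*(q - e)**3*(q + e)          # |E^eps| * |M^eps|
    assert sp.simplify(zeta.subs(s, -2) - order) == 0
print("zeta(-2) = |G^eps_(2,1,1)| for eps = +1 and -1")
```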
The number of complex irreducible representations of each degree of the groups $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$ can be obtained from the information in Table \[table42\] using equation (\[eqnzeta\]). \[main\] The zeta function of the group $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$ is given by $$\zeta_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)}(s) = \sum_{A\in\mathbb{T}^{\epsilon}_4}n_A\zeta_{Z_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)}(A)}(s)|\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1):Z_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)}(A)|^{-s},$$ where $\mathbb{T}^{\epsilon}_4$ denotes the set of types of similarity classes in $\operatorname{\mathfrak{g}}^{\epsilon}_4(\mathfrak{o}_1)$. \[table42\] ---------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------- -------------------------- -------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------ Type $A\in\mathbb{T}^{\epsilon}_4$ Parameter Number of similarity Isomorphism type Index of $Z$ in $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)$ classes, $n_A$ $Z$ of $Z_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)}(A)$ $\parbox{5cm}{$\{(t-\alpha)^{(1,1,1,1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1) $ $1$ $\parbox{5cm}{$\{(t-\alpha)^{(2,1,1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_{(2,1,1)}$ $(q^2+1)(q^3-\epsilon)(q+\epsilon)$ $\parbox{5cm}{$\{(t-\alpha)^{(2,2)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_2)$ $q(q^4-1)(q^3-\epsilon)$ $\parbox{5cm}{$\{(t-\alpha)^{(3,1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_{(3,1)}$ $q^2(q^4-1)(q^3-\epsilon)(q+\epsilon)$ $\parbox{5cm}{$\{(t-\alpha)^{(4)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1: \alpha+\alpha^\circ=0$}$ $q$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_4)$ $q^3(q^4-1)(q^3-\epsilon)(q^2-1)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(1,1,1)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,/,\alpha_1\neq\alpha_2$}$ $q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_3(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$ $q^3(q+\epsilon)(q^2+1)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(2,1)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\,\alpha_1\neq\alpha_2$}$ $q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_{(2,1)}\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$ $q^3(q^2+1)(q+\epsilon)^2(q^3-\epsilon)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(3)},(t-\alpha_2)^{(1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\,\alpha_1\neq\alpha_2$}$ $q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_3)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)$ 
$q^4(q^4-1)(q^3-\epsilon)(q+\epsilon)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(1,1)},(t-\alpha_2)^{(1,1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\, \alpha_1\neq\alpha_2$}$ $\frac{1}{2}q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ $q^4(q^2+1)(q^2+\epsilon q+1)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(2)},(t-\alpha_2)^{(1,1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\, \alpha_1\neq\alpha_2$}$ $q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{0}_2)\times\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)$ $q^4(q^2+\epsilon q+1)(q^4-1)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(2)},(t-\alpha_2)^{(2)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_1\neq\alpha_2\in\mathbb{F}_q$\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0,\, \alpha_1\neq\alpha_2$}$ $\frac{1}{2}q(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_2)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_2)$ $q^4(q+\epsilon)(q^4-1)(q^3-\epsilon)$ $\parbox{5cm}{$\{(t-\alpha_1)^{(1,1)},(t-\alpha_2)^{(1)},(t-\alpha_3)^{(1)}\}$}$ $\parbox{4.4cm}{$\epsilon=1:\alpha_i\in\mathbb{F}_q$ distinct\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0$, distinct}$ $\frac{1}{2}q(q-1)(q-2)$ $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)^2$ $q^5(q+\epsilon)(q^2+1)(q^2+\epsilon q+1)$ ---------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------- -------------------------- -------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------ : Representatives of similarity classes in $\operatorname{\mathfrak{g}}^{\epsilon}_4(\mathfrak{o}_1)$ under $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_2)$ \[tab:table1\] ------------------------------------------------------------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------- -------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------- Type $A\in\mathbb{T}^{\epsilon}_4$ Parameter Number of similarity Isomorphism type Index of $Z$ classes, $n_A$ $Z$ of $Z_{\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)}(A)$ in $\operatorname{\bold{G}}^{\epsilon}_4(\mathfrak{o}_1)$ $\parbox{6.5cm}{$\{(t-\alpha_1)^{(2)},(t-\alpha_2)^{(1)},(t-\alpha_3)^{(1)}\}$}$ $\parbox{6.75cm}{$\epsilon=1:\alpha_i\in\mathbb{F}_q$ distinct\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0$ distinct}$ $\frac{1}{2}q(q-1)(q-2)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)^3$ $q^5(q^2+1)(q+\epsilon)^2(q^3-\epsilon)$ $\parbox{6.5cm}{$\{(t-\alpha_i)^{(1)}\}_{i=1,2,3,4}$}$ $\parbox{6.75cm}{$\epsilon=1:\alpha_i\in\mathbb{F}_q$ distinct\\$\epsilon=-1: \alpha_i+\alpha_i^\circ=0$ distinct}$ $\frac{1}{24}q(q-1)(q-2)(q-3)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)^4$ $\parbox{3cm}{$q^6(q^3+\epsilon 
q^2+q+\epsilon)\times$\\$(q+\epsilon)(q^2+\epsilon q+ 1)$}$ $\parbox{6.5cm}{$\epsilon = 1: \{(t-\alpha)^{(1,1)},f^{(1)}\}$\\$\epsilon=-1:\{(t-\alpha_1)^{(1,1)},(t-\alpha_2)^{(1)},(t-\alpha_3)^{(1)}\}$}$ $\parbox{6.75cm}{$\epsilon=1:\alpha\in\mathbb{F}_q,\,f$ irreducible quadratic\\$\epsilon=-1:\alpha_1+\alpha_1^{\circ} = 0,\,\alpha_2=-\alpha_3^\circ$ distinct}$ $\frac{1}{2}q^2(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_2(\mathfrak{o}_1)\times\mathbb{F}_{q^2}^*$ $q^5(q^3-\epsilon)(q^4-1)$ $\parbox{6.5cm}{$\epsilon = 1: \{(t-\alpha)^{(2)},f^{(1)}\}$\\$\epsilon=-1:\{(t-\alpha_1)^{(2)},(t-\alpha_2)^{(1)},(t-\alpha_3)^{(1)}\}$}$ $\parbox{6.75cm}{$\epsilon=1:\alpha\in\mathbb{F}_q,\,f$ irreducible quadratic\\$\epsilon=-1:\alpha_1+\alpha_1^{\circ} = 0,\,\alpha_2=-\alpha_3^\circ$ distinct}$ $\frac{1}{2}q^2(q-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_2)\times\mathbb{F}_{q^2}^*$ $q^5(q^3-\epsilon)(q^4-1)$ $\parbox{6.5cm}{$\epsilon = 1: \{(t-\alpha_1)^{(1)},(t-\alpha_2)^{(1)},f^{(1)}\}$\\$\epsilon=-1:\{(t-\alpha_i)^{(1)}\}_{i=1,2,3,4}$}$ $\parbox{6.75cm}{$\epsilon=1:\alpha_i\in\mathbb{F}_q$ distinct, $f$ irreducible quadratic\\$\epsilon=-1:\alpha_i+\alpha_i^{\circ} = 0,\,i=1,2$ distinct\\$\alpha_3=-\alpha_4^\circ$ distinct}$ $\frac{1}{4}q^2(q-1)^2$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)^2 \times\mathbb{F}_{q^2}^*$ $q^6(q+\epsilon)(q^2+1)(q^3-\epsilon)$ $\parbox{6.5cm}{$\epsilon = 1: \{f^{(1,1)}\}$\\$\epsilon=-1:\{(t-\alpha_1)^{(1,1)},(t-\alpha_2)^{(1,1)}\}$}$ $\parbox{6.75cm}{$\epsilon=1:f$ irreducible quadratic\\$\epsilon=-1:\alpha_1=-\alpha_2^\circ$ distinct}$ $\frac{1}{2}q(q-1)$ $\operatorname{GL}_2(\mathbb{F}_{q^2})$ $q^4(q-\epsilon)(q^3-\epsilon)$ $\parbox{6.5cm}{$\epsilon = 1: \{f^{(2)}\}$\\$\epsilon=-1:\{(t-\alpha_1)^{(2)},(t-\alpha_2)^{(2)}\}$}$ $\parbox{6.75cm}{$\epsilon=1:f$ irreducible quadratic\\$\epsilon=-1:\alpha_1=-\alpha_2^\circ$ distinct}$ $\frac{1}{2}q(q-1)$ $\mathbb{F}_{q^2}\times\mathbb{F}_{q^2}^*$ $q^4(q^4-1)(q^3-\epsilon)(q-\epsilon)$ $\parbox{6.5cm}{$\epsilon = 1: \{f^{(1)},g^{(1)}\}$\\$\epsilon=-1:\{(t-\alpha_i)^{(1)}\}_{i=1,2,3,4}$}$ $\parbox{6.75cm}{$\epsilon=1:f\neq g$ irreducible quadratics\\$\epsilon=-1:\alpha_1=-\alpha_2^\circ$ distinct, $\alpha_3=-\alpha_4^\circ$ distinct\\$\{\alpha_1,\alpha_2\}\neq\{\alpha_3,\alpha_4\}$}$ $\frac{1}{8}q(q-1)(q^2-q-2)$ $\mathbb{F}_{q^2}^*\times\mathbb{F}_{q^2}^*$ $q^6(q^2+1)(q^3-\epsilon)(q-\epsilon)$ $\parbox{6.5cm}{$\{(t-\alpha)^{(1)},f^{(1)}\}$ $f$ irreducible cubic}$ $\parbox{6.75cm}{$\epsilon=1:\alpha\in\mathbb{F}_q$\\$\epsilon=-1:\alpha+\alpha^\circ=0$ *}$ $\frac{1}{3}q^2(q^2-1)$ $\operatorname{\bold{G}}^{\epsilon}_1(\mathfrak{o}_1)\times\operatorname{\bold{G}}^{\epsilon}_1(\mathbb{F}_{q^3})$ $q^6(q^4-1)(q^2-1)$ $\parbox{6.5cm}{$\{f^{(1)}\}$ $f$ irreducible quartic}$ $\parbox{6.75cm}{**}$ $\frac{1}{4}q^2(q^2-1)$ $\mathbb{F}_{q^4}^*$ $q^6(q-\epsilon)(q^2-1)(q^3-\epsilon)$ ------------------------------------------------------------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------- -------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------- If $\epsilon=-1$ we require that 
$f=t^3+\sum_{i=0}^{2}c_it^i\in\mathbb{F}_{q^2}[t]$ where $c_i^{\circ}=(-1)^{i+1}c_i$ for $0\leq i<3$.\
\*\* If $\epsilon=-1$ we require that $f=t^4+\sum_{i=0}^{3}c_it^i\in\mathbb{F}_{q^2}[t]$ where $c_i^{\circ}=(-1)^{i}c_i$ for $0\leq i<4$.

Acknowledgements
================

I would like to thank Christopher Voll for introducing me to this problem and for his support, guidance and insight throughout this project.
--- author: - | Stephen Semmes\ Rice University title: 'Cellular structures, quasisymmetric mappings, and spaces of homogeneous type' --- Let $X$ be a compact Hausdorff topological space. A collection $\mathcal{C}$ of nonempty subsets of $X$ may be described as a *cellular structure* for $X$ if it satisfies the following three properties. First, each $C \in \mathcal{C}$ is both open and closed, and $X \in \mathcal{C}$. Second, $\mathcal{C}$ is a base for the topology of $X$. Third, if $C, C' \in \mathcal{C}$, then either $$C \cap C' = \emptyset, \hbox{ or } C \subseteq C', \hbox{ or } C' \subseteq C.$$ In this case, a set $C \in \mathcal{C}$ may be called a *cell* in $X$. A compact Hausdorff space with a cellular structure is a *cellular space*. For example, the usual construction of the Cantor set leads to a natural cellular structure, where the cells are the parts of the Cantor set in the closed intervals generated in the construction. Of course, a Hausdorff topological space with a base for its topology consisting of sets that are both open and closed is automatically totally disconnected, in the sense that there are no connected subsets with more than one element. The collection of all subsets of the space that are both open and closed is then an algebra of sets as well as a base for the topology. One can think of a cellular structure as a kind of geometric structure on such a space. For the sake of simplicity, let us restrict our attention to compact spaces, although one could consider non-compact spaces too. For instance, one might consider locally compact Hausdorff spaces that are $\sigma$-compact. Let $(X, \mathcal{C})$ be a cellular space, and suppose that $A \subseteq X$ is open and closed. In particular, $A$ is compact, since $X$ is compact. Because $A$ is open and $\mathcal{C}$ is a base for the topology of $X$, $A$ can be expressed as the union of a collection of cells. If $A = \emptyset$, then one can interpret this as meaning that $A$ is the union of the empty collection of cells. This is an open covering of $A$, since cells are open sets. By compactness, $A$ is the union of finitely many cells. Using the nesting property of cells, it follows that $A$ is the union of finitely many disjoint cells. Suppose that $C_1, \ldots, C_n$ are finitely many pairwise-disjoint cells in $X$. In particular, $C_1, \ldots C_n$ are both open and closed, as is $$A = X \backslash (C_1 \cup \cdots \cup C_n).$$ The preceding observation implies that $X$ is the union of a collection of finitely many pairwise-disjoint cells that includes the $C_i$’s. Suppose that $C_1, \ldots, C_n$ are finitely many pairwise-disjoint cells whose union is $X$. If $C$ is a cell such that $C_i \subseteq C$ for some $i$, $1 \le i \le n$, then $C$ is the union of some of the $C_j$’s. Hence there can only be finitely many such cells $C$. It follows that every cell in $X$ is contained in only finitely many other cells. If $C$ is a cell with at least two elements, then every point in $C$ is contained in a cell that is a proper subset of $C$. By compactness, $C$ is the union of finitely many cells that are proper subsets of $C$. As usual, the smaller cells can also be taken to be pairwise disjoint. If a cell has only one element $p$, then $p$ is an isolated point in $X$. For each $p \in X$, $$\mathcal{C}(p) = \{C \in \mathcal{C} : p \in C\}$$ is linearly ordered by inclusion, because of the nesting property for cells. 
Thus the collection of cells that contains a fixed cell $C_0$ is finite and linearly ordered by inclusion, which means that any cell in some collection of cells is contained in a maximal cell in the same collection. Using this, one can check that every cell $C$ with at least two elements is the union of finitely many cells that are proper subsets of $C$ and maximal with respect to inclusion. Note that maximal cells in any collection are automatically pairwise disjoint. A consequence of these remarks is that there are only finitely or countably many cells in $X$. This is trivial when $X$ has only one element, and otherwise $X$ is the union of finitely many pairwise-disjoint proper sub-cells $C_1, \ldots, C_n$. There are only finitely many cells that contain one of the $C_i$’s, and every other cell is contained in one of the $C_i$’s. Each $C_i$ with at least two elements is also a union of finitely many pairwise-disjoint proper sub-cells, and so one can repeat the process. Every cell is contained in only finitely many other cells, and hence is reached in finitely many steps. Let $(X, \mathcal{C})$ be a cellular space, and suppose that $Y \subseteq X$ is nonempty and compact. Put $$\mathcal{C}_Y = \{C \cap Y : C \in \mathcal{C}, \, C \cap Y \ne \emptyset\}.$$ This is a cellular structure on $Y$, which is induced from the one on $X$. The example of the Cantor set can be extended, as follows. Let $X_1, X_2, \ldots$ be a sequence of finite sets with at least two elements, and let $X$ be the set of sequences $x = \{x_i\}_{i = 1}^\infty$ such that $x_i \in X_i$ for each $i$. Thus $X$ is the Cartesian product of the $X_i$’s, which is a compact Hausdorff space with respect to the product topology using the discrete topology on each $X_i$. For every nonnegative integer $l$ and $x \in X$, let $N_l(x)$ be the set of $y \in X$ such that $y_i = x_i$ when $i \le l$. Note that $N_0(x) = X$, and $N_l(x)$ is both open and closed in $X$ for all $l \ge 0$ and $x \in X$. The collection of $N_l(x)$’s is a base for the product topology on $X$. It is easy to check that the collection of $N_l(x)$’s defines a cellular structure for $X$. Let us call this the *product cellular structure* on $X$. If $(X, \mathcal{C})$ is a cellular space, then there is a natural graph $\mathcal{T}$ whose vertices are the cells in $X$. Specifically, we can attach an edge between the vertices associated to two cells $C$, $C'$ when $C' \subseteq C$, $C' \ne C$, and $C'$ is a maximal proper sub-cell in $C$. The nesting property of cells implies that $\mathcal{T}$ is a tree. By definition, $X$ is a cell, which we can take to be the root of the tree. Conversely, suppose that $\mathcal{T}$ is a locally-finite tree with root $\tau$. A *ray* in $\mathcal{T}$ is a simple path beginning at $\tau$ and continuing as long as possible. More precisely, a ray in $\mathcal{T}$ may stop after finitely many steps when it arrives at a vertex with no additional edge to follow, or it may traverse infinitely many edges. Let $X$ be the set of rays in $\mathcal{T}$. It is convenient to represent a ray in $\mathcal{T}$ as an infinite sequence of vertices in $\mathcal{T}$, where the last vertex of a finite ray is repeated indefinitely. For each nonnegative integer $l$, let $X_l$ be the set of vertices of $\mathcal{T}$ that can be reached from $\tau$ in $\le l$ steps. Thus $X_l$ has only finitely many elements for every $l \ge 0$, and $X$ can be identified with a subset of the Cartesian product of the $X_l$’s. 
It is easy to see that this is a closed set in the product topology, so that $X$ becomes a compact Hausdorff space using the induced topology. Each finite simple path in $\mathcal{T}$ starting at $\tau$ determines a set of rays, i.e., the set of rays that are continuations of the path. One can use these sets of rays as cells in $X$, which are the same as the cells induced from the product cellular structure. Suppose that $(X, d(x, y))$ is a compact ultrametric space. This means that $(X, d(x, y))$ is a compact metric space, and that $$d(x, z) \le \max(d(x, y), d(y, z))$$ for every $x, y, z \in X$. In an ultrametric space, closed balls with positive radii are open sets, and form a base for the topology. The ultrametric version of the triangle inequality also implies that any two closed balls are either disjoint or one is contained in the other, so that closed balls with positive radii determine a cellular structure on $X$. More precisely, a set $C \subseteq X$ would be a cell if it could be expressed as a closed ball with some center $x \in X$ and radius $r > 0$, but $x$ and $r$ are not necessarily uniquely determined by $C$. If $X = \prod_{i = 1}^\infty X_i$ with the product cellular structure, then compatible ultrametrics on $X$ can be obtained as follows. Let $\rho = \{\rho_i\}_{i = 0}^\infty$ be a strictly decreasing sequence of positive real numbers such that $\rho_0 = 1$ and $$\lim_{i \to \infty} \rho_i = 0.$$ For each $x, y \in X$, put $d_\rho(x, y) = 0$ when $x = y$, and otherwise $$d_\rho(x, y) = \rho_l$$ where $l$ is the largest nonnegative integer such that $x_i = y_i$ when $i \le l$. It is not difficult to verify that $d_\rho(x, y)$ is an ultrametric on $X$ for which the corresponding topology is the product topology, and for which the closed balls are the cells in the product cellular structure. A standard regularity condition for $\rho$ asks that there be real numbers $0 < a \le b < 1$ such that $$\label{regularity condition} a \le \frac{\rho_{i + 1}}{\rho_i} \le b$$ for each $i \ge 0$. For instance, this holds if $\rho_i$ is the $i$th power of a fixed positive real number less than $1$. If $\rho$, $\widetilde{\rho}$ are two such sequences, then the corresponding ultrametrics $d_\rho(x, y)$, $d_{\widetilde{\rho}}(x, y)$ are quasisymmetrically equivalent, in the sense that the identity mapping on $X$ is quasisymmetric as a mapping from $(X, d_\rho(x, y))$ to $(X, d_{\widetilde{\rho}}(x, y))$. Remember that a metric space $(X, d(x, y))$ is said to be *doubling* if every ball in $X$ can be covered by a bounded number of balls of half the radius. A positive Borel measure on $X$ is said to be a *doubling measure* with respect to the metric if the measure of every ball is bounded by a constant times the measure of the ball with the same center and half the radius. A well-known covering argument implies that a metric space with a doubling measure is doubling. In analogy with this, let us say that a cellular space $(X, \mathcal{C})$ is *doubling* if there is a $k_1 \ge 1$ such that every cell $C$ in $X$ contains no more than $k_1$ maximal proper sub-cells. Similarly, a positive Borel measure $\mu$ on $X$ is a *doubling measure* with respect to the cellular structure if $$0 < \mu(C) < \infty$$ for every cell $C$, and if there is a $k_2 \ge 1$ such that $$\mu(C) \le k_2 \, \mu(C')$$ whenever $C$, $C'$ are cells such that $C' \subseteq C$, $C' \ne C$, and $C'$ is a maximal proper sub-cell in $C$.
This implies that $C$ has at most $k_2$ maximal proper sub-cells, since $C$ is the disjoint union of its maximal proper sub-cells. For example, if $X = \prod_{i = 1}^\infty X_i$ with the product cellular structure, then $X$ is doubling with constant $k_1$ if and only if each $X_i$ has at most $k_1$ elements. A nice class of measures on $X$ is given by product measures $\mu = \prod_{i = 1}^\infty \mu_i$, where each $\mu_i$ is a probability measure on the finite set $X_i$. Thus $\mu_i$ is defined by assigning weights to the elements of $X_i$ whose sum is $1$, and $\mu$ is doubling on $X$ with constant $k_2$ if and only if the $\mu_i$ measure of each element of $X_i$ is at least $1/k_2$. If $\rho$ is a strictly decreasing sequence of positive real numbers that satisfies the regularity condition (\[regularity condition\]), then the doubling condition for $X$ as a metric space with the metric $d_\rho(x, y)$ is also equivalent to the boundedness of the number elements of the $X_i$’s. Also, a positive Borel measure on $X$ is then doubling with respect to the metric $d_\rho(x, y)$ if and only if it is doubling with respect to the associated cellular structure. Let $(X, \mathcal{C})$ be a cellular space with a metric $d(x, y)$ that determines the same topology on $X$. A more abstract version of the regularity condition (\[regularity condition\]) for the compatibility of the metric $d(x, y)$ with the cellular structure $\mathcal{C}$ asks that there be positive real numbers $\alpha$, $\beta$, $\gamma$ with $\alpha \le \beta < 1$ such that $$\label{regularity condition, 1} \alpha \, \diam C \le \diam C' \le \beta \diam C$$ for every cell $C$ in $X$ and maximal proper sub-cell $C'$ in $C$, and $$\label{regularity condition, 2} \dist (C', C'') \ge \gamma \, \diam C$$ when $C'$, $C''$ are distinct maximal proper sub-cells of a cell $C$. As usual, $\diam C$ denotes the diameter of $C$, which is the supremum of the distances between elements of $C$, and $\dist (C', C'')$ denotes the distance between $C'$ and $C''$, which is to say the infimum of the distances between elements of $C'$ and $C''$. In the special case where $X = \prod_{i = 1}^\infty X_i$ with the product cellular structure and $d(x, y) = d_\rho(x, y)$, $\alpha$ and $\beta$ correspond exactly to $a$ and $b$, and one can take $\gamma = 1$. Note that a cell $C$ with at least two elements has a proper sub-cell, since the cells form a base for the topology. Conversely, if a cell $C$ has a proper sub-cell, then $C$ has at least two elements, and hence the diameter of $C$ is positive. If $C'$ is a maximal proper sub-cell of a cell $C$, then the preceding regularity condition implies that $C'$ has positive diameter as well. Applying this repeatedly, it follows that $X$ has no isolated points when $X$ has at least two elements. Suppose that $d(x, y)$ is an ultrametric on $X$ and $\mathcal{C}$ consists of the closed balls in $X$ with positive radius. If $C$ is a cell in $X$ with diameter $r$, then $C$ is the same as the closed ball in $X$ defined by $d(x, y)$ with radius $r$ and centered at any element of $C$. It may be that $r = 0$, so that $C$ consists of a single point $p$, in which case $p$ should be an isolated point in $X$. In any case, it may be possible to represent $C$ as a ball of radius larger than $r$. Note that the diameter of a ball of radius $t \ge 0$ in an ultrametric space is less than or equal to $t$, while in an ordinary metric space it is less than or equal to $2 \, t$ and often equal to $2 \, t$. 
If $C$, $C'$ are cells in $X$ such that $C' \subseteq C$ and $C' \ne C$, then $$\diam C' < \diam C.$$ Indeed, $\diam C' \le \diam C$ since $C' \subseteq C$, and equality of the diameters would imply that $C' = C$, by the previous remarks. If $C'$, $C''$ are cells in $X$ and $$t = \max(\diam C', \diam C'', \dist(C', C'')),$$ then $$d(x, y) \le t$$ for every $x \in C'$ and $y \in C''$, and $C' \cup C''$ is contained in the closed ball with radius $t$ centered at any point in $C' \cup C''$. If $C'$, $C''$ are distinct maximal proper sub-cells of a cell $C$, then $\diam C = t$, because $t \le \diam C$ by the inclusion $C', C'' \subseteq C$, and $t < \diam C$ would imply that there is a proper sub-cell of $C$ that contains both $C'$ and $C''$, contradicting their maximality. Thus one can take $\gamma = 1$ when $d(x, y)$ is an ultrametric on $X$ and $\mathcal{C}$ is the cellular structure associated to the ultrametric. As in the product case, if $d(x, y)$ satisfies the regularity conditions (\[regularity condition, 1\]) and (\[regularity condition, 2\]), then $X$ is doubling with respect to $d(x, y)$ if and only if $X$ is doubling with respect to the cellular structure $\mathcal{C}$, and a positive Borel measure on $X$ is doubling with respect to $d(x, y)$ if and only if it is doubling with respect to $\mathcal{C}$. If $\widetilde{d}(x, y)$ is another metric on $X$ that determines the same topology and satisfies the regularity conditions, then $d(x, y)$ and $\widetilde{d}(x, y)$ are quasisymmetrically equivalent in the sense that the identity mapping is quasisymmetric as a mapping from $(X, d(x, y))$ to $(X, \widetilde{d}(x, y))$.

Of course, there are interesting situations where the regularity conditions do not hold. For example, one can have fat Cantor sets in the real line for which the standard Euclidean metric satisfies (\[regularity condition, 1\]) but not (\[regularity condition, 2\]), and which are doubling with respect to both the metric and cellular structure. One may have an upper bound as in (\[regularity condition, 1\]) with $\beta < 1$ but not a lower bound. It may be that (\[regularity condition, 2\]) still holds, or one might ask for a lower bound in terms of a multiple of the diameters of $C'$ and $C''$. It may be that the regularity conditions are satisfied, and that $X$ is quite large and not doubling.

Let $(X, \mathcal{C})$ be a cellular space, and let $\rho$ be a nonnegative real-valued function on $\mathcal{C}$ such that $\rho(C) = 0$ if and only if $C$ has only one element, and $$\rho(C') < \rho(C)$$ when $C, C' \in \mathcal{C}$, $C' \subseteq C$, and $C' \ne C$. For $x, y \in X$, put $d_\rho(x, y) = 0$ when $x = y$, and otherwise $$\label{d_rho(x, y) = rho(C(x, y))} d_\rho(x, y) = \rho(C(x, y))$$ where $C(x, y)$ is the minimal cell that contains $x$ and $y$. Thus $d_\rho(x, y) > 0$ when $x \ne y$, and $$d_\rho(y, x) = d_\rho(x, y)$$ for every $x, y \in X$. Let us check that $$d_\rho(x, z) \le \max(d_\rho(x, y), d_\rho(y, z))$$ for every $x, y, z \in X$. This is trivial when $x = y$ or $y = z$, and so we may suppose that $x \ne y \ne z$. If $C(x, y)$, $C(y, z)$ are the minimal cells that contain $x, y$ and $y, z$, respectively, then either $C(x, y) \subseteq C(y, z)$ or $C(y, z) \subseteq C(x, y)$, since $C(x, y)$ and $C(y, z)$ both contain $y$ and are therefore not disjoint. Thus $z \in C(x, y)$ or $x \in C(y, z)$, and the inequality follows. This shows that $d_\rho(x, y)$ is an ultrametric on $X$.
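Note that the minimal-cell recipe needs no product or sequence structure: any function $\rho$ that vanishes exactly on singletons and decreases strictly along containment will do. As a small illustration (a hypothetical rooted tree with randomly chosen values of $\rho$; none of this is from the source), the sketch below defines $d_\rho(x, y) = \rho(C(x, y))$ via lowest common ancestors and checks the ultrametric inequality.

```python
import random

random.seed(0)

class Node:
    """A node of a rooted tree; the set of leaves below it is a cell."""
    def __init__(self, rho, depth):
        self.rho, self.depth, self.children = rho, depth, []

def build(depth, rho, max_depth=5, branch=2):
    # rho decreases strictly along containment but is otherwise arbitrary;
    # single-leaf cells get rho = 0, as required.
    node = Node(0.0 if depth == max_depth else rho, depth)
    if depth < max_depth:
        node.children = [build(depth + 1, rho * random.uniform(0.1, 0.9),
                               max_depth, branch) for _ in range(branch)]
    return node

root = build(0, 1.0)

def leaves(node, path=()):
    if not node.children:
        return [path]
    return [p for i, c in enumerate(node.children) for p in leaves(c, path + (i,))]

def minimal_cell(x, y):
    node = root
    for xi, yi in zip(x, y):
        if xi != yi:
            return node
        node = node.children[xi]
    return node

def d_rho(x, y):
    return 0.0 if x == y else minimal_cell(x, y).rho

X = leaves(root)
for _ in range(20000):
    x, y, z = (random.choice(X) for _ in range(3))
    assert d_rho(x, z) <= max(d_rho(x, y), d_rho(y, z))
print("d_rho built from a cell function rho is an ultrametric")
```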
The topology determined by $d_\rho(x, y)$ is the same as the initial topology on $X$ if for each $x \in X$ and $\epsilon > 0$ there is a cell $C$ such that $x \in C$ and $\rho(C) < \epsilon$. By compactness, this is the same as saying that for each $\epsilon > 0$ there are finitely many cells $C_1, \ldots, C_n$ such that $X = \bigcup_{i = 1}^n C_i$ and $\rho(C_i) < \epsilon$ for each $i$.

Suppose that $C$ is a cell with at least two elements. Thus $C$ contains a proper sub-cell, and hence a maximal proper sub-cell $C'$. If $x \in C'$ and $y \in C \backslash C'$, then $C$ is the minimal cell that contains both $x$ and $y$. This implies that the diameter of $C$ is equal to $\rho(C)$ with respect to $d_\rho$, which holds trivially when $C$ has only one element. One can check that each cell $C$ is equal to the closed ball centered at any element of $C$ with radius $\rho(C)$ with respect to $d_\rho$, and that every closed ball of positive radius with respect to $d_\rho$ is a cell.

If $X$ has at least two elements and no isolated points, then every cell has at least two elements. This implies that every cell has at least two distinct maximal proper sub-cells. In this case, one can choose $\rho$ so that $d_\rho$ satisfies the regularity conditions (\[regularity condition, 1\]) and (\[regularity condition, 2\]). As in the earlier examples, one might also be interested in metrics that do not satisfy the regularity conditions. The setting of cellular spaces seems to be quite natural for having some nice properties while at the same time accommodating a range of possibilities.
--- abstract: 'The HyperCP collaboration has recently reported the observation of three events for the decay $\Sigma^{+}\to p \mu^{+}\mu^{-}$. They have suggested that new physics may be required to understand the implied decay rate and the observed $M_{\mu\mu}^{}$ distribution. Motivated by this result, we re-examine this mode within the standard model, considering both the short-distance and long-distance contributions. The long-distance part depends on four complex form-factors. We determine their imaginary parts from unitarity, fix two of the real parts from the $\Sigma^{+}\to p \gamma$ measurements, and estimate the other two with vector-meson-dominance models. Taking into account constraints from $\Sigma^{+}\to p e^{+}e^{-}$, we find that $\Sigma^{+}\to p \mu^{+}\mu^{-}$ is long-distance dominated and its rate falls within the range suggested by the HyperCP measurement.' author: - 'Xiao-Gang He' - Jusak Tandean - 'G. Valencia' title: 'The Decay $\bm{\Sigma^{+}\to p \ell^{+}\ell^{-}}$ within the Standard Model' --- hep-ph/0506067\ WSU-HEP-0504 Introduction ============ Three events for the decay mode $\Sigma^{+}\to p \mu^{+}\mu^{-}$ have been recently observed by the HyperCP (E871) collaboration [@Park:2005ek] with results that suggest new physics may be needed to explain them. In this paper we re-examine this mode [@Bergstrom:1987wr] within the standard model. There are short- and long-distance contributions to this decay. In the standard model (SM), the leading short-distance contribution comes from the $Z$-penguin and box diagrams, as well as the electromagnetic penguin with the photon connected to the dimuon pair [@Buchalla:1995vs]. We find that this contribution yields a branching ratio of order $10^{-12}$, which is much smaller than the central experimental value of $8.6\times 10^{-8}$ reported by HyperCP [@Park:2005ek]. It is well known that the long-distance contribution to the weak radiative mode $\Sigma^+\to p\gamma$ is much larger than the short-distance contribution. It is therefore also possible to have enhanced long-distance contributions to $\Sigma^{+}\to p\mu^{+}\mu^{-}$ via an intermediate virtual photon from $\Sigma^+\to p\gamma$. We find that the resulting branching ratio is in agreement with the measured value. There is, of course, still the possibility [@He:1999ik] that new physics is responsible for the observed branching ratio of $\Sigma^+\to p\gamma$ and hence that of $\Sigma^{+}\to p\mu^{+}\mu^{-}$. This implies that it is essential to have an up-to-date estimate of the standard-model contributions, on which we concentrate in this work. In Sec. \[sd\] we update the estimate of the short-distance amplitude. We use the standard effective Hamiltonian for the $s\to d\ell^{+}\ell^{-}$ transition [@Buchalla:1995vs] supplemented with hadronic matrix elements for the relevant currents. In Sec. \[ld\] we study the long-distance contributions mediated by a real or a virtual photon. These can be parameterized by four (complex) gauge-invariant form-factors [@Bergstrom:1987wr]. We determine the imaginary parts of these form factors from unitarity. The real parts of two of the form factors can be reasonably assumed to be constant as a first approximation and can then be extracted from the measured rate and asymmetry parameter for $\Sigma^{+}\to p\gamma$ up to a fourfold ambiguity. The real parts of the two remaining form-factors cannot be extracted from experiment at present, and so we estimate them using vector-meson-dominance models. Finally, in Sec. 
\[sum\] we combine all these results to present the predictions for the rates and spectra of the two modes $\Sigma^{+}\to p\mu^{+}\mu^{-},\,p e^+e^-$. Before concluding, we discuss the implications of our analysis for the possibility that new physics could be present in the recent measurement by HyperCP. Short-distance contributions\[sd\] ================================== The short-distance effective Hamiltonian responsible for $\Sigma^+\to p \ell^+\ell^-$ contains contributions originating from the $Z$-penguin, box, and electromagnetic-penguin diagrams. It is given by [@Shifman:1976de; @Buchalla:1995vs] $$\begin{aligned} {\cal H}_{\rm eff}^{} &=& \frac{G_F^{}}{\sqrt{2}} V_{ud}^* V_{us}^{}\, \bigl[ \bigl(z_{7V}^{}+\tau y_{7V}^{}\bigr) O_{7V}^{}+ \tau y_{7A}^{} O_{7A}^{}\bigr] + \frac{G_F^{}}{\sqrt{2}} \sum_j V_{jd}^* V_{js}^{}\, c^j_{7\gamma}O_{7\gamma} \,\,,\end{aligned}$$ where $V_{kl}^{}$ are the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix [@ckm], $z$, $y$, and $c$ are the Wilson coefficients, $\,\tau =- V_{td}^*V_{ts}^{}/\bigl(V_{ud}^*V_{us}^{}\bigr)$, and $$\begin{aligned} O_{7V}^{} &=& \bar d\gamma^\mu(1-\gamma_5^{})s\, \bar\ell^-\gamma_\mu^{}\ell^+ \,\,, \hspace{2em} O_{7A}^{} \,\,=\,\, \bar d\gamma^\mu(1-\gamma_5^{})s\, \bar\ell^-\gamma_\mu^{}\gamma_5^{}\ell^+ \,\,, \nonumber\\ O_{7\gamma}^{} &=& \frac{e}{16\pi^2}\, \bar d \sigma^{\mu\nu} F_{\mu\nu}^{} \bigl[ m_s^{} (1+\gamma_5^{}) + m_d^{} (1-\gamma_5^{})\bigr] s \,\,,\end{aligned}$$ with $F_{\mu\nu}^{}$ being the photon field-strength tensor. The contribution of $O_{7\gamma}^{}$ to $\Sigma^+\to p \ell^+\ell^-$ occurs via the photon converting to a lepton pair. The total short-distance contribution to the $\Sigma^+\to p\ell^+\ell^-$ amplitude is then given by $$\begin{aligned} && \hspace*{-4ex} {\cal M}(\Sigma^+\to p\ell^+\ell^-) \,\,=\,\, \bigl\langle p\ell^+\ell^-\bigr|{\cal H}_{\rm eff}^{}\bigl|\Sigma^+\bigr\rangle \\ &=& \frac{G_F^{}}{\sqrt{2}} \left\{ V_{ud}^*V_{us}^{} \bigl[ (z_{7V}^{} + \tau y_{7V}^{}) \langle p|\bar d\gamma^\mu(1-\gamma_5^{})s|\Sigma^+\rangle \bar\ell^-\gamma_\mu^{}\ell^+ + \tau y_{7A}^{} \langle p|\bar d\gamma^\mu(1-\gamma_5^{})s|\Sigma^+\rangle \bar\ell^-\gamma_\mu^{}\gamma_5^{}\ell^+ \bigr] \vphantom{\sum_i} \right. \nonumber\\ &&-\, \left. \sum_j V_{jd}^*V_{js}^{}\,\frac{i\alpha\,c^j_{7\gamma}}{2\pi q^2} \bigl[ (m_s^{} + m_d^{}) \langle p|\bar d \sigma^{\mu\nu}q_\nu^{} s|\Sigma^+\rangle + (m_s^{}-m_d^{})\langle p|\bar d\sigma^{\mu\nu}q_\nu^{}\gamma_5^{}s|\Sigma^+\rangle \bigr]\, \bar\ell^-\gamma_\mu^{}\ell^+ \right\} \,\,, \nonumber\end{aligned}$$ where $q=p_\Sigma^{}-p_p^{}$. To obtain the corresponding branching ratio, one needs to know the hadronic matrix elements. Employing the leading-order strong Lagrangian in chiral perturbation theory ($\chi$PT), given in Eq. (\[Ls1\]), we find $$\begin{aligned} \langle p|\bar d \gamma^\mu s|\Sigma^+\rangle \,\,=\,\, -\bar p\gamma^\mu \Sigma \,\,, \,\,&&\,\, \langle p|\bar d\gamma^\mu\gamma_5^{} s|\Sigma^+\rangle \,\,=\,\, (D-F)\, \bar p\gamma^\mu\gamma_5^{} \Sigma \,\,,\end{aligned}$$ where $D=0.80$ and $F=0.46$ from fitting to hyperon semileptonic decays, and using quark-model results [@Donoghue:1992dd] we obtain $$\begin{aligned} \langle p|\bar d\sigma^{\mu\nu}s|\Sigma^+\rangle \,\,=\,\, c_\sigma^{}\,\bar p\sigma^{\mu\nu}\Sigma \,\,, \,\,&&\,\, \langle p|\bar d\sigma^{\mu\nu}\gamma_5^{} s|\Sigma^+\rangle \,\,=\,\, c_\sigma^{}\, \bar p\sigma^{\mu\nu} \gamma_5^{} \Sigma \,\,,\end{aligned}$$ where $c_\sigma^{}=-1/3$. 
Furthermore, we adopt the CKM-matrix elements given in Ref. [@pdg], the typical Wilson coefficients obtained in the literature [@Shifman:1976de; @Buchalla:1995vs], namely $z_{7V}^{} = -0.046\alpha$, $y_{7V}^{} = 0.735\alpha$, $y_{7A}^{} = -0.700\alpha$ [@Buchalla:1995vs], and $c_{7\gamma}^j$ being dominated by $c^c_{7\gamma} =0.13$ [@Shifman:1976de], and the quark masses $m_d^{}=9\rm\,MeV$ and $m_s^{}=120\rm\,MeV$. The resulting branching ratio for $\Sigma^+\to p\mu^+\mu^-$ is about $10^{-12}$, which is way below the observed value. There are uncertainties in the hadronic matrix elements, the Wilson coefficients, and the CKM-matrix elements, but these uncertainties will not change this result by orders of magnitude. We therefore conclude that in the SM the short-distance contribution is too small to explain the HyperCP data on $\Sigma^+\to p\mu^+\mu^-$. Now, a large branching ratio for $\Sigma^+\to p \ell^+\ell^-$ may be related to the large observed branching ratio for $\Sigma^+\to p\gamma$, compared with their respective short-distance contributions. With only the short-distance contribution to $\Sigma^+\to p \gamma$ within the SM, the branching ratio is predicted to be much smaller than the experimental value [@He:1999ik]. However, beyond the SM it is possible to have an enhanced short-distance contribution to $\Sigma^+\to p \gamma$ [@He:1999ik] which would enhance the amplitude for $\Sigma^+\to p\mu^+\mu^-$. The origin of the enhancement may be from new interactions such as $W_L$-$W_R$ mixing in left-right symmetric models and left-right squark mixing in supersymmetric models [@He:1999ik]. These types of interactions have small effects on other related flavor-changing processes such as $K^0$-$\bar K^0$ mixing, but can have large effects on $\Sigma^+\to p\gamma$ and therefore also on $\Sigma^+\to p\ell^+\ell^-$. Thus the observed branching ratio for $\Sigma^+\to p\gamma$ can be reproduced even if one assumes that there is only the short-distance contribution. More likely, however, the enhancement is due to long-distance contributions within the SM. In the next section we present the most complete estimate possible at present for these long-distance contributions. Long-distance contributions\[ld\] ================================= In this section we deal with the contributions to $\,\Sigma^{+}\to p\ell^{+}\ell^{-}\,$ that are mediated by a photon. For a real intermediate photon there are two form factors that can be extracted from the weak radiative hyperon decay $\,B_{i}^{}\to B_{f}^{}\gamma\,$ and are usually parameterized by the effective Lagrangian $${\cal L} \,\,=\,\, \frac{eG_{F}}{2}\, \bar{B}_{f}\left(a+b\gamma_{5}\right)\sigma^{\mu\nu}B_{i}\, F_{\mu\nu} \,\,. \label{radff}$$ The two form factors, $a$ and $b$, are related to the width and decay distribution of the radiative decay by $$\begin{aligned} \Gamma(B_{i}\to B_{f}\gamma) &=& \frac{G_{F}^{2}e^{2}}{\pi}\left(|a|^{2}+|b|^{2}\right)\omega^{3} \,\,,\end{aligned}$$ $$\begin{aligned} \frac{d\Gamma}{d\cos\theta} \,\,\sim\,\, 1+\alpha\,\cos\theta \,\,, \hspace{2em} \alpha \,\,=\,\, \frac{2\,Re\,(ab^{*})}{|a|^{2}+|b|^{2}} \,\,,\end{aligned}$$ where $\omega$ is the photon energy, and $\theta$ is the angle between the spin of $B_i^{}$ and the three-momentum of $B_f^{}$. 
The measured values for $\,\Sigma^{+}\to p\gamma\,$ are [@pdg] $$\begin{aligned} \label{Spgdata} \Gamma(\Sigma^{+}\to p\gamma) \,\,=\,\, (10.1\pm 0.4)\times10^{-15}{\rm~MeV} \,\,, \hspace{2em} \alpha \,\,=\,\, -0.76 \pm 0.08 \,\,.\end{aligned}$$ When the photon is a virtual one, there are two additional form-factors, and the total amplitude can be parameterized as $$\begin{aligned} \label{M_BBg} {\cal M}(B_i\to B_f\gamma^*) &=& - e G_{F}^{}\, \bar{B}_f^{} \left[ i\sigma^{\mu\nu}q_\mu^{}(a+b\gamma_5^{}) +(q^2\gamma^\nu-q^\nu\!\!\not{\!q}) (c+d\gamma_5^{}) \right] B_i^{}\, \varepsilon_\nu^{*} \,\,,\end{aligned}$$ where $q$ is the photon four-momentum. We note that the $a$ and $c$ ($b$ and $d$) terms are parity conserving (violating). The corresponding amplitude for $\,B_{i}^{}\to B_{f}^{}\ell^{+}\ell^{-}\,$ is then $$\begin{aligned} {\cal M}(B_i\to B_f\ell^+\ell^-) &=& \frac{-i e^2 G_{F}^{}}{q^2}\, \bar{B}_{f}^{}\left(a+b\gamma_5^{}\right)\sigma_{\mu\nu}^{}q^{\mu}B_{i}^{}\, \bar\ell^{-}\gamma^\nu\ell^{+} \nonumber \\ && \vphantom{\int^|} -\,\, e^2 G_{F}^{}\, \bar{B}_{f}^{}\gamma_\mu^{}(c+d\gamma_{5}^{})B_{i}^{}\, \bar\ell^{-}\gamma^{\mu}\ell^{+} \,\,, \label{ffabcd}\end{aligned}$$ where now $\,q=p_{\ell^+}^{}+p_{\ell^-}^{}.\,$ In general $a$, $b$, $c$, and $d$ depend on $q^2$, and for $\,\Sigma^{+}\to p\gamma^*\,$ the first two are constrained at $\,q^2=0\,$ by the data in Eq. (\[Spgdata\]) as $$\begin{aligned} \label{Spgcons} |a(0)|^{2}+|b(0)|^{2} &=& (15.0\pm 0.3)^{2}{\rm ~MeV}^{2} \,\,, \nonumber \\ {\rm Re}\,\bigl(a(0)\,b^*(0)\bigr) &=& (-85.3\pm 9.6) {\rm ~MeV}^{2} \,\,.\end{aligned}$$ These form factors are related to the ones in Ref. [@Bergstrom:1987wr] by $$\begin{aligned} a \,\,=\,\, 2i b_1^{} \,\,, \hspace{2em} b \,\,=\,\, 2i b_2^{} \,\,, \hspace{2em} c \,\,=\,\, \frac{i a_1^{}}{q^2} \,\,, \hspace{2em} d \,\,=\,\, -\frac{i a_2^{}}{q^2} \,\,.\end{aligned}$$ As we will estimate later on, these form factors have fairly mild $q^2$-dependence. If they are taken to be constant, by integrating numerically over phase space we can determine the branching ratios of $\Sigma^{+}\to p\ell^{+}\ell^{-}$ to be, with $a$ and $b$ in MeV, \[rateres\] $$\begin{aligned} {\cal B}(\Sigma^{+}\to p \mu^{+}\mu^{-}) &=& \left[ 2.00 \left(|a|^{2}+|b|^{2}\right) -1.60 \left(|a|^{2}-|b|^{2}\right) \right] \times 10^{-10} \nonumber \\ &&+\,\, \left( 1.05\, |c|^{2}+ 18.2\, |d|^{2}\right)\times 10^{-6}\nonumber \\ &&+\,\, \left[ 0.29 {\rm~Re}\,(ac^{*}) - 16.1 {\rm~Re}\,(bd^{*})\right] \times 10^{-8} \,\,,\end{aligned}$$ $$\begin{aligned} {\cal B}(\Sigma^{+}\to p e^{+}e^{-}) &=& \left[ 4.22 \left(|a|^{2}+|b|^{2}\right) -0.21 \left(|a|^{2}-|b|^{2}\right) \right] \times 10^{-8} \nonumber \\ &&+\,\, \left( 5.38\, |c|^{2}+ 15.9\, |d|^{2}\right)\times 10^{-5}\nonumber \\ &&+\,\, \left[ 1.51 {\rm~Re}\,(ac^{*}) - 21.1 {\rm~Re}\,(bd^{*})\right] \times 10^{-7} \,\,.\end{aligned}$$ If the form factors have $q^2$-dependence, the expression is different, and the rate should be calculated with the formula which we give in Appendix \[diffrate\]. Imaginary parts of the form factors from unitarity -------------------------------------------------- The form factors which contribute to the weak radiative hyperon decays have been studied in chiral perturbation theory [@Neufeld:1992hb; @Jenkins:1992ab; @Bos:1996ig]. The imaginary parts of $a$ and $b$ for $\,\Sigma^{+}\to p\gamma$ have been determined from unitarity with different results in the literature. 
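Before turning to the unitarity calculation, note that the constraint in Eq. (\[Spgcons\]) follows directly from Eq. (\[Spgdata\]) and the width and asymmetry formulas above, with $\omega=(m_\Sigma^2-m_p^2)/(2m_\Sigma)$ the photon energy in the $\Sigma^+$ rest frame. A minimal numerical sketch (not part of the original analysis; PDG-like central values are assumed for the masses and couplings):

```python
import math

# Extract the q^2 = 0 constraint of Eq. (Spgcons) from Eq. (Spgdata), using
# Gamma = (G_F^2 e^2 / pi) (|a|^2 + |b|^2) omega^3 and
# alpha = 2 Re(a b*) / (|a|^2 + |b|^2).
GF       = 1.166e-11            # MeV^-2
alpha_em = 1.0 / 137.036
e2       = 4.0 * math.pi * alpha_em
m_Sig, m_p = 1189.37, 938.27    # MeV
Gamma    = 10.1e-15             # MeV, measured width
asym     = -0.76                # measured asymmetry parameter

omega = (m_Sig**2 - m_p**2) / (2.0 * m_Sig)        # ~ 225 MeV
a2b2  = math.pi * Gamma / (GF**2 * e2 * omega**3)  # |a(0)|^2 + |b(0)|^2
re_ab = 0.5 * asym * a2b2                          # Re a(0) b*(0)

print(f"|a(0)|^2 + |b(0)|^2 = ({math.sqrt(a2b2):.1f} MeV)^2")  # ~ (15.0 MeV)^2
print(f"Re a(0) b*(0)       = {re_ab:.1f} MeV^2")              # ~ -85, cf. -85.3 +- 9.6
```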
Neufeld [@Neufeld:1992hb] employed relativistic baryon $\chi$PT to find, for $\,q^2=0$, $$\begin{aligned} \label{neufeld} {\rm Im}\, a(0) \,\,=\,\, 2.60~{\rm MeV} \,\,, \hspace{2em} {\rm Im}\, b(0) \,\,=\,\, -1.46~{\rm MeV}\end{aligned}$$ in the notation of Eq. (\[radff\]), whereas Jenkins [*et al.*]{} [@Jenkins:1992ab] using the heavy-baryon formulation obtained $$\begin{aligned} \label{jenkins} {\rm Im}\, a(0) \,\,=\,\, 6.18~{\rm MeV} \,\,, \hspace{2em} {\rm Im}\, b(0) \,\,=\,\, -0.53~{\rm MeV} \,\,.\end{aligned}$$ Because of this disagreement, and since we also need the imaginary parts of the form factors $c$ and $d$, we repeat here the unitarity calculation employing both the relativistic and heavy baryon approaches. Our strategy to derive the imaginary parts of the four form-factors in Eq. (\[ffabcd\]) from unitarity is illustrated in Fig. \[fig\_cut\]. As the figure shows, these imaginary parts can be determined from the amplitudes for the weak nonleptonic decays $\,\Sigma^{+}\to p\pi^{0}\,$ and $\,\Sigma^{+}\to n\pi^{+}$ (the vertex indicated by a square in Fig. \[fig\_cut\]) as well as the reactions $\,N\pi\to N\gamma^*$ (the vertex indicated by a blob in Fig. \[fig\_cut\]). The weak decays have been measured [@pdg], and we express their amplitudes as[^1] $$\begin{aligned} \label{M_SNpi} {\cal M}(\Sigma^{+}\to N\pi) &=& i G_{F}^{}m_{\pi^{+}}^{2}\, \bar{N} \left(A_{N\pi} - B_{N\pi}\gamma_{5}^{}\right) \Sigma \,\,,\end{aligned}$$ where $$\begin{aligned} \label{ABnlhd} A_{n\pi^+} \,\,=\,\, 0.06 \,\,, \,\,\, && B_{n\pi^+} \,\,=\,\, 18.53 \,\,, \nonumber \\ A_{p\pi^0} \,\,=\,\, -1.43 \,\,, && B_{p\pi^0} \,\,=\,\, 11.74 \,\,.\end{aligned}$$ Following Refs. [@Neufeld:1992hb; @Jenkins:1992ab], we adopt the $\,N\pi\to p\gamma^*\,$ amplitudes derived in lowest-order $\chi$PT. ![Unitarity cut.\[fig\_cut\]](fig_cut.eps) We present the details of our unitarity calculation in Appendix \[imabcd\]. The results in the relativistic and heavy baryon approaches are given in Eqs. (\[imFF\_r\]) and (\[imFF\_hb\]), respectively. In Fig. \[fig\_imF\] we display the two sets of form factors for $\,0\le q^2\le(m_\Sigma^{}-m_N^{})^2$. We note that, although only the $\,\Sigma^+\to n\pi^+\,$ transition contributes to the heavy-baryon form-factors at leading order, the sizable difference between the ${\rm Im}\,a$, or ${\rm Im}\,c$, curves arises mainly from relativistic corrections, which reduce the heavy-baryon numbers by about 50%. On the other hand, the difference between the ${\rm Im}\,b$, or ${\rm Im}\,d$, curves is due not only to relativistic corrections, but also to $A_{n\pi^+}$ being much smaller than $A_{p\pi^0}$. ![Imaginary parts of the form factors in $\Sigma^+\to p\gamma^*$, obtained using heavy baryon $\chi$PT (solid lines) and relativistic baryon $\chi$PT (dashed lines). \[fig\_imF\]](fig_imF.eps) To compare with the numbers in Eqs. (\[neufeld\]) and (\[jenkins\]) calculated in earlier work, we find from the relativistic formulas in Eq. (\[imFF\_r\]) $$\begin{aligned} \label{imab_r} {\rm Im}\, a(0) \,\,=\,\, 2.84~{\rm MeV} \,\,, \hspace{2em} {\rm Im}\, b(0) \,\,=\,\, -1.83~{\rm MeV} \,\,,\end{aligned}$$ and from the heavy-baryon results in Eq. (\[imFF\_hb\]) $$\begin{aligned} \label{imab_hb} {\rm Im}\, a(0) \,\,=\,\, 6.84~{\rm MeV} \,\,, \hspace{2em} {\rm Im}\, b(0) \,\,=\,\, -0.54~{\rm MeV} \,\,.\end{aligned}$$ Thus our relativistic results are close to those in Eq. (\[neufeld\]), from Ref. [@Neufeld:1992hb], and our heavy-baryon numbers to those in Eq. (\[jenkins\]), from Ref. 
[@Jenkins:1992ab].[^2] These two sets of numbers are different for the reasons mentioned in the preceding paragraph. Real parts of the form factors ------------------------------ The real parts of the form factors cannot be completely predicted at present from experimental input alone. For ${\rm Re}\,a(q^{2})$ and ${\rm Re}\,b(q^{2})$, the values at $\,q^{2}=0\,$ can be extracted from Eq. (\[Spgcons\]) after using Eq. (\[imab\_r\]) or (\[imab\_hb\]) for the imaginary parts. Thus the relativistic numbers in Eq. (\[imab\_r\]) lead to the four sets of solutions $$\begin{aligned} \label{reab_r} {\rm Re}\, a(0) \,\,=\,\, \pm 13.3{\rm~MeV} \,\,, \,\,&&\,\, {\rm Re}\, b(0) \,\,=\,\, \mp 6.0{\rm~MeV} \,\,, \nonumber \\ {\rm Re}\, a(0) \,\,=\,\, \pm 6.0{\rm~MeV} \,\,, \,\,&&\,\, {\rm Re}\, b(0) \,\,=\,\, \mp 13.3{\rm~MeV} \,\,,\end{aligned}$$ while the heavy-baryon results in Eq. (\[imab\_hb\]) imply $$\begin{aligned} \label{reab_hb} {\rm Re}\, a(0) \,\,=\,\, \pm 11.1{\rm~MeV} \,\,, \,\,&&\,\, {\rm Re}\, b(0) \,\,=\,\, \mp 7.3{\rm~MeV} \,\,, \nonumber \\ {\rm Re}\, a(0) \,\,=\,\, \pm 7.3{\rm~MeV} \,\,, \,\,&&\,\, {\rm Re}\, b(0) \,\,=\,\, \mp 11.1{\rm~MeV} \,\,.\end{aligned}$$ Since these numbers still cannot be predicted reliably within the framework of $\chi$PT [@Neufeld:1992hb; @Jenkins:1992ab], we will assume that $$\begin{aligned} \label{reab} {\rm Re}\, a(q^2) \,\,=\,\, {\rm Re}\, a(0) \,\,, \,\,&&\,\, {\rm Re}\, b(q^2) \,\,=\,\, {\rm Re}\, b(0) \,\,,\end{aligned}$$ where the $\,q^2=0\,$ values are those in Eqs. (\[reab\_r\]) and (\[reab\_hb\]) in the respective approaches. This assumption is also reasonable in view of the fairly mild $q^2$-dependence of the imaginary parts seen in Fig. \[fig\_imF\], and of the real parts of $c$ and $d$ below. In predicting the $\Sigma^+\to p\ell^+\ell^-$ rates in the following section, we will use the 8 sets of possible solutions in Eqs. (\[reab\_r\]) and (\[reab\_hb\]). The real parts of $c$ and $d$ cannot be extracted from experiment at present. Our interest here, however, is in predicting the SM contribution, and therefore we need to estimate them. To do so, we employ a vector-meson-dominance assumption, presenting the details in Appendix \[recd\]. The results for ${\rm Re}\,c(q^2)$ and ${\rm Re}\,d(q^2)$ are given in Eqs. (\[rec\]) and (\[red\]), respectively. In Fig. \[fig\_reF\] we display the two form factors for $\,0\le q^2\le(m_\Sigma^{}-m_N^{})^2$. We can see from Figs. \[fig\_imF\] and \[fig\_reF\] that $c$ is dominated by its imaginary part, but that $d$ is mostly real. ![Real parts of $c$ and $d$.\[fig\_reF\]](fig_reF.eps) Results and conclusions\[sum\] ============================== We can now evaluate the rates and spectra of $\Sigma^+\to p\ell^+\ell^-$ resulting from the various standard-model contributions. Since the short-distance contributions discussed in Sec. \[sd\] are very small, we shall neglect them. Consequently, the rates are determined by the various form factors in $\Sigma^+\to p\gamma^*$ calculated in the preceding section and applied in Eq. (\[diffrateform\]). In Table \[rates\], we have collected the branching ratios of $\Sigma^+\to p\mu^+\mu^-$ and $\Sigma^+\to p e^+e^-$ corresponding to the 8 sets of solutions in Eqs. (\[reab\_r\]) and (\[reab\_hb\]), under the assumption of Eq. (\[reab\]) for ${\rm Re}\,a$ and ${\rm Re}\,b$. The real parts of $c$ and $d$ in Eqs. (\[rec\]) and (\[red\]) are used in all the unbracketed branching ratios. For the imaginary parts of the form factors, the expressions in Eq. (\[imFF\_r\]) \[Eq. 
(\[imFF\_hb\])\] contribute to the unbracketed branching ratios in the upper (lower) half of this table. Within each pair of square brackets, the first number is the branching ratio obtained without contributions from both $c$ and $d$, whereas the second number is the branching ratio calculated with only the real parts of all the form factors.

  ${\rm Re}\,a$ (MeV)   ${\rm Re}\,b$ (MeV)   $10^8\,{\cal B}\bigl(\Sigma^+\to p\mu^+\mu^-\bigr)$   $10^6\,{\cal B}\bigl(\Sigma^+\to p e^+e^-\bigr)$
  --------------------- --------------------- ----------------------------------------------------- --------------------------------------------------
  13.3                  $-$6.0                1.6 \[2.2,1.3\]                                         9.1 \[9.2,8.6\]
  $-$13.3               6.0                   3.4 \[2.2,3.1\]                                         9.4 \[9.2,8.8\]
  6.0                   $-$13.3               5.1 \[6.7,4.7\]                                         9.6 \[9.8,9.0\]
  $-$6.0                13.3                  9.0 \[6.7,8.6\]                                         10.1 \[9.8,9.5\]
  11.1                  $-$7.3                2.3 \[2.9,1.5\]                                         9.3 \[9.3,7.2\]
  $-$11.1               7.3                   4.5 \[2.9,3.7\]                                         9.6 \[9.3,7.5\]
  7.3                   $-$11.1               4.0 \[5.1,3.2\]                                         9.5 \[9.6,7.4\]
  $-$7.3                11.1                  7.3 \[5.1,6.4\]                                         10.0 \[9.6,7.8\]
  --------------------- --------------------- ----------------------------------------------------- --------------------------------------------------

  : \[rates\]Branching ratios of $\Sigma^+\to p\mu^+\mu^-, p e^+e^-$ in the standard model. The unbracketed branching ratios receive contributions from all the form factors, with the expressions in Eq. (\[imFF\_r\]) \[Eq. (\[imFF\_hb\])\] for the imaginary parts contributing to the numbers in the first (last) four rows. Within each pair of square brackets, the first number has been obtained with $\,c=d=0$, and the second with only the real parts of all the form factors.

In Fig. \[fig\_BRmu\] we show the invariant-mass distributions of the $\mu^+\mu^-$ pair, with $M_{\mu\mu}^{}=\sqrt{q^2}$, that correspond to the smallest and largest rates of $\Sigma^+\to p\mu^+\mu^-$ listed in Table \[rates\] for both the relativistic baryon \[(a) and (b)\] and heavy baryon \[(c) and (d)\] cases. For $\Sigma^+\to p e^+e^-$, the mass distributions of the $e^+e^-$ pair, two of which are displayed in Fig. \[fig\_BRee\], differ very little from each other and are strongly peaked at low $M_{ee}^{}=\sqrt{q^2}$. Also shown in the figures are the distributions obtained with $c=d=0$ (dashed curves), as well as those without contributions from the imaginary parts of all the form factors (dotted curves).

![Invariant-mass distributions of the lepton pair in $\Sigma^+\to p\mu^+\mu^-$ corresponding to the smallest and largest branching ratios for the (a,b) relativistic and (c,d) heavy baryon cases in Table \[rates\]. In all distribution figures, each solid curve receives contributions from all the form factors, each dashed curve has been obtained with $c=d=0$, and each dotted curve involves none of the imaginary parts of the form factors. \[fig\_BRmu\]](fig_BRmu.eps)

![Low-mass portion of the invariant-mass distributions of the lepton pair in $\Sigma^+\to p e^+e^-$ corresponding to two of the branching ratios in Table \[rates\], for the (a) relativistic and (b) heavy baryon cases. \[fig\_BRee\]](fig_BRee.eps)

We can see from Table \[rates\], Fig. \[fig\_BRmu\], and Fig. \[fig\_BRee\] that the effect of the $c$ and $d$ contributions on the total rates can be up to nearly 40% in $\Sigma^+\to p\mu^+\mu^-$, but it is much smaller in $\Sigma^+\to p e^+e^-$. Furthermore, the contributions of the imaginary parts of the form factors can be as large as 35% to the $p\mu^+\mu^-$ rate and roughly 20% to the $pe^+e^-$ rate. This implies that a careful analysis of experimental results, especially in the case of $\Sigma^+\to p\mu^+\mu^-$, should take into account the imaginary parts of the form factors.
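As a rough consistency check on the $c=d=0$ entries of Table \[rates\], one can insert the $q^2=0$ form-factor values, Eq. (\[imab\_r\]) for the imaginary parts and the first solution of Eq. (\[reab\_r\]) for the real parts, into the constant-form-factor expressions of Eq. (\[rateres\]). The table itself was obtained with the $q^2$-dependent formula of Appendix \[diffrate\], so only approximate agreement is expected; the sketch below (not from the source) nevertheless reproduces the quoted entries at the displayed precision.

```python
# Constant-form-factor evaluation of Eq. (rateres) with c = d = 0.
# Form factors in MeV: Re parts from the first solution of Eq. (reab_r),
# Im parts from the relativistic q^2 = 0 values of Eq. (imab_r).
a = complex(13.3, 2.84)
b = complex(-6.0, -1.83)

s_ab = abs(a)**2 + abs(b)**2          # |a|^2 + |b|^2, ~ (15 MeV)^2
d_ab = abs(a)**2 - abs(b)**2          # |a|^2 - |b|^2

BR_mumu = (2.00 * s_ab - 1.60 * d_ab) * 1e-10
BR_ee   = (4.22 * s_ab - 0.21 * d_ab) * 1e-8

print(f"B(Sigma+ -> p mu+ mu-) ~ {BR_mumu:.1e}")   # ~ 2.2e-08, cf. Table [rates]
print(f"B(Sigma+ -> p e+ e-)   ~ {BR_ee:.1e}")     # ~ 9.2e-06, cf. Table [rates]
```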
For $\Sigma^+\to p\mu^+\mu^-$, HyperCP measured the branching ratio to be $\bigl(8.6_{-5.4}^{+6.6}\pm5.5\bigr)\times10^{-8}$ [@Park:2005ek]. It is evident that all the predictions in Table \[rates\] for the $p\mu^+\mu^-$ mode corresponding to the different sets of form factors fall within the experimental range. For $\Sigma^+\to p e^+e^-$, the branching ratio can be inferred from the experimental results given in Ref. [@ang], which reported the width ratio $\Gamma(\Sigma^+\to p e^+e^-)/\Gamma(\Sigma^+\to p\pi^0)=(1.5\pm0.9)\times10^{-5}$ and interpreted the observed events as proceeding from $\Sigma^+\to p\gamma^*$, based on the very low invariant-masses of the $e^+e^-$ pair.[^3] This number, in conjunction with the current data on $\Sigma^+\to p\pi^0$ [@pdg], translates into ${\cal B}(\Sigma^+\to p e^+e^-)=(7.7\pm4.6)\times10^{-6}$. Clearly, the results for the $p e^+e^-$ mode in Table \[rates\] are well within the experimentally allowed range. Based on the numbers in Table \[rates\], we may then conclude that within the standard model $$\begin{aligned} \label{results} \begin{array}{c} \displaystyle 1.6\times10^{-8} \,\,\le\,\, {\cal B}\bigl(\Sigma^+\to p\mu^+\mu^-\bigr) \,\,\le\,\, 9.0\times10^{-8} \,\,, \vspace{2ex} \\ \displaystyle 9.1\times10^{-6} \,\,\le\,\, {\cal B}\bigl(\Sigma^+\to p e^+e^-\bigr) \,\,\le\,\, 10.1\times10^{-6} \,\,. \end{array}\end{aligned}$$ The agreement above between the predicted and observed rates of $\Sigma^+\to p\ell^+\ell^-$ indicates that these decays are dominated by long-distance contributions. However, the predicted range for ${\cal B}(\Sigma^+\to p\mu^+\mu^-)$ is sufficiently wide that we cannot rule out the possibility of a new-physics contribution of the type suggested by HyperCP [@Park:2005ek]. Motivated by the narrow distribution of dimuon masses of the events they observed, they proposed that the decay could proceed via a new intermediate particle of mass $\sim$214MeV, with a branching ratio of $\bigl(3.1_{-1.9}^{+2.4}\pm1.5\bigr)\times10^{-8}$ [@Park:2005ek]. For this hypothesis to be realized, however, the new physics would have to dominate the decay. It will be interesting to see if this hypothesis will be confirmed by future measurements. Finally, we observe that the smaller numbers ${\cal B}(\Sigma^+\to p\mu^+\mu^-)\sim2\times10^{-8}$ in Table \[rates\] correspond to the mass distributions peaking at lower masses, $M_{\mu\mu}^{}\sim220\rm\,MeV$, in Fig. \[fig\_BRmu\]. It is perhaps not coincidental that these numbers are similar to the branching ratio and new-particle mass, respectively, in the HyperCP hypothesis above. This may be another indication that it is not necessary to invoke new physics to explain the HyperCP results. We thank HyangKyu Park for conversations. The work of X.G.H. was supported in part by the National Science Council under NSC grants. The work of G.V. was supported in part by DOE under contract number DE-FG02-01ER41155. Differential rate of $\bm{\Sigma^+\to p\ell^+\ell^-}$\[diffrate\] ================================================================= If the form factors have $q^2$-dependence, before integrating over phase space to obtain the branching ratio we should use $$\begin{aligned} \label{diffrateform} && \hspace*{-3em} \frac{d\Gamma(\Sigma^+\to p\ell^+\ell^-)}{d q^2\, dt} \,\,=\,\, \frac{\alpha^2 G_F^2}{4 \pi\,m^3_\Sigma} \nonumber\\ &\times& \left\{ \bigl[(2m_l^2 + q^2)((m_p^{}-m_\Sigma^{})^2-q^2)(m_\Sigma^{} + m_p^{})^2 + 2 q^2\, f(m_p^{}, m_\Sigma^{}, m_l, q^2,t)\bigr] \frac{|a|^2}{q^4} \right. 
\nonumber\\ &&+\,\, \bigl[(2m_l^2 + q^2)((m_p^{}+m_\Sigma^{})^2-q^2)(m_\Sigma^{} - m_p^{})^2 + 2 q^2\, f(m_p^{}, m_\Sigma^{}, m_l, q^2,t) \bigr] \frac{|b|^2}{q^4} \nonumber\\ &&+\,\, \bigl[(2m_l^2+q^2)((m_p^{}-m_\Sigma^{})^2-q^2)-2f(m_p^{},m_\Sigma^{},m_l,q^2,t)\bigr]\,|c|^2 \nonumber\\ &&+\,\, \bigl[(2m_l^2+q^2)((m_p^{}+m_\Sigma^{})^2-q^2)-2f(m_p^{},m_\Sigma^{},m_l,q^2,t)\bigr]\,|d|^2 \nonumber\\ &&+\,\, 2(m_\Sigma^{}+m_p^{})(2m_l^2+q^2)\bigl[(m_p^{}-m_\Sigma^{})^2-q^2)\bigr]\, \frac{{\rm Re}\,({a c^*})}{q^2} \nonumber\\ &&- \left. 2(m_\Sigma^{}-m_p^{}) (2m_l^2 + q^2) \bigl[(m_p^{}+ m_\Sigma^{})^2-q^2\bigr]\, \frac{{\rm Re}\,(b d^*)}{q^2}\right \} \,\,,\end{aligned}$$ where $t=(p_\Sigma^{}-p_{\ell^-}^{})^2$ and $$\begin{aligned} f(m_p^{},m_\Sigma^{}, m_l,q^2,t) = m_l^4+ (m^2_p+m^2_\Sigma -q^2- 2t) m^2_l + m^2_p m^2_\Sigma - (m^2_p + m^2_\Sigma) t + (q^2+t) t\;,\nonumber\end{aligned}$$ with the integration intervals given by $$\begin{aligned} \begin{array}{c} \displaystyle t_{\rm max, min}^{} \,\,=\,\, \mbox{$\frac{1}{2}$} \left[ m^2_\Sigma +m^2_p + 2 m^2_l -q^2 \pm \sqrt{1-\frac{4m^2_l}{q^2}} \sqrt{(m^2_\Sigma-m^2_p-q^2)^2 -4m^2_p q^2} \right] \,\,, \vspace{2ex} \\ \displaystyle q^2_{\rm min} \,\,=\,\, 4 m^2_l \,\,, \hspace{2em} q^2_{\rm max} \,\,=\,\, (m_\Sigma^{} - m_p^{})^2 \,\,. \end{array}\end{aligned}$$ It is worth mentioning that, since the form factors belong to the $\Sigma^+\to p\gamma^*$ amplitude, they do not depend on $t$. Imaginary parts of form factors in $\bm{\chi}$PT\[imabcd\] ========================================================== The chiral Lagrangian for the interactions of the lowest-lying mesons and baryons is written down in terms of the lightest meson-octet and baryon-octet fields, which are collected into $3\times3$ matrices $\varphi$ and $B$, respectively [@Bijnens:1985kj]. The mesons enter through the exponential $\,\Sigma=\xi^2=\exp({\rm i}\varphi/f),\,$ where $\,f=f_\pi^{}=92.4\rm\,MeV\,$ is the pion decay constant. In the relativistic baryon $\chi$PT, the lowest-order strong Lagrangian is given by [@Bijnens:1985kj] $$\begin{aligned} \label{Ls1} {\cal L}_{\rm s}^{} &=& \bigl\langle \bar{B}\, i\gamma^\mu \bigl(\partial_\mu^{}B+\bigl[{\cal V}_\mu^{},B\bigr]\bigr) \bigr\rangle + m_0^{}\, \langle \bar{B}B \rangle + D\, \bigl\langle \bar{B}\gamma^\mu\gamma_5^{}\, \bigl\{ {\cal A}_\mu, B \bigr\} \bigr\rangle + F\, \bigl\langle \bar{B}\gamma^\mu\gamma_5^{}\, \bigl[ {\cal A}_\mu, B \bigr] \bigr\rangle \,\,, \hspace*{2em}\end{aligned}$$ where $\,\langle\cdots\rangle\equiv{\rm Tr}(\cdots)\,$ in flavor space, $m_0^{}$ is the baryon mass in the chiral limit, $\,{\cal V}^\mu=\frac{1}{2}\bigl(\xi\,\partial^\mu\xi^\dagger+\xi^\dagger\,\partial^\mu\xi\bigr) + \frac{i}{2}\,e A^\mu\bigl(\xi^\dagger Q\xi+\xi Q\xi^\dagger\bigr),\,$ and $\,{\cal A}^\mu=\frac{i}{2}\bigl(\xi\,\partial^\mu\xi^\dagger-\xi^\dagger\,\partial^\mu\xi\bigr) + \frac{1}{2}\,e A^\mu\bigl(\xi^\dagger Q\xi-\xi Q\xi^\dagger\bigr),\,$ with $A^\mu$ being the photon field and $\,Q={\rm diag}(2,-1,-1)/3\,$ the quark-charge matrix.[^4] The parameters $D$ and $F$ will enter our results below only through the combination $\,D+F=1.26.\,$ From ${\cal L}_{\rm s}^{}$ we derive two sets of diagrams, shown in Fig. \[NpiNg\], which represent the $\,N\pi\to p\gamma^*\,$ reactions involved in the unitarity calculation of the imaginary parts of the form factors $a$, $b$, $c$, and $d$. It then follows from Fig. 
\[fig\_cut\] that the first set of diagrams is associated with the weak transition $\,\Sigma^{+}\to n\pi^{+}$, and the second with $\,\Sigma^{+}\to p\pi^{0}$. Consequently, we express our results as $$\begin{aligned} \label{imFF_r} {\rm Im}\,{\cal F} &=& \frac{(D+F)m_{\pi^{+}}^{2}}{8\sqrt{2}\,\pi f_{\pi}} \left(\tilde{\cal F}_+^{} + \frac{\tilde{\cal F}_0^{}}{\sqrt{2}}\right) \hspace{2em} \mbox{for $\,\,{\cal F}=a,b,c,d$} \,\,,\end{aligned}$$ where $\tilde{\cal F}_+^{}$ $\bigl(\tilde{\cal F}_0^{}\bigr)$ comes from the $n\pi^+$ $(p\pi^0)$ contribution, and write them in terms of the weak amplitudes $\,A_+^{}=A_{n\pi^+}$, $\,A_0^{}=A_{p\pi^0}$, $B_+^{}=B_{n\pi^+}$, and $\,B_0^{}=B_{p\pi^0}$ given in Eq. (\[ABnlhd\]). ![Leading-order diagrams for $\,N\pi\to p\gamma^*$ reactions. \[NpiNg\]](fig_NpiNg.eps) Working in the $\Sigma^+$ rest-frame, which implies that the energies and momenta of the photon and proton in the final state and of the pion in the intermediate are fixed by kinematics, we define $$\begin{aligned} z_{+}^{} \,\,=\,\, \left(\frac{2 E_{\pi}^{} E_{\gamma}^{}-2 |\bm{p}_{\pi}^{}|\,|\bm{p}_{\gamma}^{}|-q^{2}} {2 E_{\pi}^{} E_{\gamma}^{}+2 |\bm{p}_{\pi}^{}|\,|\bm{p}_{\gamma}^{}|-q^{2}}\right) \,\,, \,\,&&\,\, z_{0}^{} \,\,=\,\, \left(\frac{2 E_{\pi}^{} E_{p}^{}-2 |\bm{p}_{\pi}^{}|\,|\bm{p}_{p}^{}|-m_{\pi}^{2}} {2 E_{\pi}^{} E_{p}^{}+2 |\bm{p}_{\pi}^{}|\,|\bm{p}_{p}^{}|-m_{\pi}^{2}}\right) \,\,.\end{aligned}$$ The expression for $\tilde{\cal F}$ from each set of diagrams can then be written as $$\begin{aligned} \tilde{a}_{+,0}^{} &=& \frac{B_{+,0}^{}\, m_N^{}}{2m_\Sigma^{2}\, |\bm{p}_{\gamma}^{}|}\, \frac{\left[2 |\bm{p}_{\pi}^{}|\, |\bm{p}_{\gamma}^{}|\, f_{+,0}^{(a)}+\ln(z_{+,0}^{})\, g_{+,0}^{(a)}\right]} {\left[(m_\Sigma^{}-m_N^{})^{2}-q^{2}\right] \left[(m_\Sigma^{}+m_N^{})^{2}-q^{2}\right]^{2} } \,\,, \nonumber \\ \tilde{b}_{+,0}^{} &=& \frac{-A_{+,0}^{}\, m_N^{}}{2m_\Sigma^{2}\, |\bm{p}_{\gamma}^{}|}\, \frac{\left[2 |\bm{p}_{\pi}^{}|\, |\bm{p}_{\gamma}^{}|\, f_{+,0}^{(b)}+\ln(z_{+,0}^{})\, g_{+,0}^{(b)}\right]} {\left[(m_\Sigma^{}-m_N^{})^{2}-q^{2}\right]^{2} \left[(m_\Sigma^{}+m_N^{})^{2}-q^{2}\right] } \,\,, \nonumber \\ \tilde{c}_{+,0}^{} &=& \frac{B_{+,0}^{}\, m_N^{}}{2m_\Sigma^{2}\, |\bm{p}_{\gamma}^{}|\,(m_N^{}-m_\Sigma^{})}\, \frac{\left[2 |\bm{p}_{\pi}^{}|\, |\bm{p}_{\gamma}^{}|\, f_{+,0}^{(c)}+\ln(z_{+,0}^{})\, g_{+,0}^{(c)}\right]} {\left[(m_\Sigma^{}-m_N^{})^{2}-q^{2}\right] \left[(m_\Sigma^{}+m_N^{})^{2}-q^{2}\right]^{2} } \,\,, \nonumber \\ \tilde{d}_{+,0}^{} &=& \frac{-A_{+,0}^{}\, m_N^{}}{2m_\Sigma^{2}\, |\bm{p}_{\gamma}^{}|\,(m_\Sigma^{}+m_N^{})}\, \frac{\left[2 |\bm{p}_{\pi}^{}|\, |\bm{p}_{\gamma}^{}|\, f_{+,0}^{(d)}+\ln(z_{+,0}^{})\, g_{+,0}^{(d)}\right]} {\left[(m_\Sigma^{}-m_N^{})^{2}-q^{2}\right]^{2} \left[(m_\Sigma^{}+m_N^{})^{2}-q^{2}\right]} \,\,,\end{aligned}$$ where $$\begin{aligned} f_{+}^{(a)} &=& m_N^{} m_\Sigma^5+\left(q^2+2 m_{\pi }^2+m_N^2\right) m_\Sigma^4 -m_N^{} \left(3 q^2-3 m_{\pi }^2+2 m_N^2\right) m_\Sigma^3 \nonumber \\ &&-\, \left(q^4-5m_{\pi }^2 q^2+2 m_N^4+\left(q^2+m_{\pi }^2\right) m_N^2\right) m_\Sigma^2 \nonumber \\ &&+\, m_N^{} \left(m_N^2-q^2\right) \left(2 q^2-3 m_{\pi }^2+m_N^2\right)m_\Sigma^{} + \left(q^2-m_N^2\right)^2 \left(m_N^2-m_{\pi }^2\right) \,\,, \nonumber \\ g_{+}^{(a)} &=& m_\Sigma^{} \left(m_N^{} q^6+\left(m_N^{} \left(2 m_N^{}-m_\Sigma^{}\right) \left(m_N^{}+m_\Sigma^{}\right)-m_{\pi }^2 \left(3 m_N^{}+m_\Sigma^{}\right)\right) q^4\right. \nonumber \\ &&+\, \left. 
m_{\pi }^2 \left(3 m_{\pi }^2-4 m_N^2\right) \left(m_N^{}+m_\Sigma^{}\right) q^2+m_{\pi }^2 \left(m_N^{}-m_\Sigma^{}\right)^2 \left(m_N^{}+m_\Sigma^{}\right)^3\right) \,\,, \nonumber \\ f_{0}^{(a)} &=& 3 m_N^{} m_\Sigma^5-\left(q^2-2 m_{\pi }^2-3 m_N^2\right)m_\Sigma^4 -m_N^{} \left(4 m_N^2-3 \left(q^2+m_{\pi }^2\right)\right)m_\Sigma^3 \nonumber \\ &&+\, \left(q^4+5 m_{\pi }^2 q^2-4 m_N^4-\left(q^2+m_{\pi }^2\right) m_N^2\right) m_\Sigma^2 \nonumber \\ &&+\, m_N^{} \left(m_N^2-q^2\right) \left(2 q^2-3 m_{\pi }^2+m_N^2\right) m_\Sigma^{} + \left(q^2-m_N^2\right)^2 \left(m_N^2-m_{\pi }^2\right) \,\,, \nonumber \\ g_{0}^{(a)} &=& m_\Sigma^{} \left(-2 m_{\pi }^2 m_\Sigma^{} q^4-\left(m_N^{}+m_\Sigma^{}\right) \left(3 m_{\pi }^4-2 \left(3 m_N^2-2 m_\Sigma^{} m_N^{}+m_\Sigma^2\right) m_{\pi }^2 \right.\right. \nonumber \\ &&+\, \left.\left. m_N^{} \left(m_N^{}-m_{\Sigma }\right)^2 \left(3 m_N^{}+m_\Sigma^{}\right)\right) q^2+m_N^{} \left(m_N^{}-m_\Sigma^{}\right)^2 m_\Sigma^{} \left(m_N^{}+m_\Sigma^{}\right)^3\right) \,\,, \hspace*{2em}\end{aligned}$$ $$\begin{aligned} f_{+}^{(b)} &=& m_N^{} m_\Sigma^5-\left(q^2+2 m_{\pi }^2+m_N^2\right) m_\Sigma^4 -m_N^{} \left(3 q^2-3 m_{\pi }^2+2 m_N^2\right) m_\Sigma^3 \nonumber \\ &&+\, \left(q^4-5 m_{\pi }^2 q^2+2 m_N^4+\left(q^2+m_{\pi }^2\right) m_N^2\right) m_\Sigma^2 \nonumber \\ &&+\, m_N^{} \left(m_N^2-q^2\right) \left(2 q^2-3 m_{\pi }^2+m_N^2\right) m_\Sigma^{} + \left(q^2-m_N^2\right)^2 \left(m_{\pi }^2-m_N^2\right) \,\,, \nonumber \\ g_{+}^{(b)} &=& - m_\Sigma^{} \left(-m_N^{} q^6+\left(\left(3 m_N^{}-m_\Sigma^{}\right) m_{\pi }^2+m_N^{} \left(-2 m_N^2+m_\Sigma^{} m_N^{}+m_{\Sigma }^2\right)\right) q^4\right.\nonumber\\ &&-\, \left. m_{\pi }^2 \left(3 m_{\pi }^2-4 m_N^2\right) \left(m_N^{}-m_\Sigma^{}\right) q^2-m_{\pi }^2 \left(m_N^{}-m_\Sigma^{}\right)^3 \left(m_N^{}+m_{\Sigma }\right)^2\right) \,\,, \nonumber \\ f_{0}^{(b)} &=& 3 m_N^{} m_\Sigma^5+\left(q^2-2 m_{\pi }^2-3 m_N^2\right) m_\Sigma^4 +m_N^{} \left(3 \left(q^2+m_{\pi }^2\right)-4 m_N^2\right) m_\Sigma^3 \nonumber \\ &&-\, \left(q^4+5 m_{\pi }^2 q^2-4 m_N^4-\left(q^2+m_{\pi }^2\right) m_N^2\right) m_\Sigma^2 \nonumber \\ &&+\, m_N^{} \left(m_N^2-q^2\right) \left(2 q^2-3 m_{\pi }^2+m_N^2\right) m_\Sigma^{} + \left(q^2-m_N^2\right)^2 \left(m_{\pi }^2-m_N^2\right) \,\,, \nonumber \\ g_{0}^{(b)} &=& - m_\Sigma^{} \left(-2 m_{\pi }^2 m_\Sigma^{} q^4+\left(m_N^{}-m_\Sigma^{}\right) \left(3 m_{\pi }^4-2 \left(3 m_N^2+2 m_\Sigma^{} m_N^{}+m_{\Sigma }^2\right) m_{\pi }^2\right.\right.\nonumber\\ &&+\, \left.\left. m_N^{} \left(3 m_N^{}-m_{\Sigma }\right) \left(m_N^{}+m_\Sigma^{}\right)^2\right) q^2+m_N^{} \left(m_N^{}-m_\Sigma^{}\right)^3 m_\Sigma^{} \left(m_N^{}+m_\Sigma^{}\right)^2\right) \,\,, \hspace*{2em}\end{aligned}$$ $$\begin{aligned} f_{+}^{(c)} &=& m_{\pi }^2 \left(8 m_\Sigma^4+5 m_N^{} m_\Sigma^3 -\left(3 q^2+m_N^2\right) m_\Sigma^2+3 m_N^{} \left(m_N^2-q^2\right) m_\Sigma^{} +\left(q^2-m_N^2\right)^2\right) \nonumber \\ &&-\, \left(m_N^{}-m_\Sigma^{}\right) \left(-m_N^{} m_\Sigma^4 +\left(q^2-2 m_N^2\right) m_\Sigma^3-4 q^2 m_N^{} m_\Sigma^2 \vphantom{|_|^|} \right. \nonumber \\ &&-\, \left. \left(q^4+m_N^2 q^2-2 m_N^4\right) m_\Sigma^{} \right. + \left. 
m_N^{} \left(q^2-m_N^2\right)^2\right) \,\,, \nonumber \\ g_{+}^{(c)} &=& - \left(m_N^{}-m_\Sigma^{}\right) m_\Sigma^{} \left(m_N^{} \left(2 m_N^{}+m_\Sigma^{}\right) q^4+\left(m_{\pi }^4-2 \left(3 m_N^2+2 m_\Sigma^{} m_N^{}+m_\Sigma^2\right) m_{\pi }^2\right.\right.\nonumber\\ &&+\, \left.\left.m_N^{} \left(m_N^{}-m_\Sigma^{}\right) \left(m_N^{}+m_{\Sigma }\right)^2\right) q^2+2 m_{\pi }^2 \left(m_N^{}+m_{\Sigma }\right)^2 \left(m_{\pi }^2+m_\Sigma^{} \left(m_{\Sigma }-m_N^{}\right)\right)\right) \,\,, \nonumber \\ f_{0}^{(c)} &=& m_{\pi }^2 \left(8 m_\Sigma^4+5 m_N^{} m_\Sigma^3 -\left(3 q^2+m_N^2\right) m_\Sigma^2+3 m_N^{} \left(m_N^2-q^2\right) m_\Sigma^{} +\left(q^2-m_N^2\right)^2\right) \nonumber \\ &&-\, \left(m_N^{}-m_\Sigma^{}\right)^2 \left(2 m_\Sigma^4-m_N^{} m_\Sigma^3 -\left(3 q^2+m_N^2\right) m_\Sigma^2+3 m_N^{} \left(m_N^2-q^2\right) m_\Sigma^{} \vphantom{|_|^|} \right. \nonumber \\ &&+\, \left. \left(q^2-m_N^2\right)^2\right) \,\,, \nonumber \\ g_{0}^{(c)} &=& \left(m_N^{}-m_\Sigma^{}\right) m_{\Sigma } \left(\left(m_{\pi }^4-\left(2 m_N^2-4 m_\Sigma^{} m_N^{}-2 m_\Sigma^2\right) m_{\pi }^2 + m_N^{} \left(m_N^{}-m_\Sigma^{}\right)^2 \left(m_N^{}+m_{\Sigma }\right)\right) q^2\right. \nonumber\\ &&+\, \left. \left(m_N^{}+m_\Sigma^{}\right)^2 \left(2 m_{\pi }^4-2 \left(2 m_N^2-m_\Sigma^{} m_N^{}+m_\Sigma^2\right) m_{\pi }^2 + m_N^{} \left(m_N^{}-m_\Sigma^{}\right)^2 \left(2 m_N^{}-m_{\Sigma }\right)\right)\right) \,\,, \nonumber \\\end{aligned}$$ $$\begin{aligned} f_{+}^{(d)} &=& \left(m_N^{}+m_\Sigma^{}\right) \left(-m_N^{} m_\Sigma^4 -\left(q^2-2 m_N^2\right) m_\Sigma^3-4 q^2 m_N^{} m_\Sigma^2 \vphantom{|_|^|} \right. \nonumber \\ &&+\, \left. \left(q^4+m_N^2 q^2-2 m_N^4\right) m_\Sigma^{} \right. + \left. m_N^{} \left(q^2-m_N^2\right)^2\right) \nonumber \\ &&-\,\, m_{\pi }^2 \left(8 m_\Sigma^4-5 m_N^{} m_\Sigma^3-\left(3 q^2+m_N^2\right) m_\Sigma^2 + 3 m_N^{} \left(q^2-m_N^2\right) m_\Sigma^{} + \left(q^2-m_N^2\right)^2\right) \,\,, \nonumber \\ g_{+}^{(d)} &=& m_\Sigma^{} \left(m_N^{}+m_{\Sigma }\right) \left(-m_N^{} \left(2 m_N^{}-m_\Sigma^{}\right) q^4-\left(m_{\pi }^4-2 \left(3 m_N^2-2 m_\Sigma^{} m_N^{}+m_\Sigma^2\right) m_{\pi }^2\right.\right. \nonumber\\ &&+\, \left.\left. m_N^{} \left(m_N^{}-m_\Sigma^{}\right)^2 \left(m_N^{}+m_{\Sigma }\right)\right) q^2 - 2 m_{\pi }^2 \left(m_N^{}-m_{\Sigma }\right)^2 \left(m_{\pi }^2+m_\Sigma^{} \left(m_N^{}+m_\Sigma^{}\right)\right)\right) \,\,, \nonumber \\ f_{0}^{(d)} &=& \left(m_N^{}+m_\Sigma^{}\right)^2 \left(2 m_\Sigma^4 +m_N^{} m_\Sigma^3-\left(3 q^2+m_N^2\right) m_\Sigma^2 \vphantom{|_|^|} + 3 m_N^{} \left(q^2-m_N^2\right) m_\Sigma^{} +\left(q^2-m_N^2\right)^2\right) \nonumber \\ &&-\,\, m_{\pi }^2 \left(8 m_\Sigma^4-5 m_N^{} m_\Sigma^3-\left(3 q^2+m_N^2\right) m_\Sigma^2 +3 m_N^{} \left(q^2-m_N^2\right) m_\Sigma^{}+\left(q^2-m_N^2\right)^2\right) \,\,, \nonumber \\ g_{0}^{(d)} &=& m_\Sigma^{} \left(m_N^{}+m_{\Sigma }\right) \left(\left(m_{\pi }^4-2 \left(m_N^2+2 m_\Sigma^{} m_N^{}-m_\Sigma^2\right) m_{\pi }^2 + m_N^{} \left(m_N^{}-m_\Sigma^{}\right) \left(m_N^{}+m_{\Sigma }\right)^2\right) q^2\right.\nonumber \\ &&+\, \left. \left(m_N^{}-m_\Sigma^{}\right)^2 \left(2 m_{\pi }^4-2 \left(2 m_N^2+m_\Sigma^{} m_N^{}+m_\Sigma^2\right) m_{\pi }^2 \right.\right. \nonumber \\ &&+\, \left.\left. 
m_N^{} \left(m_N^{}+m_\Sigma^{}\right)^2 \left(2 m_N^{}+m_{\Sigma }\right)\right)\right) \,\,.\end{aligned}$$ In our numerical computations, $\,m_\Sigma^{}=m_{\Sigma^+}^{},\,$ $\,m_N^{}=\frac{1}{2}\bigl(m_p^{}+m_n^{}\bigr),\,$ $\,m_\pi^{}=\frac{1}{3}\bigl(2m_{\pi^+}^{}+m_{\pi^0}^{}\bigr),\,$ the numbers being from Ref. [@pdg]. In heavy baryon $\chi$PT [@Jenkins:1991ne], the relevant Lagrangian can be found in Ref. [@Jenkins:1992ab], and the weak radiative and nonleptonic amplitudes in Eqs. (\[M\_BBg\]) and (\[M\_SNpi\]) become, respectively, $$\begin{aligned} {\cal M}(B_i\to B_f\gamma^*) &=& -e G_{\rm F}^{}\, \bar{B}_f\, \Bigl[ 2 \bigl(S\cdot q\,S^\mu-S^\mu\,S\cdot q\bigr)a+2\bigl(S\cdot q\,v^\mu-S^\mu\,v\cdot q\bigr) b \Bigr] \, B_i\, \varepsilon_\mu^* \nonumber \\ && -\,\, e G_{\rm F}^{}\, \bar{B}_f\, \Bigl[ \bigl(q^2\,v^\mu-q^\mu\, v\cdot q\bigr) c+2 \bigl(q^2\,S^\mu-q^\mu\,S\cdot q\bigr) d \Bigr] \, B_i\, \varepsilon_\mu^* \,\,,\end{aligned}$$ $$\begin{aligned} {\cal M}(\Sigma^+\to N\pi) &=& i G_{F}^{} m_{\pi^+}^2\, \bar{N} \left( A_{N\pi}^{} + 2S\cdot p_\pi^{}\, \frac{B_{N\pi}^{}}{2m_\Sigma^{}} \right) \Sigma \,\,,\end{aligned}$$ where $v$ is the baryon four-velocity and $S$ is the baryon spin operator. Following Ref. [@Jenkins:1992ab], to obtain the imaginary parts of the form factors we evaluate the loop diagrams displayed in Fig. \[loops\]. In the heavy-baryon approach, only the diagrams with the $\,\Sigma^+\to n\pi^+\,$ transition yield nonzero contributions to the leading-order imaginary parts. ![Diagrams for imaginary part of $\Sigma^+\to p\gamma^*$ amplitude.\[loops\]](fig_loops.eps) The results are \[imFF\_hb\] $$\begin{aligned} {\rm Im}\,a &=& \frac{(D+F)\,m_{\pi^+}^2}{8\sqrt2\,\pi f_\pi^{}}\, \frac{B_{n\pi^+}^{}}{2m_\Sigma^{}} \left\{ \sqrt{\Delta^2-m_\pi^2}\Biggl( 1+\frac{\frac{1}{2}\, q^2}{\Delta^2-q^2}\Biggr) \right. \nonumber \\ && \left. +\,\, \frac{q^4+4 m_\pi^2\bigl(\Delta^2-q^2\bigr)}{4\bigl(\Delta^2-q^2\bigr)^{3/2}} \ln \left[ \frac{2\Delta^2-q^2-2\sqrt{\Delta^2-m_\pi^2}\sqrt{\Delta^2-q^2}} {\sqrt{q^4+4m_\pi^2\bigl(\Delta^2-q^2\bigr)}} \right] \right\} \,\,,\end{aligned}$$ $$\begin{aligned} {\rm Im}\,b &=& \frac{(D+F)\,m_{\pi^+}^2}{8\sqrt2\,\pi f_\pi^{}}\, A_{n\pi^+}^{} \left\{ \frac{-\Delta\, \sqrt{\Delta^2-m_\pi^2}}{\Delta^2-q^2} \Biggl( 1-\frac{\frac{3}{2}\, q^2}{\Delta^2-q^2}\Biggr) \right. \nonumber \\ && \left. +\,\, \Delta\, \frac{3q^4+4 m_\pi^2\bigl(\Delta^2-q^2\bigr)} {4\bigl(\Delta^2-q^2\bigr)^{5/2}} \ln \left[ \frac{2\Delta^2-q^2-2\sqrt{\Delta^2-m_\pi^2}\sqrt{\Delta^2-q^2}} {\sqrt{q^4+4m_\pi^2\bigl(\Delta^2-q^2\bigr)}} \right] \right\} \,\,, \hspace*{2em}\end{aligned}$$ $$\begin{aligned} {\rm Im}\,c &=& \frac{(D+F)\,m_{\pi^+}^2}{8\sqrt2\,\pi f_\pi^{}}\, \frac{B_{n\pi^+}^{}}{2m_\Sigma^{}} \left\{ \sqrt{\Delta^2-m_\pi^2}\, \frac{\Delta^2-2 m_\pi^2}{\Delta\,\bigl(\Delta^2-q^2\bigr)} \right. \nonumber \\ && \left. +\,\, \frac{\Delta\, \bigl(q^2-2 m_\pi^2\bigr)}{2\bigl(\Delta^2-q^2\bigr)^{3/2}}\, \ln \left[ \frac{2\Delta^2-q^2-2\sqrt{\Delta^2-m_\pi^2}\sqrt{\Delta^2-q^2}} {\sqrt{q^4+4m_\pi^2\bigl(\Delta^2-q^2\bigr)}} \right] \right\} \,\,,\end{aligned}$$ $$\begin{aligned} {\rm Im}\,d &=& \frac{(D+F)\,m_{\pi^+}^2}{8\sqrt2\,\pi f_\pi^{}}\, A_{n\pi^+}^{} \left\{ \sqrt{\Delta^2-m_\pi^2}\, \frac{\frac{3}{2}\, q^2}{\bigl(\Delta^2-q^2\bigr)^2} \right. \nonumber \\ && \left. 
+\,\, \frac{q^4+2q^2\Delta^2+4 m_\pi^2\bigl(\Delta^2-q^2\bigr)} {4\bigl(\Delta^2-q^2\bigr)^{5/2}}\, \ln \left[ \frac{2\Delta^2-q^2-2\sqrt{\Delta^2-m_\pi^2}\sqrt{\Delta^2-q^2}} {\sqrt{q^4+4m_\pi^2\bigl(\Delta^2-q^2\bigr)}} \right] \right\} \,\,, \hspace*{3em}\end{aligned}$$ where $\,\Delta=m_\Sigma^{}-m_N^{}.\,$ We have checked that these formulas can be reproduced from the relativistic results in Eq. (\[imFF\_r\]) by expanding the latter in terms of $\Delta/m_\Sigma^{}$, $\sqrt{q^2}/m_\Sigma^{}$, and $m_\pi^{}/m_\Sigma^{}$ and keeping the leading nonzero terms. Real parts of $\bm{c(q^2)}$ and $\bm{d(q^2)}$\[recd\] ===================================================== Vector mesons can contribute to $c$ via the pole diagrams shown in Fig. \[cd-poles\](a). The strong vertices in the diagrams come from the Lagrangian [@Ecker:1989yg; @Kubis:2000zd] $$\begin{aligned} \label{Ls'} {\cal L}_{\rm s}' &=& {\cal G}_D^{}\,\bigl\langle\bar{B}\,\gamma^\mu\,\bigl\{{\sf V}_\mu^{},B\bigr\}\bigr\rangle + {\cal G}_F^{}\,\bigl\langle\bar{B}\,\gamma^\mu\,\bigl[{\sf V}_\mu^{},B\bigr]\bigr\rangle + {\cal G}_0^{}\,\bigl\langle\bar{B}\gamma^\mu B\bigr\rangle\, \bigl\langle{\sf V}_\mu^{}\bigr\rangle \nonumber \\ && -\,\, \mbox{$\frac{1}{2}$}\, e f_{\sf V}^{}\, \bigl\langle \bigl(D^\mu{\sf V}^\nu-D^\nu{\sf V}^\mu\bigr) \bigl( \xi^\dagger Q\xi+\xi Q\xi^\dagger\bigr) \bigr\rangle\, \bigl( \partial_\mu^{}A_\nu^{}-\partial_\nu^{}A_\mu^{}\bigr) \,\,,\end{aligned}$$ with $\,{\sf V}=\frac{1}{2}\lambda_3^{}\rho^0+\cdots$ containing the nonet of vector-meson fields and $\,D^\mu{\sf V}^\nu=\partial^\mu{\sf V}^\nu+\bigl[{\cal V}^\mu,{\sf V}^\nu\bigr]$,[^5] whereas the weak vertices arise from $$\begin{aligned} \label{Lw} {\cal L}_{\rm w}^{} &=& G_{\rm F}^{}m_{\pi^+}^2 \left( h_D^{}\, \bigl\langle \bar{B}\, \bigl\{ \xi^\dagger h \xi, B \bigr\} \bigr\rangle + h_F^{}\, \bigl\langle \bar{B}\, \bigl[ \xi^\dagger h \xi, B \bigr] \bigr\rangle \,+\, h_{\sf V}^{}\, \bigl\langle h\, \xi{\sf V}^\mu{\sf V}_\mu^{}\xi^\dagger\bigr\rangle \right) \,\,+\,\, {\rm H.c.} \,\,,\end{aligned}$$ with $h$ being a 3$\times$3-matrix having elements $\,h_{kl}^{}=\delta_{k2}^{}\delta_{3l}^{}\,$ which selects out $\,s\to d\,$ transitions. ![\[cd-poles\]Pole diagrams contributing to the $c$ and $d$ amplitudes. A single line (double line) denotes a baryon (vector meson) field, and a solid dot (hollow square) represents a strong (weak) vertex. ](fig_cdpoles.eps) The relevant parameters in ${\cal L}_{\rm s}'$ are ${\cal G}_D^{}=-13.9$ and ${\cal G}_F^{}=17.9$ from a recent dispersive analysis [@Kubis:2000zd; @Mergell:1995bf],[^6] and $f_{\sf V}^{}=0.201$ from $\,\rho^0\to e^+ e^-\,$ rate [@pdg], while those in ${\cal L}_{\rm w}^{}$ are $\,h_D^{}=-72\,{\rm MeV}\,$ and $\,h_F^{}=179\,{\rm MeV}$ extracted at tree level from S-wave hyperon nonleptonic decays [@AbdEl-Hady:1999mj], but $h_{\sf V}^{}$ cannot be determined directly from data. To estimate $h_{\sf V}^{}$, we use the SU(6$)_w^{}$ relation $\,\bigl\langle\pi^0\bigr|{\cal H}_{\rm w}^{}\bigl|\bar{K}^0\bigr\rangle = \bigl\langle\rho^0\bigr|{\cal H}_{\rm w}^{}\bigl|\bar{K}^{*0}\bigr\rangle\,$ derived in Ref. [@Dubach:1996dg]. Thus, employing the weak chiral Lagrangian $\,{\cal L}_{\rm w}^\varphi=\gamma_8^{}\,f^2\, \bigl\langle h\, \partial^\mu\Sigma\,\partial_\mu^{}\Sigma^\dagger \bigr\rangle +{\rm H.c.},\,$ with $\,\gamma_8^{}=7.8\times10^{-8}\,$ from $\,K\to\pi\pi\,$ data, we find $\,h_{\rm V}^{}=-4\gamma_8^{}\, m_K^2/\bigl(G_{\rm F}^{}m_{\pi^+}^2\bigr)=-0.34{\rm\,GeV}^2$. 
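Two of the numerical statements above lend themselves to a quick cross-check: the $q^2=0$ limit of the heavy-baryon expressions in Eq. (\[imFF\_hb\]) against Eq. (\[imab\_hb\]), and the SU(6$)_w^{}$ estimate of $h_{\sf V}^{}$. The sketch below (a rough check, not the calculation itself; PDG-like masses are assumed) reproduces ${\rm Im}\,a(0)\simeq6.84\rm\,MeV$, ${\rm Im}\,b(0)\simeq-0.54\rm\,MeV$, and $h_{\sf V}^{}\simeq-0.34\rm\,GeV^2$.

```python
import math

# (i) q^2 = 0 limit of the heavy-baryon formulas, Eq. (imFF_hb).
D_plus_F = 1.26
f_pi     = 92.4                          # MeV
m_pip    = 139.57                        # MeV
m_pi     = (2 * 139.57 + 134.98) / 3.0   # average pion mass used in the loops
m_Sig    = 1189.37
m_N      = (938.27 + 939.57) / 2.0
Delta    = m_Sig - m_N
A_npip, B_npip = 0.06, 18.53

pref = D_plus_F * m_pip**2 / (8.0 * math.sqrt(2.0) * math.pi * f_pi)

def log_term(q2):
    num = 2 * Delta**2 - q2 - 2 * math.sqrt(Delta**2 - m_pi**2) * math.sqrt(Delta**2 - q2)
    return math.log(num / math.sqrt(q2**2 + 4 * m_pi**2 * (Delta**2 - q2)))

def im_a(q2):
    L = log_term(q2)
    return pref * B_npip / (2 * m_Sig) * (
        math.sqrt(Delta**2 - m_pi**2) * (1 + 0.5 * q2 / (Delta**2 - q2))
        + (q2**2 + 4 * m_pi**2 * (Delta**2 - q2)) / (4 * (Delta**2 - q2)**1.5) * L)

def im_b(q2):
    L = log_term(q2)
    return pref * A_npip * (
        -Delta * math.sqrt(Delta**2 - m_pi**2) / (Delta**2 - q2)
          * (1 - 1.5 * q2 / (Delta**2 - q2))
        + Delta * (3 * q2**2 + 4 * m_pi**2 * (Delta**2 - q2))
          / (4 * (Delta**2 - q2)**2.5) * L)

print(f"Im a(0) = {im_a(0.0):.2f} MeV, Im b(0) = {im_b(0.0):.2f} MeV")  # 6.84, -0.54

# (ii) SU(6)_w estimate h_V = -4 gamma_8 m_K^2 / (G_F m_pi+^2).
GF, gamma8 = 1.166e-11, 7.8e-8           # MeV^-2, dimensionless
m_K = 497.6                              # MeV, neutral kaon
h_V = -4 * gamma8 * m_K**2 / (GF * m_pip**2)
print(f"h_V = {h_V * 1e-6:.2f} GeV^2")                                  # -0.34
```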
Putting things together and adopting ideal $\omega$-$\phi$ mixing, we then obtain $$\begin{aligned} \label{rec} {\rm Re}\,c &=& \frac{f_{\sf V}^{}\, \bigl({\cal G}_D^{}-{\cal G}_F^{}\bigr)\, m_{\pi^+}^2\, \bigl(h_D^{}-h_F^{}\bigr)}{6 \bigl(m_\Sigma^{}-m_N^{}\bigr)} \left(\frac{3}{q^2-m_\rho^2}-\frac{1}{q^2-m_\omega^2} - \frac{2}{q^2-m_\phi^2} \right) \nonumber \\ && +\,\, \frac{f_{\sf V}^{}\,\bigl({\cal G}_D^{}-{\cal G}_F^{}\bigr)\,m_{\pi^+}^2\,h_{\sf V}^{}} {12 \bigl(q^2-m_{K^*}^2\bigr)} \left(\frac{3}{q^2-m_\rho^2}-\frac{1}{q^2-m_\omega^2} + \frac{2}{q^2-m_\phi^2} \right) \,\,.\end{aligned}$$ The form factor $d$ can receive vector-meson contributions from the parity-violating Lagrangian $$\begin{aligned} \label{Lw^PV} {\cal L}_{\rm w}' \,\,=\,\, G_{\rm F}^{}m_{\pi^+}^2\, h_{\rm PV}^{}\, \bigl\langle h\, \xi\, \bigl\{ \bigl[ \bar{B},\gamma^\mu\gamma_5^{}{ B}\bigr],{\sf V}_\mu^{} \bigr\}\, \xi^\dagger \bigr\rangle \,\,+\,\, {\rm H.c.} \,\,,\end{aligned}$$ which are represented by the diagram in Fig. \[cd-poles\](b). The parameter $h_{\rm PV}^{}$ also cannot be fixed directly from data, and so we estimate it by adopting again the SU(6$)_w^{}$ results of Ref. [@Dubach:1996dg] to be $h_{\rm PV}^{}=2.41.$ It follows that $$\begin{aligned} \label{red} {\rm Re}\,d &=& \frac{f_{\sf V}^{}\, m_{\pi^+}^2\, h_{\rm PV}^{}}{6} \left(\frac{3}{q^2-m_\rho^2}-\frac{1}{q^2-m_\omega^2} + \frac{2}{q^2-m_\phi^2} \right) \,\,.\end{aligned}$$ [99]{} H. Park [*et al.*]{} \[HyperCP Collaboration\], Phys. Rev. Lett.  [**94**]{}, 021801 (2005) \[arXiv:hep-ex/0501014\]. L. Bergstrom, R. Safadi, and P. Singer, Z. Phys. C [**37**]{}, 281 (1988). G. Buchalla, A.J. Buras, and M.E. Lautenbacher, Rev. Mod. Phys.  [**68**]{}, 1125 (1996) \[arXiv:hep-ph/9512380\]. X.G. He and G. Valencia, Phys. Rev. D [**61**]{}, 075003 (2000) \[arXiv:hep-ph/9908298\]. M.A. Shifman, A.I. Vainshtein, and V.I. Zakharov, Phys. Rev. D [**18**]{}, 2583 (1978) \[Erratum-ibid. D [**19**]{}, 2815 (1979)\]. N. Cabibbo, Phys. Rev. Lett.  [**10**]{}, 531 (1963); M. Kobayashi and T. Maskawa, Prog. Theor. Phys.  [**49**]{}, 652 (1973). J.F. Donoghue, E. Golowich, and B.R. Holstein, [*Dynamics of the Standard Model*]{} (Cambridge University Press, Cambridge, 1992). S. Eidelman [*et al.*]{} \[Particle Data Group\], Phys. Lett. B [**592**]{}, 1 (2004). H. Neufeld, Nucl. Phys. B [**402**]{}, 166 (1993). E. Jenkins, M.E. Luke, A.V. Manohar, and M.J. Savage, Nucl. Phys. B [**397**]{}, 84 (1993) \[arXiv:hep-ph/9210265\]. J.W. Bos [*et al.*]{}, Phys. Rev. D [**54**]{}, 3321 (1996) \[arXiv:hep-ph/9601299\]; [*ibid.*]{} [**57**]{}, 4101 (1998) \[arXiv:hep-ph/9611260\]. G. Ang [*et al.*]{}, Z. Phys. [**228**]{}, 151 (1969). J. Bijnens, H. Sonoda, and M.B. Wise, Nucl. Phys. B [**261**]{}, 185 (1985). E. Jenkins and A.V. Manohar, Phys. Lett. B [**255**]{}, 558 (1991); in [*Effective Field Theories of the Standard Model*]{}, edited by U.-G. Meissner (World Scientific, Singapore, 1992). G. Ecker [*et al.*]{}, Phys. Lett. B [**223**]{}, 425 (1989); B. Borasoy and U.G. Meissner, Int. J. Mod. Phys. A [**11**]{}, 5183 (1996) \[arXiv:hep-ph/9511320\]. B. Kubis and U.G. Meissner, Nucl. Phys. A [**679**]{}, 698 (2001) \[arXiv:hep-ph/0007056\]. P. Mergell, U.G. Meissner, and D. Drechsel, Nucl. Phys. A [**596**]{}, 367 (1996) \[arXiv:hep-ph/9506375\]; A. Abd El-Hady and J. Tandean, Phys. Rev. D [**61**]{}, 114014 (2000) \[arXiv:hep-ph/9908498\]. J.F. Dubach, G.B. Feldman, and B.R. Holstein, Annals Phys.  [**249**]{}, 146 (1996) \[arXiv:nucl-th/9606003\]. 
[^1]: We have taken the nonzero elements of $\gamma_5^{}$ to be positive. [^2]: Our heavy-baryon expressions for ${\rm Im}\, a(0)$ and ${\rm Im}\, b(0)$ are identical to those in Ref. [@Jenkins:1992ab], except that their ${\rm Im}\, a(0)$ formula has one of the overall factors of $1/(m_\Sigma^{}-m_N^{})$ apparently coming from their approximating $\bigl[(m_\Sigma^{}-m_N^{})^2-m_\pi^2\bigr]{}^{1/2}$ as $m_\Sigma^{}-m_N^{}$. This is the main reason for the value of ${\rm Im}\, a(0)$ in Eq. (\[jenkins\]) being smaller than that in Eq. (\[imab\_hb\]). [^3]: We note that the upper limit of $7\times10^{-6}$ quoted in Ref. [@pdg] and obtained in Ref. [@ang] is for the presence of weak neutral currents in $\Sigma^+\to p e^+e^-$ and not for the branching ratio of this mode. [^4]: Under a chiral transformation, $\bar B\to U\bar B U^\dagger$, $B\to U B U^\dagger$, ${\cal V}^\mu\to U{\cal V}^\mu U^\dagger+i\partial^\mu U\, U^\dagger$, and ${\cal A}^\mu\to U{\cal A}^\mu U^\dagger$, where $U$ is defined by $\xi\to L\xi U^\dagger=U\xi R^\dagger$. [^5]: Under a chiral transformation, ${\sf V}\to U{\sf V}U^\dagger$ and $D^\mu{\sf V}^\nu\to U D^\mu{\sf V}^\nu U^\dagger$. [^6]: Although ${\cal G}_0^{}$ does not appear in our results, it enters the extraction of ${\cal G}_{D,F}^{}$. Writing the $pp\sf V$ part of ${\cal L}_{\rm s}'$ as $\frac{1}{2}\bar{p}\gamma^\mu p\, \bigl(g_{\rho NN}^{}\rho_\mu^0 + g_{\omega NN}^{}\omega_\mu^{} + g_{\phi NN}^{}\phi_\mu^{}\bigr)$, we have $g_{\rho NN}^{}={\cal G}_D^{}+{\cal G}_F^{}=4.0$, $g_{\omega NN}^{}={\cal G}_D^{}+{\cal G}_F^{}+2{\cal G}_0^{}=41.8$, and $g_{\phi NN}^{}=\sqrt2\, \bigl({\cal G}_D^{}-{\cal G}_F^{}+{\cal G}_0^{}\bigr)=-18.3$, where the numbers are from Ref. [@Kubis:2000zd; @Mergell:1995bf].
--- author: - bibliography: - 'IEEEfull.bib' - 'sigproc.bib' title: Contaminant Removal for Android Malware Detection Systems --- Mobile Security; Malware Detection; Noise Detection; Android Malware; PU Learning;
--- abstract: 'The aim of this paper is to describe how to use regularization and renormalization to construct a perturbative quantum field theory from a Lagrangian. We first define renormalizations and Feynman measures, and show that although there need not exist a canonical Feynman measure, there is a canonical orbit of Feynman measures under renormalization. We then construct a perturbative quantum field theory from a Lagrangian and a Feynman measure, and show that it satisfies perturbative analogues of the Wightman axioms, extended to allow time-ordered composite operators over curved spacetimes.' author: - | R. E. Borcherds[^1]\ Department of Mathematics\ University of California at Berkeley\ CA 94720-3840 USA\ [[*Email:* `[email protected]`]{}]{} title: Renormalization and quantum field theory --- Introduction ============ We give an overview of the construction of a perturbative quantum field theory from a Lagrangian. We start by translating some terms in physics into mathematical terminology. Spacetime is a smooth finite-dimensional metrizable manifold $M$, together with a “causality” relation $\leqslant$ that is closed, reflexive, and transitive. We say that two points are [[**spacelike separated**]{}]{} if they are not comparable, in other words neither $x \leqslant y$ nor $y \leqslant x$. The causality relation $a \leqslant b$ means informally that $a$ occurs before $b$. The causality relation will often be constructed in the usual way from a Lorentz metric with a time orientation, but since we do not use the Lorentz metric for anything else we do not bother to give $M$ one. The sheaf of classical fields $\Phi$ is the sheaf of smooth sections of some finite dimensional super vector bundle over spacetime. When the sheaf of classical fields is a “super-sheaf”, one uses the usual conventions of superalgebra: in particular the symmetric algebras used later are understood to be symmetric algebras in the superalgebra sense, and the usual superalgebra minus signs should be inserted into formulas whenever the order of two terms is exchanged. As usual, a global section of a sheaf of things is called a thing, so a classical field $\varphi$ is a global section of the sheaf $\Phi$ of classical fields, and so on. (A subtle point is that sometimes things called classical fields in the physics literature are better thought of as sections of the [[**dual**]{}]{} of the sheaf of classical fields; in practice this distinction does not matter because the sheaf of classical fields usually comes with a bilinear form giving a canonical isomorphism with its dual.) The sheaf of derivatives of classical fields or simple fields is the sheaf $J \Phi = {\ensuremath{\operatorname{Hom}}} (J, \Phi)$, where $J$ is the sheaf of jets of $M$ and the ${\ensuremath{\operatorname{Hom}}}$ is taken over the smooth functions on $M$, equal to the inverse limit of the sheaves of jets of finite order of $M$, as in [[@Grothendieck 16.3]]{}. The sheaf of (polynomial) Lagrangians or composite fields $SJ \Phi$ is the symmetric algebra of the sheaf $J \Phi$ of derivatives of classical fields. Its sections are (polynomial) Lagrangians, in other words polynomial in fields and their derivatives, so for example $\lambda \varphi^4 + m^2 \varphi^2 + \varphi \partial_i^2 \varphi$ is a Lagrangian, but $\sin (\varphi)$ is not. 
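Before proceeding, a minimal concrete model of the causality relation and of spacelike separation defined above may be useful; the light-cone order and metric signature below are illustrative assumptions, since the text only requires a closed, reflexive, transitive relation.

```python
# Toy causal order on Minkowski space: x <= y iff y - x lies in the closed
# future light cone; spacelike separated = incomparable.  Illustrative only.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # mostly-minus metric, one possible choice

def leq(x, y):
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    return float(d @ eta @ d) >= 0.0 and d[0] >= 0.0

def spacelike_separated(x, y):
    return not leq(x, y) and not leq(y, x)

x = (0.0, 0.0, 0.0, 0.0)
y = (1.0, 2.0, 0.0, 0.0)                    # lies outside the light cone of x
print(leq(x, y), spacelike_separated(x, y)) # False True
```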
Perturbative quantum field theories depend on the choice of a Lagrangian $L$, which is the sum of a free Lagrangian $L_F$ that is quadratic in the fields, and an interaction Lagrangian $L_I \in SJ \Phi \otimes {\ensuremath{\boldsymbol{C}}}[[\lambda_1, \ldots, \lambda_n]]$ whose coefficients are infinitesimal, in other words elements of a formal power series ring ${\ensuremath{\boldsymbol{C}}}[[\lambda_1, \ldots, \lambda_n]]$ over the reals with constant terms 0. The sheaf of Lagrangian densities or local actions  $\omega SJ \Phi = \omega \otimes SJ \Phi$ is the tensor product of the sheaf $SJ \Phi$ of Lagrangians and the sheaf $\omega$ of smooth densities (taken over smooth functions on $M$). For a smooth manifold, the (dualizing) sheaf $\omega$ of smooth densities (or smooth measures) is the tensor product of the orientation sheaf with the sheaf of differential forms of highest degree, and is non-canonically isomorphic to the sheaf of smooth functions. Densities are roughly “things that can be locally integrated”. For example, if $M$ is oriented, then  $(\lambda \varphi^4 + m^2 \varphi^2 + \varphi \partial_i^2 \varphi) d^n x$ is a Lagrangian density. We use $\Gamma$ and $\Gamma_c$ to stand for spaces of global and compactly supported sections of a sheaf. These will  usually be spaces of smooth functions (or compactly supported smooth functions) in which case they are topologized in the usual way so that their duals are compactly supported distributions (or distributions) taking values in some sheaf. A (non-local) action is a polynomial in local actions, in other words an element of the symmetric algebra $S \Gamma \omega SJ \Phi$ of the real vector space $\Gamma \omega SJ \Phi$ of  local actions. We do not complete the symmetric algebra, so expressions such as $e^{i \lambda L}$ are not in general non-local actions, unless we work over some base ring in which $\lambda$ is nilpotent. We will use $\ast$ for complex conjugation and for the antipode of a Hopf algebra and for the adjoint of an operator and for the anti-involution of a $*$-algebra. The use of the same symbol for all of these is deliberate and indicates that they are all really special cases of a universal “adjoint” or “antipode” operation that acts on everything: whenever two of these operations are defined on something they are equal, so can all be denoted by the same symbol. The quantum field theories we construct depend on the choice of a  cut propagator $\Delta$ that is essentially the same as the 2-point Wightman distribution $$\Delta (\varphi_1, \varphi_2) = \int_{x, y} \langle 0| \varphi_1 (x) \varphi_2 (y) |0 \rangle dx dy$$ A propagator $\Delta$ is a continuous bilinear map $\Gamma_c \omega \Phi \times \Gamma_c \omega \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$. $\Delta$ is called [[**local**]{}]{} if $\Delta (f, g) = \Delta (g, f)$ whenever the supports of $f$ and $g$ are spacelike separated. $\Delta$ is called [[**Feynman**]{}]{} if it is symmetric: $\Delta (f, g) = \Delta (g, f)$. $\Delta$ is called [[**Hermitian**]{}]{} if $\Delta^{\ast} = \Delta$, where $\Delta^{\ast}$ is defined by $\Delta^{\ast} (f^{\ast}, g^{\ast}) = \Delta (g, f)^{\ast}$ (with a change in order of $f$ and $g$). $\Delta$ is called [[**positive**]{}]{} if $\Delta (f^{\ast}, f) \geqslant 0$ for all $f$. 
$\Delta$ is called  [[**cut**]{}]{} if it satisfies the following “positive energy” condition: at each point $x$ of $M$ there is a partial order on the cotangent space defined by a proper closed convex cone $C_x$,  such that if $(p, q)$ is in the wave front set of $\Delta$ at some point $(x, y) \in M^2$  then $p \leqslant 0$ and $q \geqslant 0$. Also, as a distribution, $\Delta$ can be written in local coordinates as a boundary value of something in the algebra generated by smooth functions and powers and logarithms of polynomials (the boundary values taken so that the wave front sets lie in the regions specified above). Moreover if $x = y$ then $p + q = 0$. A propagator can also be thought of as a complex distribution on $M \times M$ taking values in the dual of the external tensor product $J \Phi \boxtimes J \Phi$. In particular it has a wave front set (see Hörmander [[@Hormander]]{}) at each point of $M^2$, which is a cone in the imaginary cotangent space of that point. If $A$ and $B$ are in $\Gamma_c \Phi$, then $\Delta (A, B)$ is defined to be a compactly supported distribution on $M \times M$, defined by $\Delta (A, B) (f, g) = \Delta (Af, Bg)$ for $f$ and $g$ in $\Gamma \omega$. The key point in the definition of a cut propagator is the condition on the wave front sets, which distinguishes the cut propagators from other propagators such as Feynman propagators or advanced and retarded propagators that can have more complicated wave front sets. For most common cut propagators in Minkowski space, this follows from the fact that their Fourier transforms have support in the positive cone. The condition about being expressible in terms of smooth functions and powers and logs of polynomials is a minor technical condition that is in practice satisfied by almost any reasonable example, and is used in the proof that Feynman measures exist. If $(p_1, \ldots p_n)$ is in the imaginary cotangent space of a point of $M^n$, then we write $(p_1, \ldots p_n) \geqslant 0$ if $p_j \geqslant 0$ for all $j$, and call it positive if it is not zero. Over Minkowski space, most of the usual cut propagators are positive (except for ghost fields), local, and Hermitian. Most of the ideas for the proof of this can be seen for the simplest case of the propagator for massive Hermitian scalar fields. Using translation invariance, we can write $\Delta (x, y) = \Delta (x - y)$ for some distribution $\Delta$ on Minkowski spacetime. Then the Fourier transform of this in momentum space is a rotationally invariant measure supported on one of the two components of vectors with $p^2 = m^2$. This propagator is positive because the measure in momentum space is positive. It satisfies the wave front set part of the cut condition because the Fourier transform has support in the positive cone, and explicit calculation shows that it can be written in terms of powers and logs of polynomials. It satisfies locality because it is invariant under rotations that preserve the direction of time, and under such rotations any space-like vector is conjugate to its negative, so $\Delta (x) = \Delta (- x)$ whenever $x$ is spacelike, in other words $\Delta (x, y) = \Delta (y, x)$ whenever $x$ and $y$ are spacelike separated. The corresponding Feynman propagator is given by $1 / (p^2 + m^2 + i \varepsilon)$ where the $i \varepsilon$ indicates in which direction one integrates around the poles, so the cut propagator is just the residue of the Feynman propagator along one of the 2 components of the 2-sheeted hyperboloid  $p^2 = m^2$. 
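For orientation, one standard explicit form of the scalar cut propagator just discussed is (with one common choice of normalization and Fourier conventions, not taken from the text)
$$\Delta (f, g) = \int \frac{d^3 \mathbf{p}}{(2 \pi)^3\, 2 E_{\mathbf{p}}}\, \tilde{f} (- p)\, \tilde{g} (p) \Big|_{p^0 = E_{\mathbf{p}}}, \qquad E_{\mathbf{p}} = \sqrt{\mathbf{p}^2 + m^2},$$
so its Fourier transform is a positive measure carried by one sheet of the mass shell; this is the source of both the positivity and the “positive energy” wave front condition described above.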
For other fields such as spinor fields in Minkowski space, the sheaf of classical fields will usually be some sort of spin bundle. The propagators can often be expressed in terms of the propagator for a scalar field by acting on it with polynomials in momentum multiplied by Dirac’s gamma matrices $\gamma^{\mu}$, for example $i (\gamma^{\mu} p_{\mu} + m) / (p^2 - m^2)$. Unfortunately there are a bewildering number of different notational and sign conventions for gamma matrices. Compactly supported actions give functions on the space $\Gamma \Phi$ of smooth fields, by integrating over spacetime $M$. A Feynman measure is a sort of analogue of Haar measure on a finite dimensional real vector space. We can think of a Haar measure as an element of the dual of the space of continuous compactly supported functions. For infinite dimensional vector spaces there are usually not enough continuous compactly supported functions, but instead we can define a measure to be an element of the dual of some other space of functions. We will think of Feynman measures as something like elements of the dual of all functions that are given by free field Gaussians times a compactly supported action. In other words a Feynman measure should assign a complex number to each compactly supported action, formally representing the integral over all fields of this action times a Gaussian $e^{iL_F}$, where we think of the action as a function of classical fields (or rather sections of the dual of the space of classical fields, which can usually be identified with classical fields). Moreover the Feynman measure should satisfy some sort of analogue of translation invariance. The space $e^{iL_F} S \Gamma_c \omega SJ \Phi$ is a free rank 1 module over $S \Gamma_c \omega SJ \Phi$ generated by the basis element $e^{iL_F}$, which can be thought of either as a formal symbol or a formal power series. Its elements can be thought of as representing functions of classical fields that are given by a polynomial times the Gaussian  $e^{iL_F}$, and will be the functions that the Feynman measure is defined on. The symmetric algebra $S \Gamma_c \omega SJ \Phi$ is topologized as the direct sum of the spaces $S^n \Gamma_c \omega SJ \Phi$, each of which is topologized by regarding it as a space of smooth test functions over $M^n$. For the definition of a Feynman measure we need to extend the propagator $\Delta$ to a larger space as follows. We think of the propagator $\Delta$ as a map taking $\Gamma_cJ\Phi\otimes \Gamma_cJ\Phi$ to distributions on $M\times M$. We then extend it to a map from $\Gamma_c S J\Phi \times \Gamma_c S J \Phi$ to distributions on $M\times M$ by putting $\Delta(a_1\cdots a_n,b_1\cdots b_n) =\sum_{\sigma\in S_n} \Delta(a_1,b_{\sigma(1)})\times\cdots\times\Delta(a_n,b_{\sigma(n)})$ where the sum is over all elements of the symmetric group $S_n$ (and defining it to be 0 for arguments of different degrees). Finally we extend it to a map from $S^m\Gamma_c S J\Phi \times S^n\Gamma_c S J \Phi$ to distributions on $M^m\times M^n$ using the “bicharacter” property: in other words $\Delta(AB,C) = \sum\Delta(A,C')\Delta(B,C'')$ where the coproduct of $C$ is $\sum C'\otimes C''$, and similarly for $\Delta(A,BC)$. \[feynman measure\]A Feynman measure is a continuous linear map $\omega : e^{iL_F} S \Gamma_c \omega SJ \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$. 
The Feynman measure is said to be associated with the propagator $\Delta$ if it satisfies the following conditions: - Smoothness on the diagonal: Whenever $(p_1, \ldots, p_n)$ is in the wave front set of $\omega$ at the point $(x, \ldots, x)$ on the diagonal, then $p_1 + \ldots + p_n = 0$ - Non-degeneracy: there is a smooth nowhere-vanishing function $g$ so that $\omega (e^{iL_F} v)$ is  $\int_M gv$ for $v$ in $\Gamma_c \omega S^0 J \Phi = \Gamma_c \omega$. - Gaussian condition, or weak translation invariance: For  $A\in S^m\Gamma_c \omega SJ \Phi$, $B\in S^n\Gamma_c \omega SJ \Phi$, with both sides interpreted as distributions on $M^{m+n}$, $$\omega (AB) = \sum \omega (A') \Delta (A'', B'') \omega (B')$$ whenever there is no element in the support of $A$ that is $\leqslant$ some element of the support of  $B$. Here $\sum A' \otimes A'' \in S \Gamma_c \omega SJ \Phi \otimes S\Gamma_c SJ \Phi$ is the image of $A$ under the map $S^m \Gamma_c \omega SJ \Phi \rightarrow S^m \Gamma_c \omega SJ \Phi \otimes S^m\Gamma_c SJ \Phi$ induced by the coaction $\omega SJ \Phi \rightarrow \omega SJ \Phi \otimes SJ \Phi$ of $SJ \Phi$ on $\omega SJ \Phi$, and similarly for $B$. The product on the right is a product of distributions, using the extended version of $\Delta$ defined just before this definition. We explain what is going on in this definition. We would like to define the value of the Feynman measure to be a sum over Feynman diagrams, formed by joining up pairs of fields in all possible ways by lines, and then assigning a propagator to each line and taking the product of all propagators of a diagram. This does not work because of ultraviolet divergences: products of propagators need not be defined when points coincide. If these products were defined then they would satisfy the Gaussian condition, which then says roughly that if the vertices are divided into two disjoint subsets $a$ and $b$, then a Feynman diagram can be divided into a subdiagram with vertices $a$, a subdiagram with vertices $b$, and some lines between $a$ and $b$. The value $\omega(AB)$ of the Feynman diagram would then be the product of its value $\omega(A')$ on $a$, the product $\Delta(A'',B'')$ of all the propagators of lines joining $a$ and $b$, and its value $\omega(B')$ on $b$. The Gaussian condition need not make sense if some point of $a$ is equal to some point of $b$ because if these points are joined by a line then the corresponding propagator may have a bad singularity, but does make sense whenever all points of $a$ are not $\le$ all points of $b$. The definition above says that a Feynman measure should at least satisfy the Gaussian condition in this case, when the product is well defined. Unfortunately the standard notation $\omega$ for a dualizing sheaf, such as the sheaf of densities, is the same as the standard notation $\omega$ for a state in the theory of operator algebras, which the Feynman measure will be a special case of. It should be clear from the context which meaning of $\omega$ is intended. If $\omega$ is a Feynman measure and $A \in e^{iL_F} S^n \Gamma_c \omega SJ \Phi$ then $\omega (A)$ is a complex number, but can also be considered as the compactly supported density on $M^n$ taking a smooth $f$ to $\omega (A) (f) = \omega (Af)$. The integral of this density $\omega (A)$ over spacetime is just the complex number $\omega (A)$. 
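The “joining up pairs of fields in all possible ways by lines” described above can be made completely concrete in a finite-dimensional toy model, where the propagator is just a symmetric matrix of numbers rather than a distribution on $M \times M$ (an illustrative assumption, not the construction in the text):

```python
# Sum over pairings ("Wick's theorem" for a toy Gaussian): Delta is a symmetric
# matrix of propagator values between n abstract fields.  Illustrative only.
def wick(indices, Delta):
    """Sum over all pairings of `indices` of the product of propagator values."""
    if not indices:
        return 1.0
    if len(indices) % 2:
        return 0.0                      # odd Gaussian moments vanish
    first, rest = indices[0], indices[1:]
    total = 0.0
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        total += Delta[first][partner] * wick(remaining, Delta)
    return total

Delta = [[0.0, 1.0, 2.0, 3.0],
         [1.0, 0.0, 4.0, 5.0],
         [2.0, 4.0, 0.0, 6.0],
         [3.0, 5.0, 6.0, 0.0]]

# D01*D23 + D02*D13 + D03*D12 = 6 + 10 + 12 = 28
print(wick((0, 1, 2, 3), Delta))
```

For such a toy Gaussian the identity $\omega (AB) = \sum \omega (A') \Delta (A'', B'') \omega (B')$ is just the observation that every pairing of the combined set of fields splits into pairings internal to $a$, pairings internal to $b$, and a set of lines joining $a$ to $b$.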
Since $e^{iL_F} S \Gamma_c \omega SJ \Phi$ is a coalgebra (where elements of $\Gamma_c \omega SJ \Phi$ are primitive and $e^{iL_F}$ is group-like), the space of Feynman measures is an algebra, whose product is called convolution. The non-degeneracy condition just excludes some uninteresting degenerate cases, such as the measure that is identically zero, and the function $g$ appearing in it is usually normalized to be 1. The condition about smoothness on the diagonal implies that the product on the right in the Gaussian condition is defined. This is because $\omega$ has the property that if an element $(p_1, \ldots p_n)$ of the wave front set of some point is nonzero then its components cannot all be positive and cannot all be negative. This shows that the wave front sets are such that the product of distributions is defined. If $A$ is in $e^{iL_F} S \Gamma_c \omega SJ \Phi$, then $\omega (A)$ can be thought of as a Feynman integral $$\omega (A) = \int A (\varphi)\mathcal{D} \varphi$$ where $L_F$ is a quadratic action with cut propagator $\Delta$, and where $A$ is considered to be a function of fields $\varphi$. The integral is formally an integral over all classical fields. The Gaussian condition is a weak form of translation invariance of this measure under addition of classical fields. Formally, translation invariance is equivalent to the Gaussian condition with the condition about supports omitted and cut propagators replaced by Feynman propagators, but this is not well defined because the Feynman propagators can have such bad singularities that their products are sometimes not defined when two spacetime points coincide. The Feynman propagator $\Delta_F$ of a Feynman measure $\omega$ is defined to be the restriction of $\omega$ to $\Gamma_c \omega \Phi \times \Gamma_c \omega \Phi$. It is equal to the cut propagator at “time-ordered” points $(x, y) \in M^2$ where $x \nleqslant y$, but will usually differ if $x \leqslant y$. As it is symmetric, it is determined by the cut propagator except on the diagonal of $M \times M$. Unlike cut propagators, Feynman propagators may have singularities on the diagonal whose wave front sets are not contained in a proper cone, so that their products need not be defined. Any symmetric algebra ${\ensuremath{\operatorname{SX}}}$ over a module $X$ has a natural structure of a commutative and cocommutative Hopf algebra, with the coproduct defined by making all elements of $X$ primitive (in other words, $\Delta x = x \otimes 1 + 1 \otimes x$ for $x \in X$). In other words, ${\ensuremath{\operatorname{SX}}}$ is the coordinate ring of a commutative affine group scheme whose points form the dual of $X$ under addition.  For general results about Hopf algebra see Abe [[@Abe]]{}. Similarly $SJ \Phi$ is a sheaf of commutative cocommutative Hopf algebras, with a coaction on itself and the trivial coaction on $\omega$, and so has a coaction on $S \omega SJ \Phi$, preserving the coproduct of $S \omega SJ \Phi$. It corresponds to the sheaf of commutative affine algebraic groups whose points correspond to the sheaf $J \Phi$ under addition. A renormalization is an  automorphism of $S \omega SJ \Phi$ preserving its coproduct and the coaction of $SJ \Phi$. The group of renormalizations is called the ultraviolet group.  The justification for this rather mysterious definition is theorem \[transitive\], which shows that renormalizations act simply transitively on the Feynman measures associated to a given local cut propagator. 
In other words, although there is no canonical Feynman measure on the space of classical fields, there is a canonical orbit of such measures under renormalization. More generally, renormalizations are global sections of the sheaf of renormalizations (defined in the obvious way), but we will make no use of this point of view. The (infinite dimensional) ultraviolet group really ought to be called the “renormalization group”, but unfortunately this name is already used for a quite different 1-dimensional group. The “renormalization group” is the group of positive real numbers, together with an action on Lagrangians by “renormalization group flow”. The relation between the renormalization group and the ultraviolet group is that the renormalization group flow can be thought of as a non-abelian 1-cocycle of the renormalization group with values in the ultraviolet group, using the action of renormalizations on Lagrangians that will be constructed later. The ultraviolet group is indirectly related to the Hopf algebras of Feynman diagrams introduced by Kreimer [[@Kreimer]]{} and applied to renormalization by him and Connes [[@Connes]]{}, though this relation is not that easy to describe. First of all their Hopf algebras correspond to Lie algebras, and the ultraviolet group has a Lie algebra, and these two Lie algebras are related. There is no direct relation between Connes and Kreimer’s Lie algebras and the Lie algebra of the ultraviolet group, in the sense that there seems to be no natural homomorphism in either direction. However there seems to be a sort of intermediate Lie algebra that has homomorphisms to both. This intermediate Lie algebra (or group) can be defined using Feynman diagrams decorated with smooth test functions rather than the sheaf $S \omega SJ \Phi$ used here. Unfortunately all my attempts to explain the product of this Lie algebra explicitly have resulted in an almost incomprehensible combinatorial mess so complicated that it is unusable. Roughly speaking, the main difference between the ultraviolet group and the intermediate Lie algebra is that the Lie algebra of the ultraviolet group amalgamates all Feynman diagrams with the same vertices while the intermediate Lie algebra keeps track of individual Feynman diagrams, and the main difference between the intermediate Lie algebra and Kreimer’s algebra is that the intermediate Lie algebra is much fatter than Kreimer’s algebra because it has infinite dimensional spaces of smooth functions in it. In some sense Kreimer’s algebra could be thought of as a sort of skeleton of the intermediate Lie algebra. All reasonable Feynman measures for a given free field theory are equivalent up to renormalization, but it is not easy to show that at least one exists. We do this by following the usual method of constructing a perturbative quantum field theory in physics. We first regularize the cut local propagator, which produces a meromorphic family of Feynman measures, following Etingof [[@Etingof]]{} in using Bernstein’s theorem [[@Bernstein]]{} on the analytic continuation of powers of a polynomial to construct the regularization. We then use an infinite renormalization to eliminate the poles of the regularized Feynman measure in order of their complexity. 
A quantum field theory satisfying the Wightman axioms [[@Streater section 3.1]]{} is determined by its Wightman distributions, which are given by linear maps $\omega_n : T^n \Gamma_c \omega \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$ from the tensor powers of the space of test functions for each $n$. We will follow H. J. Borchers [[@Borchers]]{} in combining the Wightman distributions into a Wightman functional $\omega : \text{$T \Gamma_c \omega \Phi$} \rightarrow {\ensuremath{\boldsymbol{C}}}$ on the tensor algebra $T \Gamma_c \omega \Phi$ of the space $\Gamma_c \omega \Phi$ of test functions (which is sometimes called a Borchers algebra or Borchers-Uhlmann algebra or BU-algebra). In order to accommodate composite operators we extend the algebra $T \Gamma_c \omega \Phi$ to the larger algebra $T \Gamma_c \omega SJ \Phi$, and to accommodate time ordered operators we extend it further to ${\ensuremath{\operatorname{TS}}} \Gamma_c \omega SJ \Phi$. In this set up it is clear how to accommodate perturbative quantum field theories: we just allow $\omega$ to take values in a space of formal power series ${\ensuremath{\boldsymbol{C}}}[[{\ensuremath{\boldsymbol{\lambda}}}]] ={\ensuremath{\boldsymbol{C}}}[[\lambda_1, \lambda_2, \ldots]]$ rather than ${\ensuremath{\boldsymbol{C}}}$.  For regularization $\omega$ sometimes takes values in a ring of meromorphic functions. There is one additional change we need: it turns out that the elements of $\Gamma_c \omega SJ \Phi$ do not really represent operators on a space of physical states, but are better thought of as operators that map a space of incoming states to a space of outgoing states, and vice versa. If we identify the space of incoming states with the space of physical states, this means that only products of an even number of operators of $S \Gamma_c \omega SJ \Phi$ act on the space of physical states. So the functional defining a quantum field theory is really defined on the subalgebra $T_0 S \Gamma_c \omega SJ \Phi$ of even degree elements. So the main goal of this paper is to construct a linear map from $T_0 S \Gamma_c \omega SJ \Phi$ to ${\ensuremath{\boldsymbol{C}}}[{\ensuremath{\boldsymbol{\lambda}}}]$ from a given Lagrangian, and to check that it satisfies analogues of the Wightman axioms. The space of physical states of the quantum field theory can be reconstructed from $\omega$ as follows. Let  $\omega : T \rightarrow C$ be a ${\ensuremath{\boldsymbol{R}}}$-linear map between real $*$-algebras. $\omega$ is called Hermitian if $\omega^{\ast} = \omega$, where $\omega^{\ast} (a^{\ast}) = \omega (a)^{\ast}$ $\omega$ is called positive if it maps positive elements to positive elements, where an element of a $*$-algebra is called positive if it is a finite sum of elements of the form $a^{\ast} a$. $\omega$ is called a state if it is positive and normalized by $\omega (1) = 1$ The left, right, or 2-sided kernel of $\omega$ is the largest left, right or 2-sided ideal closed under \* on which $\omega$ vanishes. The space of physical states of $\omega$ is the quotient of $T$ by the left kernel of $\omega$. Its sesquilinear form is $\left\langle a, b \right\rangle = \omega (a^{\ast} b)$, and its vacuum vector is the image of $1$. The algebra of physical operators of $\omega$ is the quotient of $T$ by the 2-sided kernel of $\omega$. The algebra of physical operators is a $*$-algebra of operators with a left action on the physical states. If $\omega$ is positive or Hermitian then so is the sesquilinear form $\left\langle, \right\rangle$. 
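A finite-dimensional toy of this construction, for the $*$-algebra of $2 \times 2$ complex matrices with the vector state $\omega (a) = a_{00}$, may help fix ideas (everything in the sketch is an illustrative assumption, not part of the construction in the text):

```python
# Toy "space of physical states": the *-algebra of 2x2 complex matrices with
# the state w(a) = a[0, 0].  Purely illustrative finite-dimensional example.
import numpy as np

basis = []
for i in range(2):
    for j in range(2):
        e = np.zeros((2, 2), dtype=complex)
        e[i, j] = 1.0
        basis.append(e)

def w(a):
    return a[0, 0]                       # the state

# Gram matrix <a, b> = w(a* b) of the sesquilinear form on the basis.
G = np.array([[w(a.conj().T @ b) for b in basis] for a in basis])

print(np.linalg.matrix_rank(G))          # 2: the physical states are 2-dimensional
print(np.allclose(G, G.conj().T))        # True: the form is Hermitian
```

Here the null space of the Gram matrix is the left kernel, so its rank is the dimension of the space of physical states, and positivity of $\omega$ is what makes the form positive semidefinite.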
When $\omega$ is Hermitian and positive and $C$ is the complex numbers, the left kernel of $\omega$ is the set of vectors $a$ with $\omega (a^{\ast} a) = 0$, and the definition of the space of physical states is essentially the GNS construction and is also the main step of the Wightman reconstruction theorem. In this case the completion of the space of physical states is a Hilbert space. The maps $\omega$ we construct are defined on the real vector space $T_0 S \Gamma_c \omega SJ \Phi$ and will initially be ${\ensuremath{\boldsymbol{R}}}$-linear. It is often convenient to extend them to be ${\ensuremath{\boldsymbol{C}}}[[{\ensuremath{\boldsymbol{\lambda}}}]]$-linear maps defined on $T_0 S \Gamma_c \omega SJ \Phi \otimes {\ensuremath{\boldsymbol{C}}}[[{\ensuremath{\boldsymbol{\lambda}}}]]$, in which case the corresponding space of physical states will be a module over ${\ensuremath{\boldsymbol{C}}}[[{\ensuremath{\boldsymbol{\lambda}}}]]$ and its bilinear form will be sesquilinear over ${\ensuremath{\boldsymbol{C}}}[[{\ensuremath{\boldsymbol{\lambda}}}]]$. The machinery of renormalization and regularization has little to do with perturbation theory or the choice of Lagrangian: instead, it is needed even for the construction of free field theories if we want to include composite operators. The payoff for all the extra work needed to construct the composite operators in a free field theory comes when we construct interacting field theories from free ones. The idea for constructing an interacting field theory from a free one is simple: we just apply a suitable automorphism (or endomorphism) of the algebra $T_0 S \Gamma_c \omega SJ \Phi$ to the free field state $\omega$ to get a state for an interacting field. For example, if we apply an endomorphism of the sheaf $\omega SJ \Phi$ then we get the usual field theories of normal ordered products of operators, which are not regarded as all that interesting. For any Lagrangian $L$ there is an infinitesimal automorphism of $T_0 S \Gamma_c \omega SJ \Phi$ that just multiplies elements of $S \Gamma_c \omega SJ \Phi$ by $iL$, which we would like to lift to an automorphism $e^{iL}$. The construction of an interacting quantum field theory from a Feynman measure $\omega$ and a Lagrangian $L$ is then given by the natural action $e^{- iL} \omega$ of the automorphism $e^{- iL}$ on the state $\omega$. The problem is that $e^{iL_I}$ is only defined if the interaction Lagrangian has infinitesimal coefficients, due to the fact that we only defined $\omega$ on polynomials times a Gaussian, so this construction only produces perturbative quantum field theories taking values in rings of formal power series.  This is essentially the problem of lifting a Lie algebra element $L_I$ to a group element $e^{iL_I}$, which is trivial for operators on finite dimensional vector spaces, but a subtle and hard problem for unbounded operators such as $L_I$ that are not self adjoint. This construction works provided the interacting part of the Lagrangian not only has infinitesimal coefficients but also has compact support. We show that the more general case of Lagrangians without compact support can be reduced to the case of compact support up to inner automorphisms, at least on globally hyperbolic spacetimes, by showing that infra-red divergences cancel. Up to isomorphism, the quantum field theory does not depend on the choice of Feynman measure or Lagrangian, but only on the choice of propagator. In particular, the interacting quantum field theory is isomorphic to a free one. 
This does not mean that interacting quantum field theories are trivial, because this isomorphism does not preserve the subspace of simple operators, so if one only looks at the restriction to simple operators, as in the Wightman axioms, one no longer gets an isomorphism between free and interacting theories. The difference between interacting and free field theories is that one chooses a different set of operators to be the “simple” operators corresponding to physical fields. The ultraviolet group also has a non-linear action on the space of infinitesimal Lagrangians. A quantum field theory is determined by the choice of a Lagrangian and a Feynman measure, and this quantum field theory is unchanged if the Feynman measure and the Lagrangian are acted on by the same renormalization. This shows why the choice of Feynman measure is not that important: if one chooses a different Feynman measure, it is the image of the first by a unique renormalization, and by applying this renormalization to the Lagrangian one still gets the same quantum field theory.  Roughly speaking, we show that these quantum field theories $e^{iL_I} \omega$ satisfy the obvious generalizations of Wightman axioms whenever it is reasonable to expect them to do so. For example, we will show that locality holds by showing that the state vanishes on the “locality ideal” of definition \[localityideal\], the quantum field theory is Hermitian if we start with Hermitian cut propagators and Lagrangians, and we get a (positive) state if we start with a positive (non-ghost) cut propagator. We cannot expect to get Lorentz invariant theories in general as we are working over a curved spacetime, but if we work over Minkowski space and choose Lorentz invariant cut propagators  then we get Lorentz invariant free quantum field theories. In the case of interacting theories Lorentz invariance is more subtle, even if the Lagrangian is Lorentz invariant. Lorentz invariance depends on the cancellation of infra-red divergences as we have to approximate the Lorentz invariant Lagrangian by non Lorentz invariant Lagrangians with compact support, and we can only show that infra-red divergences cancel up to inner automorphisms. This allows for the possibility that the vacuum is not Lorentz invariant, in other words Lorentz invariance may be spontaneously broken by infra-red divergences, at least if the theory has massless particles. (It seems likely that if there are no massless particles then infra-red divergences cancel and we recover Lorentz invariance, but I have been too lazy to check this in detail.) In the final section we discuss anomalies. Fujikawa [[@Fujikawa]]{} observed that anomalies arise from the lack of invariance of Feynman measures under a symmetry group, and we translate his observation into mathematical language. The definitions above generalize to the relative case where spacetime is replaced by a morphism $X \rightarrow Y$, whose fibers can be thought of as spacetimes parameterized by $Y$. For example, the sheaf of densities $\omega$ is replaced by the dualizing sheaf or complex $\omega_{X / Y}$. We will make no serious use of this generalization, though the section on regularization could be thought of as an example of this where $Y$ is the spectrum of a ring of meromorphic functions. The ultraviolet group ===================== We describe the structure of the ultraviolet group, and show that it acts simply transitively on the Feynman measures associated with a given propagator. 
\[uvgroupstructure\]The map taking a renormalization $\rho : S \omega SJ \Phi \rightarrow S \omega SJ \Phi$ to its composition with the natural map  $S \omega SJ \Phi \rightarrow S^1 \omega S^0 J \Phi = \omega$ identifies renormalizations with the elements of ${\ensuremath{\operatorname{Hom}}} (S^{} \omega SJ \Phi, \omega)$ that vanish on $S^0 \omega SJ \Phi$ and that are isomorphisms when restricted to $\omega = S^1 \omega S^0 J \Phi$. This is a variation of the dual of the fact that endomorphisms $\rho$ of a polynomial ring $R [x]$ correspond to polynomials $\rho (x)$, given by the image of the polynomial $x$ under the endomorphism $\rho$. It is easier to understand the dual result first, so suppose that $C$ is a cocommutative Hopf algebra and $\omega$ is a vector space (with $C$ acting trivially on $\omega$). Then the symmetric algebra $S \omega C = S (\omega \otimes C)$ is a commutative algebra acted on by $C$, and its endomorphisms (as a commutative algebra) correspond exactly to elements of ${\ensuremath{\operatorname{Hom}}} (\omega, S \omega C)$ because any such map lifts uniquely to a $C$-invariant map from $\omega$ to $\omega C$, which in turn lifts to a unique algebra homomorphism from $S \omega C$ to itself by the universal property of symmetric algebras. This endomorphism is invertible if and only if the map from $\omega$ to $\omega = S^1 \omega C^0$ is invertible, where $C^0$ is the vector space generated by the identity of $C$. To prove the theorem, we just take the dual of this result, with $C$ now given by $SJ \Phi$. There is one small modification we need to make in taking the dual result: we need to add the condition that the element of ${\ensuremath{\operatorname{Hom}}} (S \omega C, \omega)$ vanishes on $S^0 \omega C$ in order to get an endomorphism of $S \omega C$; this is related to the fact that endomorphisms of the polynomial ring $R [x]$ correspond to polynomials, but continuous endomorphisms of the power series ring $R [[x]]$ correspond to power series with vanishing constant term. The ultraviolet group preserves the increasing filtration $S^{\leqslant m} \omega SJ \Phi$ and so has a natural decreasing filtration by the groups $G_{\geqslant n}$, consisting of the renormalizations that fix all elements of $S^{\leqslant n} \omega SJ \Phi$. The group $G = G_{\geqslant 0}$ is the inverse limit of the groups $G / G_{\geqslant n}$, and the commutator of $G_{\geqslant m}$ and $G_{\geqslant n}$ is contained in $G_{\geqslant m + n}$, so in particular $G_{\geqslant 1}$ is an inverse limit of nilpotent groups $G_{\geqslant 1} / G_{\geqslant n}$. The group $G_{\geqslant n}$ is a semidirect product $G_{\geqslant n + 1} G_n$ of its normal subgroup $G_{\geqslant n + 1}$ with the group $G_n$, consisting of elements represented by elements of ${\ensuremath{\operatorname{Hom}}} (S^{} \omega SJ \Phi, \omega)$ that are the identity on $S^1 \omega SJ \Phi$ if $n > 0$, and vanish on $S^m \omega SJ \Phi$ for $m > 1$, $m \neq n + 1$. The group $G$ is $\ldots G_2 G_1 G_0$ in the sense that any element of $G$ can be written uniquely as an infinite product $\ldots g_2 g_1 g_0$ with $g_i \in G_i$, and conversely any such infinite product converges to an element of $G$. The convergence of this product follows from the facts that all elements $g_i$ preserve any space $S^{\leqslant m} \omega SJ \Phi$, and all but a finite number act trivially on it. 
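The factorization $\ldots g_2 g_1 g_0$ can be seen in the baby case mentioned in the proof above: continuous endomorphisms of a power series ring correspond to power series with vanishing constant term, and such a series with invertible linear term can be inverted under composition one degree at a time. The following is a toy sketch of that degree-by-degree inversion, not of the ultraviolet group itself.

```python
# Compositional inverse of f = x + (higher order), built degree by degree.
from sympy import symbols, series, expand

x = symbols('x')

def compose(f, g, order):
    """Truncated composition f(g(x)), keeping terms below x**order."""
    return series(f.subs(x, g), x, 0, order).removeO()

def compositional_inverse(f, order):
    """Inverse of f = x + h(x) (h of order >= 2) under composition."""
    h = expand(f - x)
    g = x
    for _ in range(order):                      # fixed-point iteration g = x - h(g),
        g = expand(x - compose(h, g, order))    # gaining one order of accuracy per step
    return g

f = x + x**2
g = compositional_inverse(f, 6)
print(g)                  # x - x**2 + 2*x**3 - 5*x**4 + 14*x**5, up to sympy's term order
print(compose(f, g, 6))   # x, up to the truncation order
```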
The fact that any element can be written uniquely as such an infinite product follows from the fact that $G / G_{\geqslant n}$ is essentially the product $G_{n - 1} \ldots G_2 G_1 G_0$. The natural map $$S \Gamma \omega SJ \Phi \rightarrow \Gamma S \omega SJ \Phi$$ is not an isomorphism, because on the left the symmetric algebra is taken over the reals, while on the right it is essentially taken over smooth functions on $M$. The action of renormalizations on $\Gamma S \omega SJ \Phi$ lifts to an action on $S \Gamma_c \omega SJ \Phi$ that preserves the coproduct, the coaction of $\Gamma SJ \Phi$, and the product of elements with disjoint support. A renormalization is given by a linear map from $\Gamma_c S \omega SJ \Phi$ to $\Gamma_c \omega$, which by composition with the map $S \Gamma_c \omega SJ \Phi \rightarrow \Gamma S \omega SJ \Phi$ and the “integration over $M$” map $\Gamma_c \omega \rightarrow {\ensuremath{\boldsymbol{R}}}$ lifts to a linear map from $S \Gamma_c \omega SJ \Phi$ to ${\ensuremath{\boldsymbol{R}}}$. This linear map has the special property that the product of any two elements with disjoint support vanishes, because it is multilinear over the ring of smooth functions. As in theorem \[uvgroupstructure\], the linear map gives an automorphism of $S \Gamma_c \omega SJ \Phi$ preserving the coproduct and the coaction of $\Gamma SJ \Phi$. As the linear map vanishes on products of disjoint support, the corresponding renormalization preserves products of elements with disjoint support. In general, renormalizations do not preserve products of elements of $S \Gamma_c \omega SJ \Phi$ that do not have disjoint support; the ones that do are those in the subgroup $G_0$. \[transitive\]The group of complex renormalizations acts simply transitively on the Feynman measures associated with a given cut local propagator. We first show that renormalizations $\rho$ act on Feynman measures $\omega$ associated with a given local cut propagator. We have to show that renormalizations preserve nondegeneracy, smoothness on the diagonal, and the Gaussian property. The first two of these are easy to check, because the value of $\rho (\omega)$ on any element is given by a finite sum of values of $\omega$ on other elements, so is smooth along the diagonal. To check that renormalizations preserve the Gaussian property $$\text{$\omega (AB) = \sum \omega (A') \Delta (A_{}'', B'') \omega (B')$}$$ we recall that renormalizations $\rho$ preserve products with disjoint support and also commute with the coaction of $SJ \Phi$. Since $A$ and $B$ have disjoint supports we have $\rho (AB) = \rho (A) \rho (B)$. Since $\rho$ commutes with the coaction of $SJ \Phi$, the image of $\rho (A)$ under the coaction of $SJ \Phi$ is $\sum \rho (A') \otimes A''$, and similarly for $B$. Combining these facts with the Gaussian property for $\rho (A) \rho (B)$ shows that $$\text{$\omega (\rho (AB)) = \sum \omega (\rho (A')) \Delta (A_{}'', B'') \omega (\rho (B'))$}$$ or in other words the renormalization $\rho$ preserves the Gaussian property. To finish the proof, we have to show that for any two normalized smooth Feynman measures $\omega$ and $\omega'$ with the same cut local propagator, there is a unique complex renormalization $g$ taking $\omega$ to $\omega'$. We will construct $g = \ldots g_2 g_1 g_0$ as an infinite product, with the property that $g_{n - 1} \ldots g_0 \omega$ coincides with $\omega'$ on $e^{iL_F} S^{\leqslant n} \Gamma_c \omega SJ \Phi$. Suppose that $g_0, \ldots, g_{n - 1}$ have already been constructed. 
By changing $\omega$ to  $g_{n - 1} \ldots g_0 \omega$ we may as well assume that they are all 1, and that $\omega$ and $\omega'$ coincide on  $e^{iL_F} S^{\leqslant n} \Gamma_c \omega SJ \Phi$. We have to show that there is a unique $g_n \in G_n$ such that $g_n \omega$ and $\omega'$ coincide on $e^{iL_F} S^{n + 1} \Gamma_c \omega SJ \Phi$. The difference  $\omega - \omega'$, restricted to $e^{iL_F} S^{n + 1} \Gamma_c \omega SJ \Phi$, is a continuous linear function on $e^{iL_F} S^{n + 1} \Gamma_c \omega SJ \Phi$, which we think of as a distribution. Moreover, since both $\omega$ and $\omega'$ are determined off the diagonal by their values on elements of smaller degree by the Gaussian property, this distribution has support on the diagonal of $M^{n + 1}$. Since $\omega$ and $\omega'$ both have the property that their wave front sets on the diagonal are orthogonal to the diagonal, the same is true of their difference $\omega - \omega'$, so the distribution is given by a map $e^{iL_F} S^{n + 1} \Gamma_c \omega SJ \Phi \rightarrow \omega$. By theorem \[uvgroupstructure\] this corresponds to some renormalization $g_n \in G_n$, which is the unique element of $G_n$ such that $g_n \omega$ and $\omega'$ coincide on $e^{iL_F} S^{n + 1} \Gamma_c \omega SJ \Phi$. Existence of Feynman measures ============================= We now prove theorem \[feynman existence\] showing the existence of at least one Feynman measure associated to any cut local propagator, by using regularization and renormalization. Regularization means that we construct a Feynman measure over a field of meromorphic functions, which will usually have poles at the point we are interested in, and renormalization means that we eliminate these poles by acting with a suitable meromorphic renormalization. \[bernsteinpolynomial\] If $f_1, \ldots, f_m$ are polynomials in several variables, then there are non-zero (Bernstein-Sato) polynomials $b_i$ and differential operators $D_i$ such that $$b_i (s_1, \ldots, s_m)\, f_1 (z)^{s_1} \ldots f_m (z)^{s_m} = D_i (z) \left( f_i (z) f_1 (z)^{s_1} \ldots f_m (z)^{s_m} \right) \,\,.$$ Bernstein’s proof [[@Bernstein]]{} of this theorem for the case $m = 1$ also works for any $m$ after making the obvious minor changes, such as replacing the field of rational functions in one variable $s_1$ by the field of rational functions in $m$ variables. \[bernsteincontinuation\]If $f_1, \ldots, f_m$ are polynomials in several variables then for any choice of continuous branches of the multivalued functions, $f_1 (z)^{s_1} \ldots f_m (z)^{s_m}$ can be analytically continued from the region where all $s_j$ have positive real part to a meromorphic distribution-valued function for all complex values of  $s_1, \ldots, s_m$. This follows by using the functional equation of lemma \[bernsteinpolynomial\] to repeatedly decrease each $s_j$ by 1, just as in Bernstein’s proof [[@Bernstein]]{} for the case $m = 1$. \[regularization\]Any cut local propagator $\Delta$ has a regularization, in other words a Feynman measure with values in a ring of meromorphic functions whose cut propagator at some point is $\Delta$. The following argument is inspired by the one in Etingof [[@Etingof]]{}. By using a locally finite smooth partition of unity, which exists since we assume that spacetime is metrizable, we can reduce to showing that a regularization exists locally. 
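Two textbook instances of such functional equations, included only for orientation (they are not taken from the text): for $f (x) = x$ in one variable and for $f (x) = x_1^2 + \cdots + x_n^2$ one has
$$\partial_x\, x^{s + 1} = (s + 1)\, x^s, \qquad \sum_{i = 1}^n \partial_{x_i}^2\, f^{s + 1} = 4 (s + 1) \bigl(s + \tfrac{n}{2}\bigr)\, f^s,$$
so the Bernstein-Sato polynomials are $s + 1$ and (up to normalization) $(s + 1) (s + n / 2)$; dividing by them and iterating is exactly what produces the meromorphic continuation of the corollary above, with the possible poles confined to the zeros of these polynomials shifted down by nonnegative integers.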
If a local propagator is smooth, it is easy to construct a Feynman measure for it, just by defining it as a sum of products of Feynman propagators. Now suppose that we have a meromorphic family of local propagators $\Delta_d$ depending on real numbers $d_i$, given in local coordinates by a finite sum of boundary values of terms of the form $$s (x, y) p_1 (x, y)^{d_1} \ldots p_k (x, y)^{d_k} \log (p_{k + 1} (x, y)) \ldots$$ where $s$ is smooth in $x$ and $y$, and the $p_i$ are polynomials, and where we choose some branch of the powers and logarithms in each region where they are non-zero. In this case the Feynman measure can also be defined as a meromorphic function of $d$ for all real $d$. To prove this, we can forget about the smooth function $s$ as it is harmless, and we can eliminate the logarithmic terms by writing $\log (p)$ as $\frac{d}{dt} p^t$ at $t = 0$. For any fixed number of fields with derivatives of fixed order, the corresponding distribution is defined when all variables $d_i$ have sufficiently large real part, because the product of the propagators is smooth enough to be defined in this case. But this distribution is given in local coordinates by the product of the $d_i$’th powers of  polynomials of $x$ and $y$. By Bernstein’s corollary \[bernsteincontinuation\] these products can be continued as a meromorphic distribution-valued function of the $d_i$ to all complex $d_i$. This gives a Feynman measure with values in the field of meromorphic functions in several variables, and by restricting functions to the diagonal we get a Feynman measure whose values are meromorphic functions in one variable. Dimensional regularization. Over Minkowski space of dimension $d$, there is a variation of the construction of a meromorphic Feynman measure, which is very similar to dimensional regularization. In dimensional regularization, one formally varies the dimension of spacetime, to get Feynman diagrams that are meromorphic functions of the dimension of spacetime. One way to make sense out of this is to keep the dimension of spacetime fixed, but vary the propagator of the free field theory, by considering it to be a meromorphic function of a complex number $d$. The propagator for a Hermitian scalar field, considered as a distribution of $z$ in Minkowski space, can be written as a linear combination of functions of the form $$K_{d / 2 - 1} (c \sqrt{(z, z)}) / \sqrt{(z, z)}^{d / 2 - 1}$$ where $K_{\nu} (z)$ is a multi-valued modified Bessel function of the third kind, and where we take a suitable choice of branch (depending on whether we are considering a cut or a Feynman propagator). A similar argument using Bernstein’s theorem shows that this gives a Feynman measure that is analytic in $d$ for $d$ with large real part and that can be analytically continued as a meromorphic function to all complex $d$. This gives an explicit example of a meromorphic Feynman measure for the usual propagators in Minkowski space. \[renormalizemeasure\]Any meromorphic Feynman measure can be made holomorphic by acting on it with a meromorphic renormalization. This is essentially the result that a bare quantum field theory can be made finite by an infinite renormalization.  Suppose that $\omega$ is a meromorphic Feynman measure. Using the same idea as in theorem \[transitive\] we will construct a meromorphic renormalization $g = \ldots g_2 g_1 g_0$ as an infinite product, but this time we choose $g_n \in G_n$  to kill the singularities of order $n + 1$.  
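A toy sketch of this order-by-order removal of poles, in a single formal coupling $\lambda$ and a stand-in regulator $\varepsilon$ (purely illustrative; this is not the action of the ultraviolet group on an actual Feynman measure):

```python
# Order-by-order pole subtraction ("minimal subtraction" style), illustrative only.
from sympy import symbols, expand, series, simplify

eps, lam = symbols('eps lam')

def pole_part(f, max_pole=10):
    """Negative-power part of the Laurent expansion of f at eps = 0."""
    g = series(expand(f) * eps**max_pole, eps, 0, max_pole).removeO()
    return sum(g.coeff(eps, k) * eps**(k - max_pole) for k in range(max_pole))

# A "regularized" quantity: meromorphic in eps at each order in lam.
omega_reg = 1 + lam*(2 + 1/eps) + lam**2*(5 + 3/eps + 1/eps**2)

# Counterterms remove the poles order by order in lam.
counter = sum(-pole_part(expand(omega_reg).coeff(lam, n)) * lam**n for n in range(3))
print(simplify(omega_reg + counter))    # 5*lam**2 + 2*lam + 1, finite as eps -> 0
```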
The key point is to prove that these lowest order singularities are “local”, meaning that they have support on the diagonal. (In the special case of translation-invariant theories on Minkowski spacetime  this becomes the usual condition that they are “polynomials in momentum”, or more precisely that their Fourier transforms are essentially polynomials in momentum on the subspace with total momentum zero).  The locality follows from the Gaussian property of $\omega$, which determines $\omega$ at each order in terms of smaller orders except on the diagonal. In particular if $\omega$ is nonsingular at all orders at most $n$, then the singular parts of the order $n + 1$ terms all have support on the diagonal. Since the difference is smooth along the diagonal, we can find some $g_n \in G_n$ that kills off the order $n + 1$ singularities,  as in  theorem \[transitive\]. Since renormalizations preserve the Gaussian property we can keep on repeating this indefinitely, killing off the singularities in order of their order. The famous problem of “overlapping divergences” is that the counter-terms for individual Feynman diagrams used for renormalization sometimes contain non-polynomial (logarithmic) terms in the momentum, which bring renormalization to a halt unless they miraculously cancel when summed over all Feynman diagrams. This problem is avoided in the proof above because by using the ultraviolet group we only need to handle the divergences of lowest order at each step, where it is easy to see that the  logarithmic terms cancel. \[feynman existence\]For any cut local propagator there is a Feynman measure associated to it. This follows from theorem \[regularization\], which uses regularization to show that there is a meromorphic Feynman measure, and theorem \[renormalizemeasure\] which uses renormalization to show that the poles of this can be eliminated. Subgroups of the ultraviolet group ================================== There are many additional desirable properties that one can impose on Feynman measures, such as being Hermitian, or Lorentz invariant, or normal ordered, and there is often a subgroup of the ultraviolet group that acts transitively on the measures with the given property. We give several examples of this. A Feynman measure can be normalized so that on $S^1 \Gamma_c \omega S^{^0} J \Phi = \Gamma_c \omega$ its value is given by integrating over spacetime (in other words $g = 1$ in definition \[feynman measure\]), by acting on it by a unique element of the ultraviolet group consisting of renormalizations in $G_0$ that are trivial on $\omega S^{> 0} J \Phi$. This group can be identified with the group of nowhere-vanishing smooth complex functions on spacetime. The complementary normal subgroup of the ultraviolet group consists of the renormalizations that fix all elements of  $\omega S^0 J \Phi = \omega$, and this acts simply transitively on the normalized Feynman measures. In practice almost any natural Feynman measure one constructs is normalized. Normal ordering. In terms of Feynman diagrams, “normal ordering” means roughly that Feynman diagrams with an edge from a vertex to itself are discarded. We say that a Feynman measure is normally ordered if it vanishes on $\Gamma_c \omega S^{> 0} J \Phi$. Informally,  $\omega S^{> 0} J \Phi$ corresponds to Feynman diagrams with just one point and edges from this point to itself. We will say that a renormalization is normally ordered if it fixes all elements of $\omega S^{> 0} J \Phi$. 
The subgroup of normally ordered renormalizations acts transitively on the normally ordered Feynman measures. The group of all renormalizations is the semidirect product of its normal subgroup $G_{> 0}$ of normally-ordered renormalizations with the subgroup $G_0$ preserving all products. For any renormalization, there is a unique element of $G_0$ that takes it to a normally ordered renormalization. The Feynman measures constructed by regularization (in particular those constructed by dimensional regularization) are usually normally ordered if the spacetime has positive dimension, but are usually not for 0-dimensional spacetimes. This is because the propagators tend to contain a factor such as $(x - y)^{- 2 d}$ which vanishes for large $- d$ when $x = y$, and so vanishes on Feynman diagrams with just one point for all $d$ by analytic continuation. So for most purposes we can restrict to normally-ordered Feynman measures and normally-ordered renormalizations, at least for spacetimes of positive dimension. Normalization of Feynman propagators. In general a renormalization fixes the cut propagator but  can change the Feynman propagator, by adding a distribution with support on the diagonal. However there is often a canonical choice of Feynman propagator: the one with a singularity on the diagonal of smallest possible order, which will often also be a Green function for some differential operator. We can add the condition that the Feynman propagator of a Feynman measure should be this canonical choice; the subgroup of renormalizations fixing the Feynman propagator, consisting of renormalizations fixing $S^2 \omega J \Phi$, acts simply transitively on these Feynman measures. \[simple operator\]Simple operators. More generally, there is a subgroup consisting of renormalizations $\rho$ such that $\rho (aB) = \rho (a) \rho (B)$ whenever $a$ is simple (involving only one field), but where $B$ is arbitrary. This stronger condition is useful because it says (roughly) that simple operators containing only one field do not get renormalized; see the discussion in section \[interactingqft\].  We can find a set of Feynman measures acted on simply transitively by this group by adding the condition that $$\omega (aB) = \sum \Delta_F (aB_1) \omega (B_2)$$ whenever $a$ is simple and $\sum B_1 \otimes B_2$ is the coproduct of $B$. This relation holds whenever $a$ and $B$ have disjoint supports by definition of a Feynman measure, so the extra condition says that it also holds even when they have overlapping supports. The key point is that the product of distributions above is always defined because any non-zero element of the wave front set of $\Delta_F$ is of the form $(p, - p)$. This would not necessarily be true if $a$ were not simple because we would get products of more than 1 Feynman propagator whose singularities might interfere with each other. In terms of Feynman diagrams, this says that vertices with just one edge are harmless: more precisely, with this normalization, adding a vertex with just one edge to a Feynman diagram has the effect of multiplying its value by the Feynman propagator of the edge. As this condition extends the Gaussian property to more Feynman diagrams, it can also be thought of as a strengthening of the translation invariance property of the Feynman measure. \[dyson\]Dyson condition. Classically, Lagrangians were called renormalizable if all their coupling constants have non-negative mass dimension. 
The filtration on Lagrangian densities by mass dimension induces a similar filtration on Feynman measures and renormalizations. The Feynman measures of mass dimension $\leqslant 0$ are acted on simply transitively by the renormalizations of mass dimension $\leqslant 0$. This is useful, because the renormalizations of mass dimension at most 0 act on the spaces of Lagrangian densities of mass dimension at most 0, and these often form finite dimensional spaces, at least if some other symmetry conditions such as Lorentz invariance are added. For example, in dimension 4 the density has dimension $- 4$, so the (Lorentz-invariant) terms of the Lagrangian density of mass dimension at most 0 are given by (Lorentz-invariant) terms of the Lagrangian of mass dimension at most 4, such as $\varphi^4$, $\varphi^2$, $\partial \varphi \partial \varphi$, and so on: the usual Lorentz-invariant even terms whose coupling constants have mass dimension at least 0. For example, we get a three-dimensional space of theories of the form $\lambda \varphi^4 + m \varphi^2 + z \partial \varphi \partial \varphi$ in this way, giving the usual $\varphi^4$ theory in 4 dimensions. Boundary terms. The Feynman measures constructed in section 3 have the property that they vanish on “boundary terms”. This means that we quotient the space of local Lagrangians $\Gamma_c \omega SJ \Phi$ by its image under the action of smooth vector fields such as $\partial / \partial x_i$, or in other words we replace the space of $n$-forms by the corresponding de Rham cohomology group. These measures are acted on simply transitively by renormalizations corresponding to maps that vanish on boundary terms. This is useful in gauge theory, because some symmetries such as the BRST symmetry are only symmetries up to boundary terms. Symmetry invariance. Given a group (or Lie algebra) $G$ such as a gauge group acting on the sheaf $\Phi$ of classical fields and preserving a given cut propagator, the subgroup of $G$-invariant renormalizations acts simply transitively on the $G$-invariant Feynman measures with given cut propagator. In general there need not exist any $G$-invariant Feynman measure associated with a given cut local propagator, though if there is then $G$-invariant Lagrangians lead to $G$-invariant quantum field theories. The obstructions to finding a $G$-invariant measure are cohomology classes called anomalies, and are discussed further in section \[anomalies\]. Lorentz invariance. An important case of invariance under symmetry is that of Poincare invariance for flat Minkowski space. In this case the spacetime $M$ is Minkowski space, the Lie algebra $G$ is that of the Poincare group of spacetime translations and Lorentz rotations, and the cut propagator is one of the standard ones for free field theories of fields of finite spin. Then dimensional regularization is invariant under $G$, so we get a Feynman measure invariant under the Poincare group, and in particular there are no anomalies for the Poincare algebra. The elements of the ultraviolet group that are Poincare invariant act simply transitively on the Feynman measures for this propagator that are Poincare invariant. If we pick any such measure, then we get a map from invariant Lagrangians to invariant quantum field theories. Hermitian conditions. The group of complex renormalizations has a real form, consisting of the subgroup of (real) renormalizations. This acts simply transitively on the Hermitian Feynman measures associated with a given cut local propagator.
The Hermitian Feynman measures (or propagators) are not the real-valued ones, but satisfy a more complicated Hermitian condition described in definition \[hermitianmeasure\]. \[freeqft\]The free quantum field theory ======================================== We extend the Feynman measure $\omega : e^{iL_F} S \Gamma_c \omega SJ \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$, which is something like a measure on classical fields, to $\omega : Te^{iL_F} S \Gamma_c \omega SJ \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$. This extension, restricted to the even degree subalgebra $T_0 e^{iL_F} S \Gamma_c \omega SJ \Phi$, is the free quantum field theory. We check that it satisfies analogues of the Wightman axioms. Formulas involving coproducts can be confusing to write down and manipulate. They are much simpler for the “group-like” elements $g$ satisfying $\Delta (g) = g \otimes g$, $\eta (g) = 1$, which form a group in any cocommutative Hopf algebra. One problem is that most of the Hopf algebras we use do not have enough group-like elements over fields: in fact for symmetric algebras the only group-like element is the identity. However they have plenty of group-like elements if we add some nilpotent elements to the base field, such as $\exp (\lambda a)$ for any primitive $a$ and nilpotent $\lambda$ (in characteristic 0). We will adopt the convention that when we talk about group-like elements, we are tacitly allowing extensions of the base ring by nilpotent elements. Recall that $Te^{iL_F} S \Gamma_c \omega SJ \Phi$ is the tensor algebra of $e^{iL_F} S \Gamma_c \omega SJ \Phi$, with the product denoted by $\otimes$ to avoid confusing it with the product of $S \Gamma_c SJ \Phi$. We denote the identity of $S \Gamma_c SJ \Phi$ by $1$, and the identity of ${\ensuremath{\operatorname{TS}}} \Gamma_c \omega SJ \Phi$ by $1_T$. The involution $\ast$ is defined by $(A_1 \otimes \ldots \otimes A_n)^{\ast} = A_n^{\ast} \otimes \ldots \otimes A_1^{\ast}$, and $\ast$ is $- 1$ on $\Gamma_c \omega SJ \Phi$. \[extendomega\]If $\omega : e^{iL_F} S \Gamma_c \omega SJ \Phi \rightarrow C$ is a Feynman measure then there is a unique extension of $\omega$ to $Te^{iL_F} S \Gamma_c \omega SJ \Phi$ such that Gaussian condition: if $A, B_1, \ldots, B_m$ are group-like then $$\begin{aligned} & & e^{- iL_F} \omega (A_{} \otimes B_m \otimes \ldots \otimes B_1)\\ & = & \sum e^{- iL_F} \omega (A_{} \otimes 1 \otimes \ldots \otimes 1) \Delta (A, B_m \ldots B_1) e^{- iL_F} \omega (B_m \otimes \ldots \otimes B_1) \end{aligned}$$   Both sides are considered as densities, as in definition \[feynman measure\]. $e^{- iL_F} \omega (A \otimes A \otimes 1 \otimes \ldots \otimes 1) = 1$  for $A$ group-like (Cutkosky condition; see[[@hooft section 6]]{}.) We first check that all the products of distributions are well defined by examining their wave front sets. All the distributions appearing have the property that their wave front sets have no positive or negative elements. This follows by induction on the complexity of an element: if all smaller elements have this property, it implies that the products defining it are well defined, and also implies that it has the same property. Existence and uniqueness of $\omega$ follows because the Cutkosky condition defines it on elements of the form $A \otimes 1 \otimes 1 \otimes \ldots \otimes 1$ in terms of those of the form $A \otimes 1 \otimes \ldots \otimes 1$, and the Gaussian condition then determines it on all elements.   We can also define $\omega$ directly as follows. 
When the propagator is sufficiently regular then the Gaussian condition means that we can write $\omega$ on $e^{iL_F} S \Gamma_c \omega SJ \Phi$ as a sum over all ways of joining up the fields of an element of $e^{iL_F} S \Gamma_c \omega SJ \Phi$ in pairs, where we take the propagator of each pair and multiply these together. This is of course essentially the usual sum over Feynman diagrams. A minor difference is that we do not distinguish between “internal” vertices associated with a Lagrangian and integrated over all spacetime, and “external” vertices associated with a field and integrated over a compact set: all vertices are associated with a composite operator that may be a Lagrangian or a simple field or a more general composite operator, and all vertices are integrated over compact sets as all coefficients are assumed to have compact support. Similarly we can define the extension of $\omega$ to $Te^{iL_F} S \Gamma_c \omega SJ \Phi$ by writing the distributions defining $\omega$ as a sum over more complicated Feynman diagrams whose vertices are in addition labeled by non-negative integers, such that

- The propagators from $A_i$ to $A_i$ are Feynman propagators.

- The propagators from $A_i$ to $A_j$ for $i < j$ are cut propagators $\Delta$, with positive wave front sets on $i$ and negative wave front sets on $j$.

- The diagram is multiplied by a factor of $(- 1)^{\deg (A_2 A_4 A_6 \ldots)}$; in other words we apply $\ast$ to $A_2$, $A_4, \ldots$.

In general if the propagator is not sufficiently regular (so that products of propagators might not be defined when some points coincide) we can construct $\omega$ by regularization and renormalization as in section 3, which preserves the conditions defining $\omega$. Now we show that $\omega$ satisfies the locality property of quantum field theories (operators with spacelike-separated supports commute) by showing that it vanishes on the following locality ideal. \[localityideal\]$T_0 S \Gamma_c \omega SJ \Phi$ is the subalgebra of even degree elements of $T S \Gamma_c \omega SJ \Phi$. The locality ideal is the 2-sided ideal of $T_0 S \Gamma_c \omega SJ \Phi$ spanned by the coefficients of elements of the form $$\ldots \otimes Y_1 \otimes ABD \otimes DBC \otimes X_n \otimes \ldots \otimes X_1 - \ldots \otimes Y_1 \otimes AD \otimes DC \otimes X_n \otimes \ldots \otimes X_1$$ (for $A, C \in S \Gamma_c \omega SJ \Phi$ and $B, D \in S \Gamma_c \omega SJ \Phi [[{\ensuremath{\boldsymbol{\lambda}}}]]$ with $B, D$ group-like) if $n$ is even and there are no points in the support of $B$ that are $\leqslant$ any points in the support of $A$ or $C$, or if $n$ is odd and there are no points in the support of $B$ that are $\geqslant$ any points in the support of $A$ or $C$. The algebra $T_0 e^{iL_F} S \Gamma_c \omega SJ \Phi$ and its locality ideal are defined in the same way. The map $\omega$ on $T_0 e^{iL_F} S \Gamma_c \omega SJ \Phi$ depends on the choice of Feynman measure. We can define a canonical map independent of the choice of Feynman measure by taking the underlying $*$-algebra to have elements represented by pairs $(\omega, A)$ for a Gaussian measure $\omega$ and $A \in T_0 e^{iL_F} S \Gamma_c \omega SJ \Phi$, where we identify $(\omega, A)$ with $(\rho \omega, \rho A)$ for any renormalization $\rho$. The canonical state, also denoted by $\omega$, then takes an element represented by $(\omega, A)$ to $\omega (A)$. \[localideal\]$\omega$ vanishes on the locality ideal.
We use the notation of definition \[localityideal\]. We prove this for elements with $n$ even; the case $n$ odd is similar. We can assume that the propagator $\Delta$ is sufficiently regular, as we can obtain the general case from this by regularization and renormalization. We will first do the special case when $D = 1$. We can assume that $B = b_1 \ldots b_k$ is homogeneous of some order $k$ and write $B_I$ for $\prod_{j \in I} b_j$. If $k = 0$ then the result is obvious as $B$ is constant and both sides are the same, so we can assume that $k > 0$. We show that if $k > 0$ then $\omega$ vanishes on $$\sum_{I \cup J =\{1, \ldots, k\}} (- 1)^{|I|} \ldots \otimes Y_1 \otimes AB_I \otimes B_J C \otimes X_n \otimes \ldots \otimes X_1$$ by showing that the terms cancel out in pairs. This is because if $j$ is the index for which the support of $b_j$ is maximal then $\omega$ has the same value on $\ldots \otimes Y_1 \otimes AB_I b_j \otimes B_J C \otimes X_n \otimes \ldots \otimes X_1$ and $\ldots \otimes Y_1 \otimes AB_I \otimes b_j B_J C \otimes X_n \otimes \ldots \otimes X_1$. Now we do the case of general $D$. We can assume that the support of $D$ is either $\leqslant$ all points of the support of $B$ or there are no points of it that are $\leqslant$ any points in the support of $A$ or $C$. In the first case the result follows from the special case $D = 1$ by replacing $A$ and $C$ by $AD$ and $CD$. In the second case it follows from 2 applications of the special case $D = 1$, replacing $B$ by $D$ and $BD$, that both terms are equal to $\ldots \otimes Y_1 \otimes A \otimes C \otimes X_n \otimes \ldots \otimes X_1$ and are therefore equal. This proof, in the special case that $\omega$ vanishes on $B \otimes B - 1 \otimes 1$ for $B$ group-like, is more or less the proof of unitarity of the S-matrix using the “largest time equation” given in [[@hooft section 6]]{}. The locality ideal is not the largest ideal on which $\omega$ vanishes, as $\omega$ also vanishes on $A \otimes 1 \otimes 1 \otimes B - A \otimes B$; in other words we can cancel pairs $1 \otimes 1$ wherever they occur. \[commute mod local\]Elements of $T_0 S \Gamma_c SJ \Phi$ with spacelike-separated supports commute modulo the locality ideal. It is sufficient to prove this for group-like degree 2 elements, as if two even degree elements have spacelike-separated supports then they are polynomials in degree 2 elements with spacelike separated supports. We will work modulo the locality ideal. Suppose that the supports of the group-like elements $W \otimes X \otimes Z$ and $Y$ are spacelike-separated. Then applying theorem \[localideal\] twice gives $$W \otimes X \otimes YZ = W Y \otimes X Y \otimes Y Z = W Y \otimes X \otimes Z$$ Applying this 4 times for various values of $W$, $X$, $Y$, and $Z$ shows that if $A \otimes B$ and $C \otimes D$ are group-like and have spacelike separated supports, then $$A \otimes B \otimes C \otimes D = AC \otimes B \otimes I \otimes D = AC \otimes I \otimes I \otimes BD = AC \otimes D \otimes I \otimes B = C \otimes D \otimes A \otimes B$$ so $A \otimes B$ and $C \otimes D$ commute. Now we study when the quantum field theory $\omega$ is Hermitian, and show that we can find a Hermitian quantum field theory associated to any Hermitian local cut propagator, and show that the group of real renormalizations acts transitively on them.
\[hermitianmeasure\]We say that a Feynman measure $\omega$ is Hermitian if its extension to $T S \Gamma_c \omega SJ \Phi$ is Hermitian when restricted to the even subalgebra $T_0 S \Gamma_c \omega SJ \Phi$. If the local cut propagator $\Delta$ is Hermitian, then it has a Hermitian Feynman measure associated with it. We can assume that the regularization of $\Delta$ is also Hermitian, by replacing it by the average of itself and its Hermitian conjugate. We can check directly that the meromorphic family of Feynman measures associated to this Hermitian regularization is Hermitian on $T_0 S \Gamma_c \omega SJ \Phi$ (but not on the whole of $T S \Gamma_c \omega SJ \Phi$); in other words $\omega (A_n \otimes \ldots \otimes A_1) = \omega (A_1^{\ast} \otimes \ldots \otimes A_n^{\ast})^{\ast}$ if $n$ is even. For example, we get a sign factor of $(- 1)^{\deg (A_2) + \deg (A_4) + \ldots}$ in the definition of $\omega$ on the first term, a sign factor of $(- 1)^{\deg (A_1) + \deg (A_3) + \ldots}$ from the definition of $\omega$ for the second term, whose quotient is the factor $(- 1)^{\deg (A_1) + \deg (A_2) + \ldots}$ coming from the action of $\ast$ on $A_n \otimes \ldots \otimes A_1$ because $n$ is even. We can then renormalize using real renormalizations to eliminate the poles, and the resulting Feynman measure will be Hermitian. If a Feynman measure $\omega$ is Hermitian and $\rho$ is a complex renormalization, then $\rho (\omega)$ is Hermitian if and only if $\rho$ is real. In particular the subgroup of (real) renormalizations acts simply transitively on the Hermitian Feynman measures associated with a given cut local propagator. This follows from $\rho (\omega)^{\ast} = \rho^{\ast} (\omega^{\ast})$, and the fact that complex renormalizations act simply transitively on Feynman measures associated with a given cut local propagator. Next we show that $\omega$ is a state (in other words the space of physical states is positive definite) when the cut propagator $\Delta$ is positive, by using a representation of the physical states as a space of distributions. We define the space $H_n$ of $n$-particle states to be the space of continuous linear maps $S^n \Gamma \omega \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$ (considered as compactly supported symmetric distributions on $M^n$) whose wave front sets have no positive or negative elements, with a sesquilinear form given by $$\langle a, b \rangle = \int_{x, y \in M^n} a (x_1, \ldots) \prod_j \Delta (x_j, y_j) b (y_1, \ldots)^{\ast} dxdy.$$ This is similar to the usual definition of the inner product on the space of states of a free field theory, except that we are using distributions rather than smooth functions. We check this is well defined. To show the product of distributions in the integral is defined we need to check that no sum of non-zero elements of the wave front sets is zero, and this follows because nonzero elements of the wave front set of the product of propagators are of the form $(p, q)$ with $p > 0$ and $q < 0$, but $a$ and $b$ by assumption have no positive or negative elements in their wave front sets. The integral over $M^n$ is well defined because $a$ and $b$ have compact support.
There is a map $f$ from $T_0 S \Gamma_c \omega SJ \Phi$ to the orthogonal direct sum $\oplus H_n$ with $$\omega (AB) = \langle f (A^{\ast}), f (B) \rangle .$$ By theorem \[extendomega\], $\omega (AB)$ is given by $$\sum \omega (A') \Delta (A'', B'') \omega (B')$$ where $\sum A' \otimes A''$ is the image of $A$ under the coaction of $\Gamma_c SJ \Phi$. This is equal to $\langle f (A^{\ast}), f (B) \rangle$ if we define $f (A)$ as follows. Suppose that $A = A_{11} A_{12} \ldots \otimes A_{21} A_{22} \ldots$, and let the image of $A_{jk}$ under the coaction of $\Gamma_c SJ \Phi$ be $\sum A_{jk}' \otimes A_{jk}''$. Then $\omega (A'_{11} A_{12}' \ldots \otimes A_{21}' A_{22}' \ldots)$ can be regarded as a distribution on $M^n$, where $n$ is the total number of elements $A_{jk}$. On the other hand, $A''_{11} A''_{12} \ldots A''_{21} A''_{22} \ldots$ is a function on $M^m$, where $m$ is the sum of the degrees of the elements $A''_{jk}$, in other words the number of fields occurring in them. There is also a map from $m$ to $n$, which induces a map from $M^n$ to $M^m$, and so by push-forward of densities a map from densities on $M^n$ to densities on $M^m$. The image $f (A)$ is then given by taking the push-forward from $M^n$ to $M^m$ of the compactly supported distribution $\omega (A'_{11} A_{12}' \ldots \otimes A_{21}' A_{22}' \ldots)$ on $M^n$, multiplying by the function $A''_{11} A''_{12} \ldots A''_{21} A''_{22} \ldots$ on $M^m$, symmetrizing the result, and repeating this for each summand of $\sum A_{jk}' \otimes A_{jk}''$. If the cut local propagator $\Delta$ is positive, then $\omega : Te^{iL_F} S \Gamma_c \omega SJ \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}$ is a state. This follows from the previous lemma, because if $\Delta$ is positive then so is the sesquilinear form $\langle, \rangle$ on $H_n$, and therefore $\omega (A^{\ast} A) = \langle f (A), f (A) \rangle \geqslant 0$. \[interactingqft\]Interacting quantum field theories ==================================================== We construct the quantum field theory of a Feynman measure and a compactly supported Lagrangian, by taking the image of the free field theory $\omega$ under an automorphism $e^{iL_I}$ where $L_I$ is the interaction part of the Lagrangian. This automorphism is only well defined if the interaction Lagrangian $L_I$ has infinitesimal coefficients, so the interacting quantum field theories we construct are perturbative theories taking values in rings of formal power series ${\ensuremath{\boldsymbol{C}}}[{\ensuremath{\boldsymbol{\lambda}}}] ={\ensuremath{\boldsymbol{C}}}[\lambda_1, \ldots]$ in the coupling constants $\lambda_1, \ldots$. (By “infinitesimal” we mean elements of formal power series rings with vanishing constant term.) We then lift the construction to all actions (possibly without compact support) by showing that infra-red divergences cancel up to inner automorphisms. The Hopf algebra $S \Gamma_c \omega SJ \Phi$ acts on the algebra $T_0 S \Gamma_c \omega SJ \Phi$, and maps the locality ideal to itself. Group-like Hermitian elements of the Hopf algebra $S \Gamma_c \omega SJ \Phi [[{\ensuremath{\boldsymbol{\lambda}}}]]$ preserve the subset of positive elements, and therefore act on the space of states of $T_0 S \Gamma_c \omega SJ \Phi [[{\ensuremath{\boldsymbol{\lambda}}}]]$. Group-like elements are algebra automorphisms, and if they are also Hermitian they commute with the involution $\ast$.
In particular group-like Hermitian elements preserve the set of positive elements (generated by positive linear combinations of elements of the form $a^{\ast} a$), and so map positive linear forms to positive linear forms. The quantum field theory of a Lagrangian $L = L_F + L_I$, where $L_I$ has compact support and infinitesimal coefficients, is $e^{- iL} \omega : T_0 S \Gamma_c \omega SJ \Phi \rightarrow {\ensuremath{\boldsymbol{C}}}[[{\ensuremath{\boldsymbol{\lambda}}}]]$. The Hopf algebra $S \Gamma_c \omega SJ \Phi$ acts on the vector space $S \Gamma_c \omega SJ \Phi$ by multiplication, so group-like elements of the form $e^{iL_F + iL_I}$ take $S \Gamma_c \omega SJ \Phi$ to $e^{iL_F} S \Gamma_c \omega SJ \Phi$ and $T_0 S \Gamma_c \omega SJ \Phi$ to $T_0 e^{iL_F} S \Gamma_c \omega SJ \Phi$. Since $\omega$ is in the dual of $T_0 e^{iL_F} S \Gamma_c \omega SJ \Phi$, this shows that $e^{- iL} \omega$ is in the dual of $T_0 S \Gamma_c \omega SJ \Phi$. (Locality) Elements of $T_0 S \Gamma_c \omega SJ \Phi$ with spacelike-separated supports commute when acting on the space of physical states of $e^{- iL} \omega$. By theorem \[localideal\] the operators of the locality ideal act trivially on the space of physical states of $\omega$. Since $e^{- iL}$ preserves the locality ideal, the locality ideal also acts trivially on the space of physical states of $e^{- iL} \omega$. By lemma \[commute mod local\] this implies that operators with spacelike separated supports commute on this space. This constructs the quantum field theory of a Lagrangian whose interaction part has compact support (and is infinitesimal). We now extend this to the case when the interaction part need not have compact support. We do this by using a cutoff function to give the Lagrangian compact support, and then try to show that the result is independent of the choice of cutoff function, provided it is 1 in a sufficiently large region. To do this we need to assume that spacetime is globally hyperbolic, and we also find that the result is not quite independent of the choice of cutoff. If $f$ is a smooth function on $M$ then multiplication by $f$ is a linear transformation of $\Gamma \omega SJ \Phi$ and therefore induces a homomorphism of $S \Gamma \omega SJ \Phi$, denoted by $A \rightarrow A^f$. If $A = e^{iL}$ is group-like, then $A^f = e^{iLf}$. If $f$ has compact support then so does $A^f$ so that $A^f \omega$ is defined. We try to extend the definition of $A^f \omega$ to more general functions $f$ in the hope that we can take $f$ to be close to 1. Suppose that $f$ and $g$ are compactly supported smooth functions on $M$ and $n$ is even. If $f = g$ on the past of $A_1 \ldots A_n$ then (modulo the locality ideal) $$\begin{array}{lll} e^{- iL_F} A^f \omega (A_n \otimes \ldots \otimes A_1) & = & e^{- iL_F} A^g \omega (A_n \otimes \ldots \otimes A_1) \end{array}$$ If $f = g$ on the future of $A_1 \ldots A_n$ then $$\begin{array}{lll} e^{- iL_F} A^f \omega (A_n \otimes \ldots \otimes A_1) & = & e^{- iL_F} A^g \omega (A^{g - f} \otimes 1 \otimes A_n \otimes \ldots \otimes A_1 \otimes 1 \otimes A^{g - f}) \end{array}$$ We work modulo the locality ideal. The first equality follows from $$A^{- f} A_n \otimes \ldots \otimes A^{- f} A_1 = A^{- g} A_n \otimes \ldots \otimes A^{- g} A_1$$ which in turn follows from theorem \[localideal\] by repeatedly inserting $A^{f - g} \otimes A^{f - g}$ (using the fact that $n$ is even).
The second equality follows in the same way from $$\begin{aligned} & & A^{- f} \otimes A^{- f} \otimes A^{- f} A_n \otimes \ldots \otimes A^{- f} A_1 \otimes A^{- f} \otimes A^{- f}\\ & = & A^{- f} \otimes A^{- g} \otimes A^{- g} A_n \otimes \ldots \otimes A^{- g} A_1 \otimes A^{- g} \otimes A^{- f} \end{aligned}$$ This lemma shows that the restriction of $A^f \omega$ to arguments with support in some fixed compact subset of $M$ is almost independent of the choice of $f$ provided that $f$ is 1 on the convex hull of the argument: different choices of $f$ are related by a locally inner automorphism of $T_0 S \Gamma_c \omega SJ \Phi$, given by conjugation by elements of the form $1 \otimes A^h$. If the spacetime is globally hyperbolic in the sense that the convex hull of a compact set is contained in a compact set, then we can always find a suitable $f$ that is 1 on the convex hull $X$ of the argument, so we can construct the interacting quantum field theory. The result does not depend on the choice of cutoff $f$ on the future of $X$, but does depend slightly on the choice of cutoff in the past of $X$. The choice of cutoff in the past corresponds to choices of the vacuum: roughly speaking, we turn off the interaction in the distant past, which gives different vacuums. More precisely, if we have two different cutoffs $f$ and $g$ then their vacuums, which are the images of $e^{i (L_F + fL_I)}$ and $e^{i (L_F + gL_I)}$, will differ by a factor of $e^{i (f - g) L_I}$. This does not change the observable physics, because all these choices of cutoffs give isomorphic quantum field theories. However it does cause difficulties in constructing a Lorentz invariant theory, because the choice of cutoff in the past is not Lorentz invariant, so the vacuums are also not Lorentz invariant, or in other words Lorentz invariance may be spontaneously broken. Presumably in theories with a mass gap one can take the limit as the cutoff in the past tends to time $- \infty$ and get a Lorentz invariant vacuum, but in theories with massless particles such as QED there is an obstruction to constructing a Lorentz invariant vacuum: Lorentz invariance might be spontaneously broken by infrared divergences. This is a well known problem, which is not worth worrying about too much, because the physical universe is not globally Lorentz invariant. The time-ordered operator $T (A)$ of an element $A \in S \Gamma_c \omega SJ \Phi$ is defined to be $1 \otimes A$. This has the property that $$T (A_n \ldots A_1) = 1 \otimes A_n \ldots A_1 = 1 \otimes A_n \otimes \ldots \otimes 1 \otimes A_1 = T (A_n) \ldots T (A_1)$$ whenever the composite fields $A_i \in \Gamma_c \omega SJ \Phi$ are in order of increasing time of their supports. This formula is sometimes used as a “definition” of the time-ordered product $T (A_n \ldots A_1)$, though this does not define it when some of the factors have overlapping supports, and in general the time-ordered product depends on the choice of Feynman measure $\omega$. The scattering matrix $S$ of the quantum field theory is $S = T (e^{iL_I}) = 1 \otimes e^{iL_I}$; this is essentially the LSZ reduction formula of Lehmann, Symanzik, and Zimmermann [[@Lehmann]]{}. We now show that if we change the Feynman measure, then we still get an isomorphic quantum field theory provided we make a suitable change in the Lagrangian.
If we change $\omega$ to a different Feynman measure for the same cut local propagator, these will differ by a unique renormalization $\rho$; in other words the other Feynman measure will be $\rho \omega$. The quantum field theory $e^{- iL} \omega$ changes under this renormalization of $\omega$ by $$\begin{aligned} e^{- iL} \omega (A_1 \otimes \ldots) & = & \omega (e^{iL} A_1 \otimes \ldots)\\ & = & \rho (\omega) (\rho (e^{iL} A_1) \otimes \ldots)\\ & = & \rho (e^{- iL}) \rho (\omega) (\rho (e^{- iL}) \rho (e^{iL} A_1) \otimes \ldots)\end{aligned}$$ so the quantum field theory stays the same under renormalization by $\rho$ if we transform the Lagrangian by $$iL \rightarrow \log (\rho (\exp (iL))),$$ which is a nonlinear transformation because renormalizations need not commute with products or exponentiation, and change the operators $A_n$ by $$A_n \rightarrow \rho (e^{- iL}) \rho (e^{iL} A_n) .$$ If $A_n$ is a simple operator and $\rho$ satisfies the condition of example \[simple operator\] then $\rho (e^{iL} A_n) = \rho (e^{iL}) \rho (A_n) = \rho (e^{iL}) A_n$, so in this special case $A_n$ is unchanged, or in other words simple operators are not renormalized. The behavior of composite operators under renormalization can be quite complicated when expanded out in terms of fields. The usual Wightman distributions used to construct a quantum field theory use only simple operators, so the only effect of renormalization on Wightman distributions comes from the nonlinear transformation of the Lagrangian. This nonlinear transformation of Lagrangians is the usual action of renormalizations on Lagrangians used in physics texts to convert an infinite “bare” Lagrangian $L$ to a finite physical one $L_0$; the bare and physical Lagrangians are related by $iL_0 = \log (\rho (\exp (iL)))$, where $\rho$ is an infinite renormalization taking an infinite Feynman measure, such as the one given by dimensional regularization, to a finite one. The orbit of a Lagrangian under this nonlinear action of the ultraviolet group is in general infinite dimensional. It can sometimes be cut down to a finite dimensional space as follows. As in example \[dyson\], we cut down to the group of renormalizations of mass dimension at most 0, which acts on the space of Lagrangians whose coupling constants all have mass dimension at least 0. If we also add the condition that the Lagrangian is Lorentz invariant, then we sometimes get finite dimensional spaces of Lagrangians. The point is that the classical fields themselves tend to have positive mass dimension, so if the coupling constants all have non-negative mass dimension then the fields appearing in any term of the Lagrangian have total mass dimension at most $d$ (cancelling out the $- d$ coming from the density) which severely limits the possibilities. At one time the Lagrangians with all coupling constants of non-negative mass dimension were called renormalizable Lagrangians, though now all Lagrangians are regarded as renormalizable in a more general sense where one allows an infinite number of terms in the Lagrangian. \[anomalies\]Gauge invariance and anomalies =========================================== If a Lagrangian is invariant under some group, this does not imply that the quantum field theories we construct from it are also invariant, because as Fujikawa [[@Fujikawa]]{} pointed out we also need to choose a Feynman measure and there may not be an invariant way of doing this.
The obstructions to finding an invariant quantum field theory lie inside certain cohomology groups and are called anomalies. We show that if these anomalies vanish then we can construct invariant quantum field theories. Suppose that a group $G$ acts on $SJ \Phi$ and preserves the set of Feynman measures with given cut local propagator, and suppose that we have chosen one such Feynman measure $\omega$. In practice we often start with an action of a Lie algebra or superalgebra, such as that generated by the BRST operator, which can be turned into a group action in the usual way by working over a ring with nilpotent elements. If $g \in G$ then $g \omega$ is another Feynman measure with the same propagator, so $$\omega = \rho_g g \omega$$ for a unique renormalization $\rho_g$. This defines a non-abelian 1-cocycle: $\rho_{gh} = \rho_g g (\rho_h)$, where $g (\rho_h) = g \rho_h g^{- 1}$. Since $\omega$ is invariant under $\rho_g g$, we find that $$\omega (e^{iL} A_1) = \omega (\rho_g g (e^{iL} A_1)) = \omega (e^{iL} e^{- iL} \rho_g g (e^{iL} A_1))$$ so that $e^{- iL} \omega$ is invariant under the transformation taking arguments $A_1$ to $e^{- iL} \rho_g g (e^{iL} A_1)$. This transformation fixes $1$ if $e^{iL}$ is fixed by $\rho_g g$. If in addition $\rho_g g (e^{iL} A_1) = \rho_g g (e^{iL}) \rho_g g (A_1)$ (which is not automatic as $\rho_g$ need not preserve products) then $A_1$ is taken to $\rho_g g (A_1)$ by this transformation. This shows that we really want a Lagrangian $L$ such that $e^{iL}$ is invariant under the modified action $e^{iL} \rightarrow \rho_g g (e^{iL})$. This is not the same as asking for $\rho_g g (iL) = iL$ because $\rho_g$ need not preserve products (although $g$ usually does). In practice we usually have a Lagrangian $L$ with $L$ (and $e^{iL}$) invariant under $G$, and the problem is whether it can be modified to $L'$ so that $e^{iL'}$ is invariant under the twisted action. The powers of $L$ span a coalgebra all of whose elements are $G$-invariant. Conversely, given a coalgebra $C$ all of whose elements are invariant under some group action, there is a canonical $G$-invariant group-like element associated to this coalgebra with coefficients in the dual algebra of $C$. So a fundamental question is whether the maximal coalgebra in the space of $G$-invariant classical actions is isomorphic to the maximal coalgebra in the space of actions invariant under the twisted action of $G$. The simplest case is when one can find a $G$-invariant Feynman measure, in which case the cocycle is trivial and the twisted action of $G$ is the same as the untwisted action. In terms of the cocycle above, $\rho \omega$ is invariant for some renormalization $\rho$ if and only if $\rho_g = \rho^{- 1} g (\rho)$ for all $g$ (where $g (\rho) = g \rho g^{- 1}$), in other words there is an invariant measure $\omega$ if and only if the cocycle is a coboundary. This case happens, for example, when spacetime $M$ is Minkowski space and $G$ is the Lorentz or Poincare group (or one of their double covers). Dimensional regularization in this case is automatically $G$-invariant, and so gives a $G$-invariant Feynman measure. In the case of BRST operators, there need not be any $G$-invariant Feynman measure. In this case the following theorem shows that one can find suitable coalgebras provided that certain obstructions, called anomalies, all vanish.
The renormalizations $\rho_g$ need not preserve products in $S \Gamma \omega SJ \Phi$, but do preserve the coproduct and also fix all elements of $\Gamma \omega SJ \Phi$ if they are normalized as in example \[simple operator\]. So we have an action of $G$ on the space $V = \Gamma \omega SJ \Phi$, which lifts to two different actions of the coalgebra ${\ensuremath{\operatorname{SV}}}$, the first $\sigma_1 (g)$ preserving the product, and the second $\sigma_2 (g) = \rho_g \sigma_1 (g)$ given by twisting the first by the cocycle $\rho_g$. Suppose that $V$ is a real vector space acted on by a group $G$, and there are two extensions $\sigma_1$, $\sigma_2$ of this action to the coalgebra $SV$. If the cohomology group $H^1 (G, V)$ vanishes then the maximal coalgebras in $SV$ whose elements are fixed by these 2 actions of $G$ are isomorphic under an isomorphism fixing the elements of $V$. We construct an isomorphism $f$ from the maximal coalgebra in the space of $\sigma_1$-invariant elements to the maximal coalgebra in the space of $\sigma_2$-invariant elements by induction on the degree of elements. We start by taking $f$ to be the identity map on elements of degree at most 1. We can assume that the 2 actions coincide on elements of degree less than $n$, and have to find an isomorphism $f$ making them the same on elements of degree $n$, which we will do by adding elements of $V$ to a basis of the elements of degree $n$. Suppose that $a$ is an element of degree $n > 1$ contained in a coalgebra of $G$-invariant elements. We want to find $v \in V$ so that $$\sigma_1 (g) (a + v) = \sigma_2 (g) (a) + v$$ or equivalently $$\sigma_1 (g) (v) - v = \sigma_2 (g) (a) - a.$$ The right hand side, as a function of $g$, is a 1-coboundary of an element $a \in SV$, and therefore a 1-cocycle. We show that the right hand side is in $V$. We have $$\Delta (a) = a \otimes 1 + 1 \otimes a + \sum_i b_i \otimes c_i$$ for some elements $b_i$ and $c_i$ of degrees less than $n$ invariant under $G$ (for both actions, which coincide on elements of degree less than $n$). Applying $\sigma_2$ we find that $\Delta (\sigma_2 (g) a) = \sigma_2 (g) a \otimes 1 + 1 \otimes \sigma_2 (g) a + \sum_i b_i \otimes c_i$, so subtracting these two identities shows that $\sigma_2 (g) (a) - a$ is a primitive element of ${\ensuremath{\operatorname{SV}}}$ and therefore in $V$. Therefore the right hand side, as a function of $g$, is a 1-cocycle with values in $V$. The solvability of the condition for $v$ says exactly that this expression is the coboundary of some element $v \in V$. In other words the obstruction to finding a suitable $v$ is exactly an element of the cohomology group $H^1 (G, V)$, so as we assume this group vanishes we can always solve for $v$. We take $V$ to be $\Gamma \omega SJ \Phi$, and $G$ to be some group acting on $V$. Then the spaces of classical and quantum actions are coalgebras acted on by $G$, whose primitive elements can be identified with $V$. If $H^1 (G, \Gamma \omega SJ \Phi)$ vanishes, then the maximal $G$-invariant coalgebra in the coalgebra of classical actions is isomorphic to the maximal $G$-invariant coalgebra in the coalgebra of quantum actions. So if $L$ is a $G$-invariant classical Lagrangian, then $e^L$ is a $G$-invariant classical action, so gives a $G$-invariant quantum action. One cannot get a $G$-invariant quantum action by exponentiating a $G$-invariant quantum Lagrangian because the space of quantum actions does not in general have a $G$-invariant product.
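The role of $H^1 (G, V)$ here can be made concrete in a drastically simplified finite-dimensional setting. The following sketch (an illustration only, with $G$ a finite cyclic group acting on a real vector space through a matrix $A$ with $A^n = 1$; it does not capture the infinite-dimensional situation of the theorem) computes $\dim H^1$ from the standard identification $H^1 (\mathbb{Z}/n, V) \cong \ker (1 + A + \cdots + A^{n - 1}) / \operatorname{im} (A - 1)$:

```python
import numpy as np

def dim_H1_cyclic(A, n, tol=1e-9):
    """dim H^1(Z/n, V) for a generator acting on V by the matrix A (assumes A^n = I).

    Uses H^1 = ker(N) / im(A - I), where N = I + A + ... + A^{n-1} is the norm map.
    """
    A = np.asarray(A, dtype=float)
    I = np.eye(A.shape[0])
    N = sum(np.linalg.matrix_power(A, k) for k in range(n))
    dim_ker_N = A.shape[0] - np.linalg.matrix_rank(N, tol=tol)
    return dim_ker_N - np.linalg.matrix_rank(A - I, tol=tol)

# Z/2 acting on R by the sign representation: the obstruction group vanishes,
# as it must for any finite group acting on a real vector space.
print(dim_H1_cyclic(np.array([[-1.0]]), 2))   # prints 0
```

For finite groups and real coefficients $H^1$ always vanishes; the obstructions of interest in this paper arise for infinite-dimensional actions such as those generated by BRST-type operators, where the vanishing of $H^1 (G, \Gamma \omega SJ \Phi)$ is a genuine condition.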
Sometimes the group $G$ only fixes classical Lagrangians up to boundary terms, in other words the Lagrangian is a $G$-invariant element of $\Gamma \omega SJ \Phi / D$. In this case one replaces the cohomology group $H^1 (G, \Gamma \omega SJ \Phi)$ by $H^1 (G, \Gamma \omega SJ \Phi / D)$. The element $e^{iL_F}$ lies in the completion of $S \Gamma \omega SJ \Phi$ and is fixed by the zeroth order part of the BRST operator. So the BRST operator acts on $e^{iL_F} S \Gamma \omega SJ \Phi$. The groups $H^1 (G, \Gamma \omega SJ \Phi)$ and $H^1 (G, \Gamma \omega SJ \Phi / D)$ (and their variations for Poincare invariant Lagrangians) for the BRST operators of gauge theories have been calculated in many cases, at least for the case of Minkowski space (see for example Barnich, Brandt, and Henneaux [[@Barnich]]{}) and are sometimes zero, in which case the corresponding invariant quantum field theories exist. [10]{} Eiichi Abe. [[*[Hopf algebras]{}*]{}]{}, volume 74 of [[*[Cambridge Tracts in Mathematics]{}*]{}]{}. Cambridge University Press, Cambridge, 1980. Glenn Barnich, Friedemann Brandt, and Marc Henneaux. Local BRST cohomology in gauge theories. [[*[Phys. Rep.]{}*]{}]{}, 338(5):439–569, 2000. I. N. Bernstein. Analytic continuation of generalized functions with respect to a parameter. [[*[Funkcional. Anal. i Priložen.]{}*]{}]{}, 6(4):26–40, 1972. H.-J. Borchers. On structure of the algebra of field operators. [[*[Nuovo Cimento (10)]{}*]{}]{}, 24:214–236, 1962. Alain Connes and Dirk Kreimer. Renormalization in quantum field theory and the Riemann-Hilbert problem. I. The Hopf algebra structure of graphs and the main theorem. [[*[Comm. Math. Phys.]{}*]{}]{}, 210(1):249–273, 2000. Pavel Etingof. Note on dimensional regularization. In [[*[Quantum fields and strings: a course for mathematicians, Vol. 1 (Princeton, NJ, 1996/1997)]{}*]{}]{}, pages 597–607. Amer. Math. Soc., Providence, RI, 1999. Kazuo Fujikawa. Path-integral measure for gauge-invariant fermion theories. [[*[Phys. Rev. Lett.]{}*]{}]{}, 42(18):1195–1198, Apr 1979. A. Grothendieck. Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas IV. [[*[Inst. Hautes Études Sci. Publ. Math.]{}*]{}]{}, (32):361, 1967. G. ’t Hooft and M. Veltman. Diagrammar. In [[*[Under the spell of the gauge principle]{}*]{}]{}, pages 28–173. World Scientific, 1994. Lars Hormander. [[*[The analysis of linear partial differential operators. I]{}*]{}]{}. Classics in Mathematics. Springer-Verlag, Berlin, 2003. Dirk Kreimer. On the Hopf algebra structure of perturbative quantum field theories. [[*[Adv. Theor. Math. Phys.]{}*]{}]{}, 2(2):303–334, 1998. H. Lehmann, K. Symanzik, and W. Zimmermann. On the formulation of quantized field theories. [[*[Nuovo Cimento]{}*]{}]{}, 1:1425, 1955. R. F. Streater and A. S. Wightman. [[*[PCT, spin and statistics, and all that]{}*]{}]{}. Princeton Landmarks in Physics. Princeton University Press, Princeton, NJ, 2000. Corrected third printing of the 1978 edition. [^1]: This research was supported by a Miller professorship and an NSF grant. I thank the referees for suggesting many improvements.
--- author: - | [^1]\ Instituto de Astronomía, Universidad Nacional Aut[ó]{}noma de M[é]{}xico\ E-mail: - | José Ignacio Cabrera\ Facultad de Ciencias, Universidad Nacional Aut[ó]{}noma de M[é]{}xico\ E-mail: - | Erika Ben[í]{}tez\ Instituto de Astronomía, Universidad Nacional Aut[ó]{}noma de M[é]{}xico\ E-mail: - | David Hiriart\ Instituto de Astronomía, Universidad Nacional Aut[ó]{}noma de M[é]{}xico\ E-mail: title: 'Flaring activity of Mrk 421 in 2012 and 2013: orphan flare and multiwavelength analysis' --- Introduction ============ At a distance of 134.1 Mpc, the BL Lac object Mrk 421 (z=0.03) [@2005ApJ...635..173S] is one of the closest sources in the extragalactic sky. In the MeV - TeV energy range, simultaneous observations have been carried out with the Fermi-LAT satellite [^2]. This blazar was also studied using different telescopes based on Imaging Atmospheric Cherenkov Techniques (e.g. VERITAS, MAGIC, H.E.S.S.) and air shower arrays (e.g. ARGO-YBJ, HAWC). In X-rays, this object has been observed with the Swift-XRT and Swift-BAT instruments for almost 10 years [^3]. In the optical R-band, this source has been monitored since 2008 at San Pedro M[á]{}rtir Observatory as part of the GASP-WEBT program [^4]. On the other hand, the VERITAS observatory and the Whipple 10m Cherenkov telescope observed the flaring activity in TeV $\gamma$-rays and studied the possible correlations with X-rays, optical and radio wavelengths between 2006 January and 2008 June [@2011ApJ...738...25A]. Acciari et al. (2011) reported no significant flux correlations between the TeV $\gamma$-rays and the optical/radio bands. However, an enhanced active phase in the X-ray and TeV $\gamma$-ray bands was observed. During this active phase, they found strong X-ray activity with no increased TeV emission. Later, TeV $\gamma$-ray activity lasting two days was detected without activity in X-rays. Therefore, the latter was associated with two “orphan” flares. At the end of this active state, the source showed a significant correlation between the X-ray and TeV bands. In general, no correlation of the TeV emission with the optical and/or radio fluxes has been found [@2011ApJ...738...25A]. However, correlations between the X-ray and TeV bands have been reported in [@1995ApJ..449..99; @2008ApJ..677..906; @2009ApJ..695..596]. Although most of these correlations are interpreted through the standard one-zone SSC model (synchrotron self-Compton; [@2008ApJ...686..181F]), other correlations suggest serious deviations from this leptonic model. The Spectral Energy Distribution (SED) of Mrk 421 presents a double-humped shape: a low energy hump at energies $\simeq$ 1 keV and a second hump at hundreds of GeV. Abdo et al. (2011) found that both leptonic and hadronic scenarios are able to fit the Mrk 421 SED reasonably well, implying comparable jet powers but very different characteristics for the blazar emitting region. In the leptonic scenario, a one-zone SSC with three accelerated electron populations (through diffusive relativistic shocks with a randomly oriented magnetic field) has been used [@2011ApJ...736..131A]. In the hadronic scenario [@2014arXiv1411.7354F], the peak at low energies is explained by electron synchrotron radiation whereas the high-energy peak is explained by invoking the Synchrotron Proton Blazar (SPB) model [@2001APh....15..121M; @2003APh....18..593M]. In this work, we show the multiwavelength observations carried out on Mrk 421 during the active states displayed in the GeV energy range in 2012 and 2013.
A brief discussion related to the theoretical interpretations of both flares will be given. Multiwavelength Light Curves ============================ Multiwavelength light curves of the 2012 and 2013 flares of Mrk 421 are shown in Figure 1. The GeV $\gamma$-ray data shown corresponds to the 200 MeV to 300 GeV band. Details on the reduction procedure applied to this data set can be found in [@2013MNRAS.434L...6C]. The X-ray data were obtained with both the Swift-BAT (15 - 50 keV) and Swift-XRT (0.2 - 10 keV) instruments. The optical R-band observations were carried out with the 0.84 m f/15 Ritchey-Chretien telescope and the instrument POLIMA [^5]. It is worth noting that the optical R-band magnitudes are not corrected for the contribution of the host galaxy of Mrk 421. Additionally, and only for comparison, we have included a few optical R-band data points collected by the American Association of Variable Star Observers (AAVSO) [^6]. 2012 Flare ---------- Mrk 421 was detected in a very active state in the GeV energy range on July 16, with a daily flux of (1.4$\pm$0.2) $\times10^{-6}$ ph cm$^{-2}$ s$^{-1}$, see ref. [@2012ATel.4261....1D]. The source continued to be detected between July 17 and 21, with a daily flux between (0.4$\pm$0.1)$\times~10^{-6}$ ph cm$^{-2}$ s$^{-1}$ and (0.9$\pm$0.2) $\times10^{-6}$ ph cm$^{-2}$ s$^{-1}$. In fig. 1 we highlight the active state using a red vertical bar. It is clear that the $\gamma$-ray flare was detected without activity in the hard X-ray band. Unfortunately, data were collected neither in the soft X-rays nor in the optical R-band. 2013 Flare ---------- Fermi-LAT reported high activity of Mrk 421 from 2013 April 9 to 12, with a daily flux between (0.4$\pm$0.1)$\times~10^{-6}$ ph cm$^{-2}$ s$^{-1}$ and (0.8$\pm$0.2)$\times~10^{-6}$ ph cm$^{-2}$ s$^{-1}$, see ref. [@2013ATel.4977....1P]. In fig. 1, the states of high activity are indicated by two vertical color bands, purple and green. The purple one marks the high activity observed in GeV $\gamma$-rays, X-rays (XRT and BAT) and the bright optical R-band point (April 9, R = 11.74 $\pm$ 0.04). The green vertical band marks the second bright optical R-band point (May 12, R = 11.62 $\pm$ 0.04). The second bright optical point seems to be anti-correlated with the higher energy bands. Discussion ========== From the multiwavelength light curves it is clear that the $\gamma$-ray flare observed in 2012 was detected without any strong activity in the hard X-rays. On 2012 July 16, a TeV $\gamma$-ray flare without any increased activity in other wavelengths was reported [@2012ATel.4272....1B]. The so-called “orphan” flares have been previously observed in Mrk 421 [@2005ApJ...630..130B; @2011ApJ...738...25A], and also in the blazar 1ES 1959+650 [@2003ApJ...583L...9H; @2004ApJ...601..151K]. In general, most of the flaring activity in this source occurs quasi-simultaneously in the $\gamma$-ray and X-ray bands. Therefore, this atypical flaring event observed in the TeV/GeV $\gamma$-rays along with the absence of activity in the X-rays is very difficult to reconcile with the SSC model. Orphan flares have usually been explained as due to neutral pion decays from proton-photon interactions [@2005ApJ...621..176B; @2013PhRvD..87j3015S; @2015arXiv150104165F]. It is worth mentioning that a radio flare with a delay of $\sim$ 60 days was detected by the Owens Valley Radio Observatory (OVRO) 40-m Telescope [@2012ATel.4451....1H; @2015arXiv150107407H].\ Based on the multiwavelength light curves shown in fig.
1, and the TeV and X-ray emission reported by , it is clear that Mrk 421 flared in TeV/GeV $\gamma$-rays, in X-rays (BAT and XRT) and in the optical R-band in 2013 April 9 - 12. The results obtained from the discrete correlation function (DCF) calculated using all the light curves in this work are presented in Figure 2 (a schematic implementation of the DCF is sketched after the reference list below). The left panel shows the correlation between GeV $\gamma$-rays and hard X-rays, and the right panel shows the correlation between GeV $\gamma$-rays and the optical R-band. In both panels the DCF shows that there are no lags between the GeV $\gamma$-rays and the hard X-rays, nor with the optical R-band. Therefore, the multiwavelength emission seems to take place simultaneously in all bands, which favors a one-zone SSC model. In the framework of this SSC model, the electrons within the emitting region are moving at ultra-relativistic velocities in a collimated jet. The Fermi-accelerated electrons injected into the emitting region are confined by a magnetic field. Then, photons are radiated via synchrotron emission and up-scattered to higher energies. The low energy emission, from radio to X-rays, is produced by synchrotron radiation. The high energy emission, the (MeV - TeV) $\gamma$-rays, is due to Compton scattering. This leptonic model depends basically on the bulk Lorentz factor, the size of the emitting region, the electron number density and the strength of the magnetic field. It is possible to find a set of parameters that can describe the states of low or high activity [@2011ApJ...738...25A].\ It is worth noting that the maximum brightness in the optical R-band observed on May 12 is anti-correlated with the other bands. This result poses a challenge for the theoretical models proposed for this blazar. In a forthcoming paper, we will present a more detailed analysis of these active states. ![image](Lightcurve_mrk421_V2.pdf){width="95.00000%"} ![image](Cor_gamVsX_gamVsOp.pdf){width="\textwidth"} [99]{} B. Sbarufatti, A. Treves and R. Falomo, *Imaging Redshifts of BL Lacertae Objects*, *ApJ* [**635**]{} (2005) 173 M. Villata et al., *Multifrequency monitoring of the blazar 0716+714 during the GASP-WEBT-AGILE campaign of 2007*, *A&A* [**481**]{} (2008) L79 V. A. Acciari et al., *TeV and Multi-wavelength observations of Mrk 421 in 2006 - 2008*, *ApJ* [**738**]{} (2011) 25 D. Macomb, C. Akerlof and H. D. Aller, *Multiwavelength Observations of Markarian 421 During a TeV/X-Ray Flare*, *ApJ* [**449**]{} (1995) L99 J. Kildea et al., *Multiwavelength observations of Markarian 421 in 2001 March: An unprecedented view on the X-ray/TeV correlated variability*, *ApJ* [**677**]{} (2008) 906 D. Horan, V. A. Acciari and S. M. Bradbury, *Multiwavelength Observations of Markarian 421 in 2005-2006*, *ApJ* [**695**]{} (2009) 596 J. D. Finke, C. D. Dermer, and M. B[ö]{}ttcher, *Synchrotron Self-Compton Analysis of TeV X-Ray-Selected BL Lacertae Objects*, *ApJ* [**686**]{} (2008) 181 A. A. Abdo et al., *Fermi Large Area Telescope Observations of Markarian 421: The Missing Piece of its Spectral Energy Distribution*, *ApJ* [**736**]{} (2011) 131 N. Fraija and A. Marinelli, *TeV $\gamma$-ray fluxes from the long campaigns on Mrk421 as constraints on the emission of TeV-PeV Neutrinos and UHECRs*, *Astroparticle Physics* [**70**]{} (2015) 54 A. M[ü]{}cke and R. J. Protheroe, *A proton synchrotron blazar model for flaring in Markarian 501*, *Astroparticle Physics* [**15**]{} (2001) 121 A.
M[ü]{}cke et al., *BL Lac objects in the synchrotron proton blazar model*, *Astroparticle Physics* [**18**]{} (2003) 593 J. I. Cabrera et al., *A hydrodynamical model for the Fermi-LAT [$\gamma$]{}-ray light curve of blazar PKS 1510-089*, *MNRAS*, [**434**]{} (2013) L6 F. D’Ammando and M. Orienti, *Fermi LAT detection of a GeV flare from the BL Lac object Mrk 421*, *The Astronomer’s Telegram* [**4261**]{} (2012) 1 D. Paneque et al., *Fermi-LAT and Swift-XRT observe exceptionally high activity from the nearby TeV blazar Mrk421*, *The Astronomer’s Telegram* [**4977**]{} (2013) 1 B. Bartoli et al., *TeV flare from the blazar Mrk421 observed by ARGO-YBJ*, *The Astronomer’s Telegram*, [**4272**]{} (2012) 1 M. B[ł]{}a[ż]{}ejowski et al., *A Multiwavelength View of the TeV Blazar Markarian 421: Correlated Variability, Flaring, and Spectral Evolution*, *ApJ* [**630**]{} (2005) 130 J. Holder et al., *Detection of TeV Gamma Rays from the BL Lacertae Object 1ES 1959+650 with the Whipple 10 Meter Telescope*, *ApJ* [**583**]{} (2003) L9 H. Krawczynski et al., *Multiwavelength Observations of Strong Flares from the TeV Blazar 1ES 1959+650*, *ApJ* [**601**]{} (2004) 151 M. B[ö]{}ttcher, *A Hadronic Synchrotron Mirror Model for the “Orphan” TeV Flare in 1ES 1959+650*, *ApJ* [**621**]{} (2005) 176 S. Sahu, A. F. Oliveros and J. C. Sanabria, *Hadronic-origin orphan TeV flare from 1ES 1959+650*, *PRD* [**87**]{} (2013) 10 N. Fraija, *Could a plasma in quasi-thermal equilibrium be associated to the “orphan” TeV flares?*, *Astroparticle Physics* [**71**]{} (2015) 1 T. Hovatta et al., *A major 15 GHz radio flare in the blazar Mrk 421*, *The Astronomer’s Telegram* [**4451**]{} (2012) 1 T. Hovatta et al., *A combined radio and GeV gamma-ray view of the 2012 and 2013 flares of Mrk 421*, \[[hep-th/1501.07407]{}\] J. Cortina and J. Holder, *MAGIC and VERITAS detect an unprecedented flaring activity from Mrk 421 in very high energy gamma-rays*, *The Astronomer’s Telegram* [**4976**]{} (2013) 1 E. Pian et al., *An active state of the BL Lacertae object Markarian 421 detected by INTEGRAL in April 2013*, *A&A* [**570**]{} (2014) 77 [^1]: Luc Binette postdoctoral scholarship. [^2]: http://fermi.gsf.nasa.gov/ssc/data/ [^3]: http://swift.gsfc.nasa.gov/cgi-bin/sdc/ql? [^4]: http://www.oato.inaf.it/blazars/webt/ [^5]: A detailed description of our photopolarimetric monitoring program on TeV blazars can be found in http://www.astrossp.unam.mx/blazars [^6]: http://www.aavso.org/observing-campaigns
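As referenced in the discussion above, the following is a schematic version of the discrete correlation function used to compare the light curves (a minimal sketch following the standard Edelson & Krolik binned estimator, without the measurement-error correction; it is only an illustration, not the pipeline actually used for Figure 2):

```python
import numpy as np

def discrete_correlation_function(t_a, a, t_b, b, lag_bins):
    """Binned DCF between two unevenly sampled light curves a(t_a) and b(t_b).

    lag_bins are the bin edges (same time units as t_a, t_b); returns the bin
    centres and the mean correlation in each lag bin.
    """
    a_res = (a - a.mean()) / a.std()
    b_res = (b - b.mean()) / b.std()
    udcf = np.outer(a_res, b_res)              # unbinned correlations
    lags = t_b[None, :] - t_a[:, None]         # pairwise lags of b relative to a
    centres, dcf = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        mask = (lags >= lo) & (lags < hi)
        if mask.any():
            centres.append(0.5 * (lo + hi))
            dcf.append(udcf[mask].mean())
    return np.array(centres), np.array(dcf)

# Toy example: two noisy sinusoids, the second delayed by 5 days.
rng = np.random.default_rng(0)
t1 = np.sort(rng.uniform(0, 100, 80))
t2 = np.sort(rng.uniform(0, 100, 60))
f1 = np.sin(2 * np.pi * t1 / 30) + 0.1 * rng.standard_normal(t1.size)
f2 = np.sin(2 * np.pi * (t2 - 5) / 30) + 0.1 * rng.standard_normal(t2.size)
centres, dcf = discrete_correlation_function(t1, f1, t2, f2, np.arange(-15, 16, 2))
print(centres[np.argmax(dcf)])   # recovers a lag close to +5 days (up to bin width)
```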
--- abstract: 'We compute the cycle index sum of the symmetric group action on the homology of the configuration spaces of points in a Euclidean space with the condition that no $k$ of them are equal.' address: | Department of Mathematics\ Kansas State University\ Manhattan, KS 66506, USA author: - Keely Grossnickle and Victor Turchin bibliography: - 'CycleIndexSumBib.bib' nocite: '[@*]' title: 'Cycle Index Sum for Non-$k$-Equal Configurations' --- Introduction {#section1} ============ Let $\mathcal{M}_{d}^{(k)}(n)$ be the configuration space of $n$ labeled points in $\mathbb{R}^{d}$ with the *non-$k$-equal condition*: no $k$ points coincide. For example, $\mathcal{M}_{d}^{(2)}(n)$ is the usual configuration space of $n$ distinct points in $\mathbb{R}^{d}$. Björner and Welker in [@B_W] first computed the homology of $\mathcal{M}_{d}^{(k)}(n)$ for $k \geq 3$. Sundaram and Wachs in [@S_W] later computed the symmetric group action on the homology of the intersection lattice corresponding to $\mathcal{M}_{d}^{(k)}(n)$; their computations imply the following isomorphism of symmetric sequences: $$H_{\ast}\mathcal{M}_{d}^{(k)} \simeq Com \circ (\mathbb{1} \oplus (\mathcal{L}ie \circ \mathcal{H}_{1}^{(k)}) \{d-1\} ). \tag{1.1} \label{eq:iso}$$ where $\circ$ is the graded composition product for symmetric sequences [@fresse Section 2.2.2].[^1] Recall also that $H_{\ast}\mathcal{M}_{d}^{(k)}(n)$ is torsion free. The isomorphism $\eqref{eq:iso}$ holds integrally for $d \geq 2$, $k\geq 3$ and rationally for $d\geq 1$, $k \geq 2$. $Com$ and $\mathcal{L}ie$ are the underlying symmetric sequences of the commutative and Lie operads and $\mathcal{H}_{1}^{(k)}$ is the symmetric sequence of hook representations that we describe in section 3. The notation $\{d-1\}$ is the operadic degree $(d-1)$ suspension of symmetric sequences. The symmetric sequence $\mathbb{1}$ is the unit with respect to the composition product. It is a one dimensional space concentrated in arity 1. The cycle index sum of the symmetric group action on $H_{\ast}\mathcal{M}_{d}^{(2)}$, the usual configuration space, was computed in [@lehrer; @A_T] to be: $$Z_{H_{\ast}\mathcal{M}_{d}^{(2)}} = \prod_{m=1}^{\infty}\left(1+(-1)^{d}(-q)^{m(d-1)}p_{m}\right)^{(-1)^{d}E_{m}\left(\frac{1}{(-q)^{d-1}}\right)} . \tag{1.2} \label{k2 iso}$$ From $\eqref{eq:iso}$, in [@D_T] the exponential generating function of Poincaré polynomials for the sequence $H_{\ast}\mathcal{M}_{d}^{(k)}$ is computed to be: $$F_{H_{\ast}\mathcal{M}_{d}^{(k)}}(x)=\sum_{n=0}^{\infty}P_{H_{\ast}\mathcal{M}_{d}^{(k)}(n)}(q)\frac{x^{n}}{n!}=e^{x}\left(1-(-q)^{k-2}+(-q)^{k-2}\left(\sum_{j=0}^{k-1}\frac{(-q^{d-1}x)^{j}}{j!}\right)e^{q^{d-1}x} \right)^{-\frac{1}{q^{d-1}}} .\tag{1.3} \label{dim iso}$$ The main result of this paper describes the cycle index sum of the symmetric sequence $H_{\ast}\mathcal{M}_{d}^{(k)}$ obtained from the isomorphism $\eqref{eq:iso}$. For $k \geq 2$, $d\geq 1$, $$\begin{gathered} \label{equ:bigthm} \tag{1.4} Z_{H_{\ast}\mathcal{M}_{d}^{(k)}}(q;p_{1}, p_{2}, p_{3},...)
= \\ =e^{(\sum_{l=1}^{\infty}\frac{p_l}{l})}\prod_{m=1}^{\infty}\Bigg(1-(-q)^{m(k-2)}+\\(-q)^{m(k-2)}\Big(e^{-\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{mj(d-1)}p_{mj}}{j}}\Big)_{\leq m(k-1)}\Big(e^{\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{mj(d-1)}p_{mj}}{j}}\Big)\Bigg)^{(-1)^{d}E_{m}\Big(\frac{1}{(-q)^{d-1}}\Big)} ,\end{gathered}$$ where $\leq m(k-1)$ denotes the truncation with respect to the cardinality degree ($|p_i|=i$) and $E_{m}(y)=\frac{1}{m}\sum_{i\mid m}\mu(i)y^{\frac{m}{i}}$, where $\mu(i)$ is the usual Möbius function. Most of the computations are straightforward. The main difficulty is computing the cycle index sum for $\mathcal{H}_{1}^{(k)}$, which is done in Section 3. It is easy to see that from \eqref{equ:bigthm} one can recover \eqref{k2 iso} by setting $k=2$. Similarly, \eqref{dim iso} can be recovered from \eqref{equ:bigthm} by setting $p_{1}=x$ and $p_{i}=0$ for $i\geq 2$. We also establish a refinement of Theorem 1. The homology groups of $H_{\ast}\mathcal{M}_{d}^{(k)}$ can be described as linear combinations of certain products of iterated brackets [@D_T]. These brackets are of two types: long or short. The number of long, respectively short, brackets gives two additional gradings on the space. The cycle index sum of $H_{\ast}\mathcal{M}_{d}^{(k)}$ can be adjusted with the use of two additional variables to account for these two additional gradings. See Theorem 2 in Section 5. The formula \eqref{k2 iso} was used in [@A_T; @ST_T; @T] to compute the generating functions for the Euler characteristics of the terms of the Hodge splitting in the rational homology of the spaces of higher dimensional long knots and string links. In the same way, the results of this paper can be used to compute the Euler characteristics of the Hodge splitting in the second term of the Goodwillie-Weiss or Vassiliev spectral sequences for spaces of long non-$k$-equal (string) immersions [@D_T]. The differential $d_{1}$ of the above spectral sequences preserves the number of long and short brackets used in the refinement, which is our motivation for Theorem 2. Acknowledgments {#acknowledgments .unnumbered} --------------- The authors are thankful to Frédéric Chapoton, Vladimir Dotsenko and Anton Khoroshkin for communication. Notation and Basic Facts about the Cycle Index Sum {#section 2} ================================================== In this paper we will use $q$ as the formal variable responsible for the homological degree. For $\sigma \in \Sigma_{n}$ we will denote the number of its cycles of length $j$ by $\ell_{j}(\sigma)$. Let $\rho:\Sigma_{n}\rightarrow GL(V)$ be a representation of the symmetric group $\Sigma_{n}$, where $V$ is a graded vector space, and let $(p_{1}, p_{2}, p_{3},...)$ be an infinite family of commuting variables. Then the cycle index sum of $\rho$, denoted $Z_{V}(q;p_{1}, p_{2}, p_{3},...)$, is defined by $$\label{def:cycle} Z_{V}(q;p_{1}, p_{2}, p_{3},...) = \frac{1}{|\Sigma_{n} |} \sum_{\sigma \in \Sigma_{n}}tr(\rho(\sigma))\prod_{j}p_{j}^{\ell_{j}(\sigma)},\tag{2.1}$$ where $tr(\rho(\sigma))$ is the graded trace, a polynomial in $q$ obtained as the generating function of the traces on the graded components. There is also an auxiliary *cardinality degree* given by the $p_{i}$'s, where each $p_{i}$ is said to have cardinality degree $i$. Below we recall some facts about the cycle index sum. Let $V$ be a $\Sigma_{k}$-module and $W$ be a $\Sigma_{n}$-module.
Then from [@bergeron; @B_T_T; @macdonald section 6.1; section 3.1, proposition 8, part c; 7.3, respectively], $$Z_{\mathrm{Ind}_{\Sigma_{k}\times\Sigma_{n}}^{\Sigma_{k+n}}(V \otimes W)} = Z_{V} \cdot Z_{W}. \tag{2.2} \label{cis:prod}$$ For a symmetric sequence $M(\bullet) = \{M(n), n \geq 0\}$, one defines its cycle index sum as $$Z_{M}(q;p_{1}, p_{2}, p_{3},...)=\sum_{n=0}^{\infty}Z_{M(n)}(q;p_{1}, p_{2}, p_{3},...) .\tag{2.3} \label{cis:sum}$$ For the proof of Theorem 1, we will need the formula for the graded plethysm: $$Z_{M \circ N} = Z_{M} \ast Z_{N} = Z_{M}(q;p_{i} \mapsto p_{i} \ast Z_{N}) ,\tag{2.4.1}\label{plethysm:1}$$ where $$p_{i}\ast Z_{N} = Z_{N}(q \mapsto (-1)^{i-1}q^{i}; p_{j}\mapsto p_{ij}). \tag{2.4.2}\label{plethysm:2}$$ The usual plethysm without the grading can be found in [@bergeron; @B_T_T; @macdonald equation 3.25, section 3.8; definition 3, section 1.4; equation 8.1-8.2, section 8, respectively]. For the graded case, it is done when $q=-1$ in [@G_K section 7.20]. Unfortunately, the graded version of this formula doesn’t seem to appear in the literature, though it is known to experts, [@C_D_K; @mathoverflow]. To prove our formula, we notice that the sign convention is correct and holds when $q=-1$ and the $q$-grading contribution is correct by the same argument as in [@D_K Section 3.5, definition 3]. To recall the operadic suspension $\mathcal{M}\{1\}$ of the symmetric sequence $\mathcal{M}$ is defined as $$\mathcal{M}\{1\}(n) = s^{n-1}\mathcal{M}(n)\otimes V_{(1^{n})},$$ where $s^{n-1}$ is the degree $(n-1)$ suspension and $V_{(1^{n})}$ is the sign representation. One can easily see that $$Z_{\mathcal{M}\{1\}} = \frac{1}{q}Z_{\mathcal{M}}(q;p_{i}\mapsto (-1)^{i-1}q^{i}p_{i}).$$ We will use the formula for the $\{d-1\}$ operadic suspension, which is an easy formula to obtain from the above: $$Z_{\mathcal{M}\{d-1\}}=(q)^{1-d}Z_{\mathcal{M}}(q;p_{i}\mapsto (-1)^{(i-1)(d-1)}q^{i(d-1)}p_{i}). \tag{2.5} \label{graded:susp}$$ Lastly, we will need the cycle index sums of $Com$ and $\mathcal{L}ie$. From [@D_K; @G_K], the cycle index sum for $Com$ is $$Z_{Com} = \exp\left(\sum_{i=1}^{\infty} \frac{p_{i}}{i}\right); \tag{2.6}\label{cis:com}$$ and from [@brandt; @D_K; @G_K] , the cycle index sum for $\mathcal{L}ie$ is $$Z_{\mathcal{L}ie} = \sum_{i=1}^{\infty}\frac{-\mu(i)\ln(1-p_{i})}{i}, \tag{2.7}\label{cis:Lie}$$ where, as before and throughout this paper, $\mu(i)$ is the usual Möbius function. We will also use the notation $V_{\lambda}$ to denote the irreducible $\Sigma_{n}$-representation corresponding to the partition $\lambda$, see [@fulton]. Cycle Index Sum for the Sequence of Hooks $\mathcal{H}_{1}^{(k)}$ {#section3} ================================================================= We define $\mathcal{H}_{1}^{(k)}(n)$ as a graded $\Sigma_{n}$ module, which is trivial if $n<k$ and $\mathcal{H}_{1}^{(k)}(n) = s^{k-2}V_{(n-k+1, 1^{k-1})}$ otherwise, where $V_{(n-k+1, 1^{k-1})}$ is the hook representation corresponding to the partition $\lambda = (n-k+1, 1^{k-1})$ and $s^{k-2}$ is the $(k-2)$-suspension. The space $\mathcal{H}_{1}^{(k)}(n)$ is some natural subspace of $H_{k-2}\mathcal{M}_{1}^{(k)}(n)$, which explains why $\mathcal{H}_{1}^{(k)}(n)$ lies in degree $k-2$, see [@D_T]. \[prop3.1\] For $k \geq 2$, $$Z_{\mathcal{H}_{1}^{(k)}}(q;p_{1}, p_{2}, p_{3}, ... 
)=(-q)^{k-2} - (-q)^{k-2}\left(\exp\left(-\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right)_{\leq k-1}\left(\exp\left(\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right), \tag{3.1}\label{hookprop}$$ where $\leq k-1$ is the truncation with respect to the cardinality degree. We will prove this proposition with the following well-known facts and lemmas. First, let $W_{n}=\mathbb{Q}[\underline{n}]$, where $\underline{n}=\{1,2,...,n\}$, be the canonical $n$-dimensional representation of $\Sigma_{n}$. Then $W_{n}$ can be decomposed in the following way: $W_{n}=V_{(n-1,1)}\oplus V_{(n)}$ where $V_{(n-1,1)}$ is the $(n-1)$ dimensional representation and $V_{(n)} = \mathbb{Q}$ is the one-dimensional trivial representation. \[lemma3.2\] For $n\geq k \geq 0$, $\wedge^{k}W_{n}= \mathrm{Ind}_{\Sigma_{k}\times \Sigma_{n-k}}^{\Sigma_{n}} V_{(1^{k})} \otimes V_{(n-k)}$. First recall that $V_{(1^{k})}$ is the sign representation of $ \Sigma_{k}$ and that $V_{(n-k)}$ is the trivial representation of $ \Sigma_{n-k} $. Also, note that $\wedge^{k} W$ and $\mathrm{Ind}_{\Sigma_{k}\times \Sigma_{n-k}}^{\Sigma_{n}} V_{(1^{k})} \otimes V_{(n-k)}$ have the same dimension, namely $\binom{n}{k}$. To start, let $e_{1}, e_{2},...,e_{n}$ be the usual basis of $W_{n}$. Now examine how $\Sigma_{n}$ acts on a vector $e_{i_{1}}\wedge e_{i_{2}}\wedge ... \wedge e_{i_{k}} \in W_{n}$. For $\sigma \in \Sigma_{n}$, $\sigma(e_{i_{1}}\wedge e_{i_{2}}\wedge ... \wedge e_{i_{k}}) = e_{\sigma(i_{1})}\wedge e_{\sigma(i_{2})}\wedge ... \wedge e_{\sigma(i_{k})}$. By definition, $\mathrm{Ind}_{\Sigma_{k}\times \Sigma_{n-k}}^{\Sigma_{n}} V_{(1^{k})} \otimes V_{(n-k)} = \mathbb{Q}[\Sigma_{n}] \otimes_{\mathbb{Q}[\Sigma_{k}\times \Sigma_{n-k}]} V_{(1^{k})} \otimes V_{(n-k)}$. Define $$I_{(k,n-k)}:\mathbb{Q}[\Sigma_{n}] \otimes_{\mathbb{Q}[\Sigma_{k}\times \Sigma_{n-k}]} V_{(1^{k})} \otimes V_{(n-k)} \rightarrow \wedge^{k} W_{n}$$ by $I_{(k,n-k)}(\sigma \otimes \mathbb{1}) \mapsto \sigma (e_{1}\wedge ... \wedge e_{k}) = e_{\sigma(1)}\wedge e_{\sigma(2)}\wedge...\wedge e_{\sigma(k)}$. We claim this is the desired isomorphism. First, we will show that it is well defined. Let $(\alpha, \beta) \in \Sigma_{k} \times \Sigma_{n-k} $. Then $I_{(k,n-k)}(\sigma \cdot (\alpha, \beta) \otimes \mathbb{1}) = e_{\sigma(\alpha(1))} \wedge e_{\sigma(\alpha(2))} \wedge ... \wedge e_{\sigma(\alpha(k))} = (-1)^{\mid \alpha \mid} e_{\sigma(1)} \wedge e_{\sigma(2)} \wedge .... \wedge e_{\sigma(k)} = (-1)^{\mid \alpha \mid} \sigma \otimes \mathbb{1}$. On the other hand, $I(\sigma \otimes (\alpha,\beta) \cdot \mathbb{1}) = \sigma \otimes (-1)^{\mid \alpha \mid} \mathbb{1} = (-1)^{\mid \alpha \mid} \sigma \otimes \mathbb{1}$. Therefore $I$ is well defined. As previously mentioned, these two spaces have the same dimension and by construction $I_{(k,n-k)}$ is surjective and therefore $I_{(k,n-k)}$ is bijective. \[lemma3.3\] For $n > k$, one has an isomorphism of $\Sigma_{n}$-modules: $V_{(n-k, 1^{k})}=\wedge^{k} V_{(n-1,1)}$. This lemma is a standard exercise in representation theory [@fulton Exercise 4.6]. \[cor3.4\] One has an isomorphism of $\Sigma_{n}$-modules: $\wedge^{k}W_{n} = \wedge^{k} V_{(n-1, 1)} \oplus \wedge^{k-1} V_{(n-1, 1)}$. $\wedge ^{k}W_{n} = \wedge^{k}(V_{(n-1,1)} \oplus V_{(n)}) = \wedge^{k}(V_{(n-1,1)}) \oplus \wedge^{k-1}(V_{(n-1,1)}) \otimes V_{(n)}$, where $V_{(n)}$ is just the trivial representation and thus we have our desired isomorphism. 
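As a quick numerical sanity check of Corollary 3.4 (a sketch only, assuming NumPy is available; it plays no role in the proofs, and the choices of $n$, $k$ and the permutation below are arbitrary), one can compare characters: the character of $\wedge^{k}W_{n}$ at $\sigma$ is the $k$-th elementary symmetric polynomial of the eigenvalues of the permutation matrix of $\sigma$, and the corollary predicts that it equals the sum of the $\wedge^{k}$ and $\wedge^{k-1}$ characters of $V_{(n-1,1)}$, computed from the action of $\sigma$ on the complement of the all-ones vector.

```python
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix P with P e_j = e_{sigma(j)}, i.e. the action on W_n."""
    n = len(sigma)
    P = np.zeros((n, n))
    P[sigma, np.arange(n)] = 1.0
    return P

def e_k(M, k):
    """k-th elementary symmetric polynomial of the eigenvalues of M,
    read off from the characteristic polynomial det(xI - M)."""
    return ((-1) ** k) * np.poly(M)[k].real

rng = np.random.default_rng(0)
n, k = 6, 3
P = perm_matrix(rng.permutation(n))
# orthonormal basis Q of the complement of the all-ones vector, so that
# Q^T P Q represents the action of sigma on V_{(n-1,1)}
full, _ = np.linalg.qr(np.column_stack([np.ones(n), np.eye(n)[:, :n - 1]]))
Q = full[:, 1:]
M = Q.T @ P @ Q
print(np.isclose(e_k(P, k), e_k(M, k) + e_k(M, k - 1)))  # True
```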
\[3.5\] One has an isomorphism of virtual $\Sigma_{n}$-modules: $\wedge^{k}V_{(n-1,1)} = \sum_{i=0}^{k}(-1)^{i}\wedge^{k-i}W_{n}$ $ \wedge^{k}V_{(n-1,1)} = \wedge^{k}W_{n} - \wedge^{k-1}V_{(n-1,1)}$ by Corollary 3.4. We apply the same corollary to $\wedge^{k-1}V_{(n-1,1)}$ again and we have $ \wedge^{k}V_{(n-1,1)} = \wedge^{k}W_{n} - \wedge^{k-1}V_{(n-1,1)} = \wedge^{k}W_{n} - \wedge^{k-1}W_{n} + \wedge^{k-2}V_{(n-1,1)}$. We can apply Corollary 3.4 iteratively to obtain the desired isomorphism. Now we are ready to prove Proposition 3.1. Let $$\mathcal{H}(n) = \begin{cases} 0, & n<k; \\ V_{(n-k+1,1^{k-1})}, &\text{otherwise.} \end{cases}$$ In order to prove Proposition 3.1, it is sufficient to show that $$Z_{\mathcal{H}}(p_{1}, p_{2}, p_{3},...) = (-1)^{k-2} - (-1)^{k-2}\left(\exp\left(-\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right)_{\leq k-1}\left(\exp\left(\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right). \tag{3.2}\label{hooknodeg}$$ For $n \geq k$, one has $$\begin{aligned} \mathcal{H}(n) &= V_{(n-k+1,1^{k-1})}&&\text{(by definition)}\\ &= \wedge^{k-1}V_{(n-1,1)}&&\text{(by Lemma \ref{lemma3.3})}\\ &= \sum_{i=0}^{k-1}(-1)^{i}\wedge^{k-1-i}W_{n}&&\text{(by Corollary \ref{3.5})}\\ &= \sum_{i=0}^{k-1}(-1)^{i}\mathrm{Ind}_{\Sigma_{k-1-i} \times \Sigma_{n-k+1+i}}^{\Sigma_{n}}V_{(1^{k-1-i})}\otimes V_{(n-k+1+i)} &&\text{(by Lemma 3.2)}\\ &= (-1)^{k-1}\sum_{j=0}^{k-1} (-1)^{j}\mathrm{Ind}_{\Sigma_{j} \times \Sigma_{n-j}}^{\Sigma_{n}} V_{(1^{j})} \otimes V_{(n-j)}. &&\text{(by taking $j = k-i-1$)}\end{aligned}$$ Next we apply \eqref{cis:prod}. $$Z_{\mathcal{H}(n)}=Z_{V_{(n-k+1,1^{k-1})}} = (-1)^{k-1}\sum_{j=0}^{k-1}(-1)^{j}Z_{V_{(1^{j})}}\cdot Z_{V_{(n-j)}}. \tag{3.3} \label{3.3}$$ Thus, $$Z_{\mathcal{H}}=(-1)^{k-1} \sum_{n\geq k} \sum_{j=0}^{k-1}(-1)^{j}Z_{V_{(1^{j})}} \cdot Z_{V_{(n-j)}}.\tag{3.4} \label{propproofequ}$$ Note that $Z_{V_{(1^{j})}}$ is the cycle index sum for the sign representation and $Z_{V_{(n-j)}}$ is the cycle index sum for the trivial representation. We claim that \eqref{propproofequ} is equal to \eqref{hooknodeg}. We will prove this claim in two cases: when the cardinality degree satisfies $n<k$ and when $n \geq k$. We will first do the case $n<k$. Clearly \eqref{propproofequ} is equal to 0 when $n<k$, as the sum starts at $n \geq k$ and thus has no terms. When $n<k$, \eqref{hooknodeg} is also 0 since the exponentials are inverses to one another: $$\begin{gathered} (-1)^{k-2} - (-1)^{k-2}\left(\exp\left(-\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right)_{\leq k-1}\left(\exp\left(\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right) =_{\leq k-1}\\ (-1)^{k-2} - (-1)^{k-2}\left(\exp\left(-\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right)\left(\exp\left(\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right)\right) = 0.\end{gathered}$$ Now we look at the case when the cardinality degree is $n\geq k$. It follows from \eqref{cis:com} that $$\sum_{n=0}^{\infty}Z_{V_{(1^{n})}} = \exp\left(\sum_{i=1}^{\infty}\frac{(-1)^{i-1}p_{i}}{i}\right).$$ By replacing $p_{i} \mapsto (-1)^{i}p_{i}$, we get $$\sum_{n=0}^{\infty}(-1)^{n}Z_{V_{(1^{n})}} = \exp\left(\sum_{i=1}^{\infty}\frac{-p_{i}}{i}\right).$$ Then, $$\sum_{n=0}^{k-1}(-1)^{n}Z_{V_{(1^{n})}} = \exp\left(\sum_{i=1}^{\infty}\frac{-p_{i}}{i}\right)_{\leq k-1}.$$ We also know that $$\sum_{n=0}^{\infty}Z_{V_{(n)}} = \exp\left(\sum_{i=1}^{\infty}\frac{p_{i}}{i}\right).$$ From these formulas, one can easily see that in cardinality degree $n \geq k$, \eqref{propproofequ} and \eqref{hooknodeg} are equal to one another. Thus in every arity $n$, \eqref{propproofequ} is equal to \eqref{hooknodeg}, completing the proof. Proof of Theorem 1 {#section4} ================== First we compute the plethysm of $\mathcal{L}ie$ and $\mathcal{H}_{1}^{(k)}$ using \eqref{plethysm:1} and \eqref{plethysm:2}.
$$\begin{gathered} Z_{\mathcal{L}ie \circ \mathcal{H}_{1}^{(k)}}(q;p_{1}, p_{2},p_{3},...)= \\ \sum_{i=1}^{\infty}\frac{-\mu(i)}{i}\ln\left(1-(-q)^{i(k-2)}+(-q)^{i(k-2)}\left[\exp\left(-\sum_{j=1}^{\infty}\frac{p_{ij}}{j}\right)\right]_{\leq i(k-1)}\left[\exp\left(\sum_{j=1}^{\infty}\frac{p_{ij}}{j}\right)\right]\right) .\tag{4.1}\label{lie:hook}\end{gathered}$$ Next we use \eqref{graded:susp} to compute the $\{d-1\}$ suspension of \eqref{lie:hook}: $$\begin{gathered} Z_{(\mathcal{L}ie \circ \mathcal{H}_{1}^{(k)})\{d-1\}}(q;p_{1}, p_{2},p_{3},...)=\\ q^{1-d}\sum_{i=1}^{\infty} \frac{-\mu(i)}{i}\ln\Bigg(1-(-q)^{i(k-2)}+\\ (-q)^{i(k-2)}\bigg[e^{-\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{ij(d-1)}p_{ij}}{j}}\bigg]_{\leq i(k-1)}\bigg[e^{\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{ij(d-1)}p_{ij}}{j}}\bigg]\Bigg). \tag{4.2} \label{lie,hook,susp}\end{gathered}$$ Now we will simply add $\mathbb{1}$, the trivial representation of $\Sigma_{1}$, to \eqref{lie,hook,susp}: $$\begin{gathered} Z_{\mathbb{1}\oplus(\mathcal{L}ie \circ \mathcal{H}_{1}^{(k)})\{d-1\}}(q;p_{1}, p_{2},p_{3},...)=\\p_{1} + q^{1-d}\sum_{i=1}^{\infty} \frac{-\mu(i)}{i}\ln\Bigg(1-(-q)^{i(k-2)}+\\ (-q)^{i(k-2)}\bigg[e^{-\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{ij(d-1)}p_{ij}}{j}}\bigg]_{\leq i(k-1)}\bigg[e^{\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{ij(d-1)}p_{ij}}{j}}\bigg]\Bigg). \tag{4.3} \label{lie,hook,susp,id}\end{gathered}$$ Finally, we again use \eqref{plethysm:1} and \eqref{plethysm:2} to compute the graded composition product of $Com$ with \eqref{lie,hook,susp,id} and get an explicit formula. $$\begin{gathered} Z_{Com \circ (\mathbb{1}\oplus(\mathcal{L}ie \circ \mathcal{H}_{1}^{(k)})\{d-1\})}(q;p_{1}, p_{2},p_{3},...)=\\ \exp\Bigg[\sum_{l=1}^{\infty}\frac{1}{l}\Bigg(p_{l}+(-1)^{d-1}(-q)^{l(1-d)}\sum_{i=1}^{\infty}\frac{-\mu(i)}{i}\ln\bigg(1-(-q)^{li(k-2)}\\+(-q)^{li(k-2)}\bigg[e^{-\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{lij(d-1)}p_{lij}}{j}}\bigg]_{\leq li(k-1)}\bigg[e^{\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{lij(d-1)}p_{lij}}{j}}\bigg]\bigg)\Bigg)\Bigg]. \tag{4.4}\label{com,lie,hook,susp}\end{gathered}$$ Recall that $E_{m}(y) = \frac{1}{m} \sum_{i\mid m}\left(\mu(i)y^{\frac{m}{i}}\right)$. Using the substitution $m=li$ and the fact that the exponential function and the natural logarithm are inverses to one another, one obtains \eqref{equ:bigthm}. $\Box$ Refinement ========== In [@D_T], the homology groups $H_{\ast}\mathcal{M}_{d}^{(k)}(n)$ are described as linear combinations of certain products of iterated brackets, where there are two types of brackets, long and short. Long brackets have exactly $k$ inputs and cannot have any other brackets as elements inside them. Short brackets have exactly 2 inputs, either of which may be a long or short bracket. *Example*: Let $n=7$, $k=3$. Two examples of homology classes are: $[\{x_{1},x_{3},x_{6}\},\{x_{2}, x_{4},x_{5}\}]\cdot x_{7}\in H_{5d-3}\mathcal{M}_{d}^{(3)}(7)$, and $[[[\{x_{1},x_{3},x_{5}\},x_{2}],x_{4}],x_{6}]\cdot x_{7} \in H_{5d-4}\mathcal{M}_{d}^{(3)}(7)$. Geometrically, these classes can be viewed as products of spheres. For example, when $n=4$ and $k=3$, one homology class is $[\{x_{1}, x_{2}, x_{4}\},x_{3}] \in H_{3d-2}\mathcal{M}_{d}^{(3)}(4)$. This class is represented by $S^{2d-1}\times S^{d-1}$: $$|x_{1}|^{2} + |x_{2}|^{2} + |x_{4}|^{2} = \epsilon^{2}, \quad x_{1}+x_{2}+x_{4}=0, \quad |x_{3}|^{2}=1, \quad (x_{1},x_{2},x_{3},x_{4}) \in (\mathbb{R}^{d})^{\times 4},$$ where $\epsilon \ll 1$. The symmetric sequence $H_{\ast}\mathcal{M}_{d}^{(k)}$ has a left module structure over the homology of the little $d$-disks operad, which is the operad of Poisson algebras [@fresse].
The short bracket is the Lie operation in this operad.[^2] The numbers of long and short brackets are additional gradings that we consider on $H_{\ast}\mathcal{M}_{d}^{(k)}(n)$. We add the variable $u$ to be responsible for the number-of-short-brackets grading and the variable $w$ to be responsible for the number-of-long-brackets grading in the graded trace used for the cycle index sum \eqref{def:cycle}. The sequence $Com$ does not contribute to these additional gradings and thus remains unchanged in the refinement. The graded suspension does not interact with the long and short brackets and thus also remains unchanged. However, there are short brackets in $\mathcal{L}ie$. In cardinality $k$, there are always $k-1$ (short) brackets, which is why we divide by $u$ and replace $p_{i}$ by $u^{i}p_{i}$ in the formula below. By abuse of notation, we also denote by $Z_{\mathcal{L}ie}$ the cycle index sum of $\mathcal{L}ie$ with this refinement: $$Z_{\mathcal{L}ie}(u,q;p_{1},p_{2},p_{3},...) = \sum_{i=1}^{\infty}\frac{-\mu(i)}{u}\frac{\ln(1-u^{i}p_{i})}{i}.$$ The space $\mathcal{H}_{1}^{(k)}(n)$ is a subspace of $H_{k-2}\mathcal{M}_{1}^{(k)}(n)$, defined as the subspace spanned by iterated brackets that have exactly one long bracket [@D_T]. This explains why we multiply by $w$ in the formula below. However, the iterated brackets of $\mathcal{H}_{1}^{(k)}(n)$ have exactly $n-k$ short brackets, which explains why, in the formula below, we divide by $u^{k}$ and replace $p_{i}$ by $u^{i}p_{i}$ in the refinement. Similarly, we abuse notation and denote the cycle index sum of $\mathcal{H}_{1}^{(k)}(n)$ with the refinement by $Z_{\mathcal{H}_{1}^{(k)}}$ as before. $$\begin{gathered} Z_{\mathcal{H}_{1}^{(k)}}(u,w,q;p_{1},p_{2},p_{3},...) =\\ \frac{w}{u^{k}}\left((-q)^{k-2}-(-q)^{k-2}\left(\exp\left(-\sum_{i=1}^{\infty}\frac{u^{i}p_{i}}{i}\right)\right)_{ \leq k-1}\left(\exp\left(\sum_{i=1}^{\infty}\frac{u^{i}p_{i}}{i}\right)\right)\right). \end{gathered}$$ The plethysm also affects the long and short brackets and is now defined as: $$Z_{M \circ N} = Z_{M} \ast Z_{N} = Z_{M}(u,w,q;p_{i} \mapsto p_{i} \ast Z_{N}),$$ where $$p_{i}\ast Z_{N} = Z_{N}(u \mapsto u^{i}; \; w \mapsto w^{i}; \; q \mapsto (-1)^{i-1}q^{i};\; p_{j}\mapsto p_{ij}).$$ For $k\geq 3$ and $d\geq 2$, $$\begin{gathered} Z_{H_{\ast}\mathcal{M}_{d}^{(k)}}(u,w,q;p_{1},p_{2},p_{3},...) = \\ e^{(\sum_{l=1}^{\infty}\frac{p_l}{l})}\prod_{m=1}^{\infty}\Bigg(1-\frac{w^{m}}{u^{m(k-1)}}\Bigg[(-q)^{m(k-2)} - \\ (-q)^{m(k-2)} \left(e^{-\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{mj(d-1)}u^{mj}p_{mj}}{j}}\right)_{\leq m(k-1)}\left(e^{\sum_{j=1}^{\infty}\frac{(-1)^{d-1}(-q)^{mj(d-1)}u^{mj}p_{mj}}{j}}\right)\Bigg]\Bigg)^{(-1)^{d}E_{m}\left(\frac{1}{(-q)^{d-1}u}\right)}. \tag{5.1} \label{refinement}\end{gathered}$$ Note that for $k=2$ or $d=1$ we do not get a splitting but rather a filtration. The formula can still be applied, and it computes the cycle index sum of the symmetric group action on the associated graded factor [@D_T]. The proof of Theorem 2 follows the same steps as the proof of Theorem 1. [^1]: Explicitly this formula appears in [@D_T]. [^2]: For the notion of operad and left module over an operad, see for example [@fresse].
--- abstract: 'A spectrahedron is the feasible set of a semidefinite program, [**SDP**]{}, i.e., the intersection of an affine set with the positive semidefinite cone. While strict feasibility is a generic property for random problems, there are many classes of problems where strict feasibility fails and this means that strong duality can fail as well. If the minimal face containing the spectrahedron is known, the [**SDP**]{}can easily be transformed into an equivalent problem where strict feasibility holds and thus strong duality follows as well. The minimal face is fully characterized by the range or nullspace of any of the matrices in its relative interior. Obtaining such a matrix may require many *facial reduction* steps and is currently not known to be a tractable problem for spectrahedra with *singularity degree* greater than one. We propose a *single* parametric optimization problem with a resulting type of *central path* and prove that the optimal solution is unique and in the relative interior of the spectrahedron. Numerical tests illustrate the efficacy of our approach and its usefulness in regularizing [**SDPs**]{}.' author: - '[Stefan Sremac](https://uwaterloo.ca/combinatorics-and-optimization/about/people/ssremac)[^1]' - '[Hugo Woerdeman](http://people.orie.cornell.edu/dd379)[^2]' - '[Henry Wolkowicz](http://www.math.uwaterloo.ca/~hwolkowi/) [^3]' bibliography: - '.bib' - '.bib' - '.bib' - '.bib' - '.bib' title: Complete Facial Reduction in One Step for Spectrahedra --- [**Keywords:**]{} Semidefinite programming, SDP, facial reduction, singularity degree, maximizing $\log \det$. [**AMS subject classifications:**]{} 90C22, 90C25 Introduction {#sec:intro} ============ A [*spectrahedron*]{} is the intersection of an affine manifold with the positive semidefinite cone. Specifically, if [*$\Sn$*]{} denotes the set of $n\times n$ symmetric matrices, [*$\Snp$*]{}$ \subset \Sn$ denotes the set of positive semidefinite matrices, ${{\mathcal A}}:\Sn \rightarrow {{\R^m\,}}$ is a linear map, and $b \in {{\R^m\,}}$, then $$\label{eq:feasset} {\textit{${\mathcal{F}}={\mathcal{F}}({{\mathcal A}},b)$}\index{${\mathcal{F}}={\mathcal{F}}({{\mathcal A}},b)$}} := \{ X\in \Snp : {{\mathcal A}}(X) = b \}$$ is a spectrahedron. We emphasize that ${\mathcal{F}}$ is given to us as a function of the algebra, the data ${{\mathcal A}},b$, rather than the geometry. Our motivation for studying spectrahedra arises from [*semidefinite programs, [**SDPs**]{}*]{}, where a linear objective is minimized over a spectrahedron. In contrast to [*linear programs*]{}, strong duality is not an inherent property of [**SDPs**]{}, but depends on a [*constraint qualification (CQ)*]{} such as the Slater CQ. For an [**SDP**]{}not satisfying the Slater CQ, the central path of the standard interior point algorithms is undefined and there is no guarantee of strong duality or convergence. Although instances where the Slater CQ fails are pathological, see e.g. [@MR3622250] and [@Pataki2017], they occur in many applications and this phenomenon has lead to the development of a number of regularization methods, [@RaTuWo:95; @Ram:95; @lusz00; @int:deklerk7; @LuoStZh:97]. In this paper we focus on the [*facial reduction*]{} method, [@bw1; @bw2; @bw3], where the optimization problem is restricted to the minimal face of $\Snp$ containing ${\mathcal{F}}$, denoted $\operatorname{face}({\mathcal{F}})$. We note that the different regularization methods for [**SDP**]{}are not fundamentally unrelated. 
Indeed, in [@RaTuWo:95] a relationship between the extended dual of Ramana, [@Ram:95], and the facial reduction approach is established and in[@MR3063940] the authors show that the dual expansion approach, [@lusz00; @LuoStZh:97] is a kind of ‘dual’ of facial reduction. When knowledge of the minimal face is available, the optimization problem is easily transformed into one for which the Slater CQ holds. Many of the applications of facial reduction to [**SDP**]{}rely on obtaining the minimal face through analysis of the underlying structure. See, for instance, the recent survey [@DrusWolk:16] for applications to hard combinatorial optimization and matrix completion problems. In this paper we are interested in instances of [**SDP**]{}where the minimal face can not be obtained analytically. An algorithmic approach was initially presented in [@bw3] and subsequent analyses of this algorithm as well as improvements, applications to [**SDP**]{}, and new approaches may be found in [@MR3108446; @MR3063940; @ScTuWonumeric:07; @perm; @permfribergandersen; @2016arXiv160802090P; @waki_mur_sparse]. While these algorithms differ in some aspects, their main structure is the same. At each iteration a subproblem is solved to obtain an *exposing vector* for a face (not necessarily minimal) containing ${\mathcal{F}}$. The [**SDP**]{}is then reduced to this smaller face and the process repeated until the [**SDP**]{}is reduced to $\operatorname{face}({\mathcal{F}})$. Since at each iteration, the dimension of the ambient face is reduced by one, at most $n-1$ iterations are necessary. We remark that this method is a kind of ‘dual’ approach, in the sense that the exposing vector obtained in the subproblem is taken from the dual of the smallest face available at the current iteration. We highlight two challenges with this approach: (1) each subproblem is itself an [**SDP**]{}and thereby computationally intensive and (2) at each iteration a decision must be made regarding the rank of the exposing vector. With regard to the first challenge, we note that it is really two-fold. The computational expense arises from the complexity of an individual subproblem and also from the number of such problems to be solved. The subproblems produced in [@ScTuWonumeric:07] are ‘nice’ in the sense that strong duality holds, however, each subproblem is an [**SDP**]{}and its computational complexity is comparable to that of the original problem. In [@perm] a relaxation of the subproblem is presented that is less expensive computationally, but may require more subproblems to be solved. The number of subproblems needed to solve depends of course on the structure of the problem but also on the method used to determine that facial reduction is needed. For algorithms using the theorem of the alternative, [@bw1; @bw2; @bw3], a theoretical lower bound, called the *singularity degree*, is introduced in [@S98lmi]. In [@MR2724357] an example is constructed for which the singularity degree coincides with the upper bound of $n-1$, i.e., the worst case exists. In [@permfribergandersen], the [*self-dual embedding*]{} algorithm of [@int:deklerk7] is used to determine whether facial reduction is needed. This approach may require fewer subproblems than the singularity degree. The second challenge is to determine which eigenvalues of the exposing vector obtained at each iteration are identically zero, a classically challenging problem. 
If the rank of the exposing vector is chosen too large, the problem may be restricted to a face which is smaller than the minimal face. This error results in losing part of the original spectrahedron. If on the other hand, the rank is chosen too small, the algorithm may require more iterations than the singularity degree. The algorithm of [@ScTuWonumeric:07] is proved to be backwards stable only when the singularity degree is one, and the arguments can not be extended to higher singularity degree problems due to possible error in the decision regarding rank. Our main contribution in this paper is a ‘primal’ approach to facial reduction, which does not rely on exposing vectors, but instead obtains a matrix in the relative interior of ${\mathcal{F}}$, denoted $\operatorname{{relint}}({\mathcal{F}})$ Since the minimal face is characterized by the range of any such matrix, we obtain a facially reduced problem in just one step. As a result, we eliminate costly subproblems and require only one decision regarding rank. While our motivation arises from [**SDPs**]{}, the problem of characterizing the relative interior of a spectrahedron is independent of this setting. The problem is formally stated below. \[prob:main\] Given a spectrahedron ${\mathcal{F}}({{\mathcal A}},b) \subseteq \Sn$, find $\bar{X}\in \operatorname{{relint}}({\mathcal{F}})$. This paper is organized as follows. In Section \[sec:prelim\] we introduce notation and discuss relevant material on [**SDP**]{}strong duality and facial reduction. We develop the theory for our approach in Section \[sec:paramprob\], prove convergence to the relative interior, and prove convergence to the analytic center under a sufficient condition. In Section \[sec:projGN\], we propose an implementation of our approach and we present numerical results in Section \[sec:numerics\]. We also present a method for generating instances of [**SDP**]{}with varied singularity degree in Section \[sec:numerics\]. We conclude the main part of the paper with an application to matrix completion problems in Section \[sec:psdcyclecompl\]. Notation and Background {#sec:prelim} ======================= Throughout this paper the ambient space is the Euclidean space of $n\times n$ real symmetric matrices, $\Sn$, with the standard [*trace inner product*]{} $$\langle X,Y \rangle := \operatorname{{trace}}(XY) = \sum_{i=1}^n \sum_{j=1}^n X_{ij}Y_{ij},$$ and the induced [*Frobenius norm*]{} $$\lVert X \rVert_F := \sqrt{\langle X, X\rangle }.$$ In the subsequent paragraphs, we highlight some well known results on the cone of positive semidefinite matrices and its faces, as well other useful results from convex analysis. For proofs and further reading we suggest [@SaVaWo:97; @MR2724357; @con:70]. The dimension of $\Sn$ is the triangular number $n(n+1)/2=: t(n)$. We define [*$\operatorname{{svec}}$*]{}$: \Sn \rightarrow {{\R^{\scriptsize{t(n)}}\,}}$ such that it maps the upper triangular elements of $X \in \Sn$ to a vector in ${{\R^{\scriptsize{t(n)}}\,}}$ where the off-diagonal elements are multiplied by $\sqrt{2}$. Then $\operatorname{{svec}}$ is an isometry and an isomorphism with [*$\operatorname{{sMat}}$*]{}$ := \operatorname{{svec}}^{-1}$. Moreover, for $X,Y \in \Sn$, $$\langle X,Y \rangle = \operatorname{{svec}}(X)^T \operatorname{{svec}}(Y).$$ The eigenvalues of any $X \in \Sn$ are real and indexed so as to satisfy, $$\lambda_1(X) \ge \lambda_2(X) \ge \cdots \ge \lambda_n(X),$$ and $\lambda(X) \in \Rn$ is the vector consisting of all the eigenvalues. 
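As a small illustration of the maps $\operatorname{{svec}}$ and $\operatorname{{sMat}}$ just described, here is a minimal sketch (assuming NumPy is available; the row-major ordering of the upper-triangular entries is our choice for illustration only):

```python
import numpy as np

def svec(X):
    """Stack the upper-triangular entries of symmetric X, off-diagonal entries scaled by sqrt(2)."""
    n = X.shape[0]
    iu = np.triu_indices(n)
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return scale * X[iu]

def sMat(x, n):
    """Inverse of svec: rebuild the symmetric n-by-n matrix from its svec vector."""
    X = np.zeros((n, n))
    iu = np.triu_indices(n)
    scale = np.where(iu[0] == iu[1], 1.0, 1.0 / np.sqrt(2.0))
    X[iu] = scale * x
    return X + np.triu(X, 1).T

# sanity check of the isometry <X,Y> = svec(X)^T svec(Y)
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
print(np.isclose(np.trace(A @ B), svec(A) @ svec(B)))  # True
print(np.allclose(sMat(svec(A), 4), A))                # True
```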
In terms of this notation, the operator 2-norm for matrices is defined as $\lVert X \rVert_2 := \max_i \lvert \lambda_i(X) \rvert$. When the argument to $\| \cdot \|_2$ is a vector, this denotes the usual Euclidean norm. The Frobenius norm may also be expressed in terms of eigenvalues: $\lVert X \rVert_F= \lVert \lambda(X) \rVert_2$. The set of [*positive semidefinite (PSD)*]{} matrices, $\Snp$, is a closed convex cone in $\Sn$, whose interior consists of the [*positive definite (PD)*]{} matrices, [*$\Snpp$*]{}. The cone $\Snp$ induces the [*Löwner partial order*]{} on $\Sn$. That is, for $X,Y \in \Sn$ we write $X\succeq Y$ when $X-Y \in \Snp$ and similarly $X\succ Y$ when $X-Y \in \Snpp$. For $X,Y \in \Snp$ the following equivalence holds: $$\label{eq:innerprodmatrixprod} \langle X, Y \rangle =0 \ \iff \ XY = 0.$$ \[def:face\] A closed convex cone $f \subseteq \Snp$ is a [*face*]{} of $\Snp$ if $$X,Y \in \Snp, \ X+Y \in f \ \implies \ X,Y \in f.$$ A nonempty face $f$ is said to be *proper* if $f \ne \Snp$ and $f \ne 0$. Given a convex set $C \subseteq \Snp$, the [*minimal face*]{} of $\Snp$ containing $C$, with respect to set inclusion, is denoted $\operatorname{face}(C)$. A face $f$ is said to be *exposed* if there exists $W \in \Snp \setminus \{0\}$ such that $$f = \{X \in \Snp : \langle W, X \rangle = 0\}.$$ Every face of $\Snp$ is exposed and the vector $W$ is referred to as an [*exposing vector*]{}. The faces of $\Snp$ may be characterized in terms of the range of any of their maximal rank elements. Moreover, each face is isomorphic to a smaller dimensional positive semidefinite cone, as is seen in the subsequent theorem. \[thm:face\] Let $f$ be a face of $\Snp$ and $X \in f$ a maximal rank element with rank $r$ and orthogonal spectral decomposition $$X=\begin{bmatrix} V & U \end{bmatrix} \begin{bmatrix} D & 0 \cr 0 & 0 \end{bmatrix} \begin{bmatrix} V & U \end{bmatrix}^T \in \Snp, \quad D\in \Srpp.$$ Then $f = V \Srp V^T$ and $\operatorname{{relint}}(f) = V \Srpp V^T$. Moreover, $W \in \Snp$ is an exposing vector for $f$ if and only if $W \in U\Snrpp U^T$. We refer to $U\Snrp U^T$, from the above theorem, as the [*conjugate face*]{}, denoted $f^c$. For any convex set $C$, an explicit form for $\operatorname{face}(C)$ and $\operatorname{face}(C)^c$ may be obtained from the orthogonal spectral decomposition of any of its maximal rank elements as in Theorem \[thm:face\]. For a linear map ${{\mathcal A}}: \Sn \rightarrow {{\R^m\,}}$, there exist $S_1, \dotso,S_m \in \Sn$ such that $$\begin{pmatrix} {{\mathcal A}}(X)\end{pmatrix}_i = \langle X,S_i \rangle, \quad \forall i \in \{1,\dotso,m\}.$$ The [*adjoint*]{} of ${{\mathcal A}}$ is the unique linear map ${{\mathcal A}}^* : {{\R^m\,}}\rightarrow \Sn$ satisfying $$\langle {{\mathcal A}}(X),y \rangle = \langle X,{{\mathcal A}}^*(y) \rangle, \quad \forall X \in \Sn, \, y \in {{\R^m\,}},$$ and has the explicit form ${{\mathcal A}}^*(y) = \sum_{i=1}^m y_i S_i$, i.e., $\operatorname{range}({{\mathcal A}}^*)=\operatorname{{span}}\{S_1,\ldots,S_m\}$. We define $A_i\in \Sn$ to form a basis for the nullspace, $\operatorname{null}({{\mathcal A}})=\operatorname{{span}}\{ A_1,\dotso,A_q\}$. For a non-empty convex set $C \subseteq \Sn$ the [*recession cone*]{}, denoted $C^{\infty}$, captures the directions in which $C$ is unbounded. That is $$\label{eq:recession} C^{\infty} := \{Y \in \Sn : X + \lambda Y \in C, \ \forall \lambda \ge 0, \ X \in C \}.$$ Note that the recession directions are the same at all points $X \in C$.
For a non-empty set $S \subseteq \Sn$, the [*dual cone*]{} (also referred to as the positive polar) is defined as $$\label{eq:dualcone} S^+ := \{ Y \in \Sn : \langle X, Y \rangle \ge 0, \ \forall X \in S\}.$$ A useful result regarding dual cones is that for cones $K_1$ and $K_2$, $$\label{eq:dualintersection} (K_1 \cap K_2)^+ = \operatorname{{cl}}(K_1^+ + K_2^+),$$ where [*$\operatorname{{cl}}(\cdot)$*]{} denotes set closure. Strong Duality in Semidefinite Programming and Facial Reduction {#sec:sdpstrongduality} --------------------------------------------------------------- Consider the standard primal form SDP $$\label{prob:sdpprimal} {\textbf{SDP}\,}\qquad \qquad {\textit{$p^{\star}$}\index{$p^{\star}$}}:=\min \{ \langle C,X\rangle : {{\mathcal A}}(X)=b, X\succeq 0\},$$ with Lagrangian dual $$\label{prob:sdpdual} {\textbf{D-SDP}\,}\qquad \qquad {\textit{$d^{\star}$}\index{$d^{\star}$}}:=\max \{ b^Ty : {{\mathcal A}}^*(y) \preceq C \}.$$ Let ${\mathcal{F}}$ denote the spectrahedron defined by the feasible set of ${\textbf{SDP}\,}$. One of the challenges in semidefinite programming is that strong duality is not an inherent property, but depends on a constraint qualification, such as the Slater CQ. \[thm:strongduality\] If the primal optimal value $p^{\star}$ is finite and ${\mathcal{F}}\cap \Snpp \ne \emptyset$, then the primal-dual pair ${\textbf{SDP}\,}$ and ${\textbf{D-SDP}\,}$ have a [*zero duality gap*]{}, $p^{\star}=d^{\star}$, and $d^{\star}$ is attained. Since the Lagrangian dual of the dual is the primal, this result can similarly be applied to the dual problem, i.e., if the primal-dual pair both satisfy the Slater CQ, then there is a zero duality gap and both optimal values are attained. Not only can strong duality fail in the absence of the Slater CQ, but the standard central path of an interior point algorithm is undefined. The facial reduction regularization approach of [@bw1; @bw2; @bw3] restricts [**SDP**]{} to the minimal face of $\Snp$ containing ${\mathcal{F}}$: $$\label{eq:sdpr} {\textbf{SDP-R}\,}\qquad \qquad \min \{\langle C,X \rangle : {{\mathcal A}}(X) = b,\, X \in \operatorname{face}({\mathcal{F}}) \}.$$ Since the dimension of ${\mathcal{F}}$ and $\operatorname{face}({\mathcal{F}})$ is the same, the Slater CQ holds for the facially reduced problem. Moreover, $\operatorname{face}({\mathcal{F}})$ is isomorphic to a smaller dimensional positive semidefinite cone, thus ${\textbf{SDP-R}\,}$ is itself a semidefinite program. The restriction to $\operatorname{face}({\mathcal{F}})$ may be obtained as in the results of Theorem \[thm:face\]. The dual of [**SDP-R**]{} restricts the slack variable to the dual cone $$Z=C-{{\mathcal A}}^*(y)\in \operatorname{face}({\mathcal{F}})^+.$$ Note that ${\mathcal{F}}^+=\operatorname{face}({\mathcal{F}})^+$. If we have knowledge of $\operatorname{face}({\mathcal{F}})$, i.e., we have the matrix $V$ such that $\operatorname{face}({\mathcal{F}}) = V\Srp V^T$, then we may replace $X$ in [**SDP**]{} by $VRV^T$ with $R \succeq 0$. After rearranging, we obtain [**SDP-R**]{}. Alternatively, if our knowledge of the minimal face is in the form of an exposing vector, say $W$, then we may obtain $V$ so that its columns form a basis for $\operatorname{null}(W)$. We see that the approach is straightforward when knowledge of $\operatorname{face}({\mathcal{F}})$ is available.
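To make the two reduction recipes above concrete, here is a minimal sketch (assuming NumPy; the function names and the tolerance are ours, for illustration only) of how the data $(C, S_{1},\dotso,S_{m}, b)$ is transformed once $V$, or an exposing vector $W$, is known:

```python
import numpy as np

def basis_of_nullspace(W, tol=1e-9):
    """Columns spanning null(W) for W >= 0: eigenvectors whose eigenvalue is numerically zero."""
    vals, vecs = np.linalg.eigh(W)           # ascending eigenvalues
    return vecs[:, vals <= tol * max(vals[-1], 1.0)]

def restrict_to_face(C, S_list, b, V):
    """Substituting X = V R V^T turns <C, X> into <V^T C V, R> and
    each constraint <S_i, X> = b_i into <V^T S_i V, R> = b_i."""
    return V.T @ C @ V, [V.T @ S @ V for S in S_list], b
```

Solving the resulting smaller problem $\min \{\langle V^TCV, R\rangle : \langle V^TS_iV, R\rangle = b_i,\ R \succeq 0\}$ and mapping back through $X = VRV^T$ is precisely the passage from [**SDP**]{} to [**SDP-R**]{} described above.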
In instances where such knowledge is unavailable, the following theorem of the alternative from [@bw3] guarantees the existence of exposing vectors that lie in $\operatorname{range}({{\mathcal A}}^*)$. \[thm:alternative\] Exactly one of the following systems is consistent: 1. ${{\mathcal A}}(X) = b$, $X\succ 0$, 2. $0 \ne {{\mathcal A}}^*(y) \succeq 0$, $b^Ty = 0$. The first alternative is just the Slater CQ, while if the second alternative holds, then ${{\mathcal A}}^*(y)$ is an exposing vector for a face containing ${\mathcal{F}}$. We may use a basis for $\operatorname{null}({{\mathcal A}}^*(y))$ to obtain a smaller [**SDP**]{}. If the Slater CQ holds for the new [**SDP**]{}we have obtained [**SDP-R**]{}, otherwise, we find an exposing vector and reduce the problem again. We outline the facial reduction procedure in Algorithm \[algo:fr\]. At each iteration, the dimension of the problem is reduced by at least one, hence this approach is bound to obtain [**SDP-R**]{}in at most $n-1$ iterations, assuming that the initial problem is feasible. If at each iteration the exposing vector obtained is of maximal rank then the number of iterations required to obtain [**SDP-R**]{}is referred to as the *singularity degree*, [@S98lmi]. For a non-empty spectrahedron, ${\mathcal{F}}$, we denote the singularity degree as $\operatorname{sd}=\operatorname{sd}({\mathcal{F}})$. \[algo:fr\] Initialize $S_i$ so that $({{\mathcal A}}(X))_i = \langle S_i,X \rangle$ for $i \in \{1,\dotso,m\}$ We remark that any algorithm pursuing the minimal face through exposing vectors of the form ${{\mathcal A}}^*(\cdot)$, must perform at least as many iterations as the singularity degree. The singularity degree could be as large as the trivial upper bound $n-1$ as is seen in the example of [@MR2724357]. Thus facial reduction may be very expensive computationally. On the other hand, from Theorem \[thm:face\] we see that $\operatorname{face}({\mathcal{F}})$ is fully characterized by the range of any of its relative interior matrices. That is, from any solution to Problem \[prob:main\] we may obtain the regularized problem [**SDP-R**]{}. A Parametric Optimization Approach {#sec:paramprob} ================================== In this section we present a parametric optimization problem that solves Problem \[prob:main\]. \[assump:main\] We make the following assumptions: 1. ${{\mathcal A}}$ is surjective, 2. ${\mathcal{F}}$ is non-empty, bounded and contained in a proper face of $\Snp$. The assumption on ${{\mathcal A}}$ is a standard regularity assumption and so is the non-emptiness assumption on ${\mathcal{F}}$. The necessity of ${\mathcal{F}}$ to be bounded will become apparent throughout this section, however, our approach may be applied to unbounded spectrahedra as well. We discuss such extensions in Section \[sec:unbounded\]. The assumption that ${\mathcal{F}}$ is contained in a proper face of $\Snp$ restricts our discussion to those instances of [**SDP**]{}that are interesting with respect to facial reduction. In the following lemma are stated two useful characterizations of bounded spectrahedra. \[lem:boundedchar\] The following holds: $${\mathcal{F}}\text{ is bounded} \ \iff \ \operatorname{null}({{\mathcal A}}) \cap \Snp= \{0\} \ \iff \operatorname{range}({{\mathcal A}}^*) \cap \Snpp \ne \emptyset.$$ For the first equivalence, ${\mathcal{F}}$ is bounded if and only if ${\mathcal{F}}^{\infty} = \{0\}$ by Theorem 8.4 of [@con:70]. 
It suffices, therefore, to show that ${\mathcal{F}}^{\infty} = \operatorname{null}({{\mathcal A}}) \cap \Snp$. It is easy to see that $(\Snp)^{\infty} = \Snp$ and that the recession cone of the affine manifold defined by ${{\mathcal A}}$ and $b$ is $\operatorname{null}({{\mathcal A}})$. By Corollary 8.3.3 of [@con:70] the recession cone of the intersection of convex sets is the intersection of the respective recession cones, yielding the desired result. Now let us consider the second equivalence. For the forward direction, observe that $$\begin{aligned} \operatorname{null}({{\mathcal A}}) \cap \Snp = \{0\} \ &\iff \ \left( \operatorname{null}({{\mathcal A}}) \cap \Snp \right)^+ = \{0 \}^+, \\ & \iff \ \operatorname{null}({{\mathcal A}})^{\perp} + \Snp = \Sn, \\ & \iff \ \operatorname{range}({{\mathcal A}}^*) + \Snp = \Sn.\end{aligned}$$ The second equivalence is due to \eqref{eq:dualintersection}, and one can verify that in this case $\operatorname{null}({{\mathcal A}})^{\perp} + \Snp$ is closed. Thus there exists $X \in \operatorname{range}({{\mathcal A}}^*)$ and $Y \in \Snp$ such that $X+Y=-I$. Equivalently, $-X = I + Y \in \Snpp$. For the converse, let $X \in \operatorname{range}({{\mathcal A}}^*) \cap \Snpp$ and suppose $0\ne S \in \operatorname{null}({{\mathcal A}}) \cap \Snp$. Then $\langle X,S \rangle = 0$ which implies, by \eqref{eq:innerprodmatrixprod}, that $XS = 0$. But then $\operatorname{null}(X) \ne \{0\}$, a contradiction. Let $r$ denote the maximal rank of any matrix in $\operatorname{{relint}}({\mathcal{F}})$ and let the columns of $V \in \R^{n\times r}$ form a basis for its range. In seeking a relative interior point of ${\mathcal{F}}$ we define a specific point from which we develop a parametric optimization problem. \[def:analytic\] The analytic center of ${\mathcal{F}}$ is the unique matrix $\hat{X}$ satisfying $$\label{eq:analytic} \hat{X} = \arg \max \{ \log \det (V^TXV) : X \in {\mathcal{F}}\}.$$ Under Assumption \[assump:main\] the analytic center is well-defined; this follows from the proof of Theorem \[thm:maxdet\], below. It is easy to see that the analytic center is indeed in the relative interior of ${\mathcal{F}}$ and therefore a solution to Problem \[prob:main\]. However, the optimization problem from which it is derived is intractable due to the unknown matrix $V$. If $V$ is simply removed from the optimization problem (replaced with the identity), then the problem is ill-posed since the objective does not take any finite values over the feasible set, which lies on the boundary of the [**SDP**]{} cone. To combat these issues, we propose replacing $V$ with $I$ and also perturbing ${\mathcal{F}}$ so that it intersects $\Snpp$. The perturbation we choose is that of replacing $b$ with ${b(\alpha) }:= b+ \alpha {{\mathcal A}}(I), \ \alpha>0$, thereby defining a family of spectrahedra $${{\mathcal{F}}(\alpha)}:= \{X \in \Snp : {{\mathcal A}}(X) = {b(\alpha) }\}.$$ It is easy to see that if ${\mathcal{F}}\ne \emptyset$ then ${{\mathcal{F}}(\alpha)}$ has positive definite elements for every $\alpha >0$. Indeed, ${\mathcal{F}}+ \alpha I \subset {{\mathcal{F}}(\alpha)}$. Note that the affine manifold may be perturbed by any positive definite matrix and $I$ is chosen for simplicity. We now consider the family of optimization problems for $\alpha > 0$: $$\label{eq:Palpha} {{\bf P(\alpha)}}\qquad \qquad \max \{ \log \det ( X) : X\in {{\mathcal{F}}(\alpha)}\}.$$ It is well known that the solution to this problem exists and is unique for each $\alpha > 0$. We include a proof in Theorem \[thm:maxdet\], below.
Moreover, since $\operatorname{face}({{\mathcal{F}}(\alpha)}) = \Snp$ for each $\alpha > 0$, the solution to ${{\bf P(\alpha)}}$ is in $\operatorname{{relint}}({{\mathcal{F}}(\alpha)})$ and is exactly the analytic center of ${{\mathcal{F}}(\alpha)}$. The intuition behind our approach is that as the perturbation gets smaller, i.e., $\alpha \searrow 0$, the solution to ${{\bf P(\alpha)}}$ approaches the relative interior of ${\mathcal{F}}$. This intuition is validated in Section \[sec:convergence\]. Specifically, we show that the solutions to ${{\bf P(\alpha)}}$ form a smooth path that converges to $\bar{X} \in \operatorname{{relint}}({\mathcal{F}})$. We also provide a sufficient condition for the limit point to be $\hat{X}$ in Section \[sec:analyticcenter\]. We note that our approach of perturbing the spectrahedron in order to use the $\log \det(\cdot)$ function is not entirely new. In [@fazelhindiboyd:01], for instance, the authors perturb a convex feasible set in order to approximate the rank function using $\log \det(\cdot)$. Unlike our approach, their perturbation is constant. Optimality Conditions --------------------- We choose the strictly concave function $\log \det (\cdot)$ for its elegant optimality conditions, though the maximization is equivalent to maximizing only the determinant. We treat it as an [*extended valued*]{} concave function that takes the value $-\infty$ if $X$ is singular. For this reason we refer to both functions $\det(\cdot)$ and $\log \det (\cdot)$ equivalently throughout our discussion. Let us now consider the optimality conditions for the problem ${{\bf P(\alpha)}}$. Similar problems have been thoroughly studied throughout the literature in matrix completions and [**SDP**]{}, e.g., [@GrJoSaWo:84; @MR2807419; @SaVaWo:97; @MR1614078]. Nonetheless, we include a proof for completeness and to emphasize its simplicity. \[thm:maxdet\] For every $\alpha >0$ there exists a unique ${{X(\alpha)}}\in {{\mathcal{F}}(\alpha)}\cap \Snpp$ such that $$\label{eq:maxlogdet} {{X(\alpha)}}=\arg \max \{ \log \det (X) : X \in {{\mathcal{F}}(\alpha)}\}.$$ Moreover, ${{X(\alpha)}}$ satisfies if, and only if, there exists a unique ${{y(\alpha)}}\in {{\R^m\,}}$ and a unique ${{Z(\alpha)}}\in \Snpp$ such that $$\label{eq:optimalsystem} \begin{bmatrix} {{\mathcal A}}^*({{y(\alpha)}})-{{Z(\alpha)}}\\ {{\mathcal A}}({{X(\alpha)}}) - {b(\alpha) }\\ {{Z(\alpha)}}{{X(\alpha)}}- I \end{bmatrix} = 0.$$ By Assumption \[assump:main\], ${\mathcal{F}}\ne \emptyset$ and bounded and it follows that ${{\mathcal{F}}(\alpha)}\cap \Snpp \ne \emptyset$ and by Lemma \[lem:boundedchar\] it is bounded. Moreover, $\log \det (\cdot)$ is a strictly concave function over ${{\mathcal{F}}(\alpha)}\cap \Snpp$ (a so-called barrier function) and $$\lim_{\det(X)\to 0} \log \det (X) = -\infty.$$ Thus, we conclude that the optimum ${{X(\alpha)}}\in {{\mathcal{F}}(\alpha)}\cap \Snpp$ exists and is unique. The Lagrangian of problem is $$\begin{aligned} {{\mathcal L} }(X,y) &= \log \det(X) - \langle y, {{\mathcal A}}(X) - b\rangle \\ &= \log \det(X) - \langle {{\mathcal A}}^*(y), X \rangle + \langle y, b\rangle.\end{aligned}$$ Since the constraints are linear, stationarity of the Lagrangian holds at ${{X(\alpha)}}$. Hence there exists ${{y(\alpha)}}\in {{\R^m\,}}$ such that $({{X(\alpha)}})^{-1} = {{\mathcal A}}^*({{y(\alpha)}}) =: {{Z(\alpha)}}$. Clearly ${{Z(\alpha)}}$ is unique, and since ${{\mathcal A}}$ is surjective, we conclude in addition that ${{y(\alpha)}}$ is unique. 
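For illustration, a minimal sketch of ${{\bf P(\alpha)}}$ using CVXPY follows. The use of CVXPY and its log-det objective is our choice of convenience for this example only; it is not the implementation proposed in Section \[sec:projGN\]. The data $S_{1},\dotso,S_{m}$, $b$ and the schedule of values of $\alpha$ are placeholders.

```python
import cvxpy as cp
import numpy as np

def solve_P_alpha(S_list, b, alpha):
    """Analytic center of F(alpha): maximize log det X subject to A(X) = b + alpha * A(I),
    where (A(X))_i = <S_i, X> and hence (A(I))_i = trace(S_i)."""
    n = S_list[0].shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0]
    constraints += [cp.trace(S @ X) == b[i] + alpha * np.trace(S)
                    for i, S in enumerate(S_list)]
    cp.Problem(cp.Maximize(cp.log_det(X)), constraints).solve()
    return X.value

# Following the parametric path, X(alpha) for a decreasing sequence of alpha
# approaches a point of relint(F), from which face(F) can be read off:
# for alpha in (1.0, 1e-1, 1e-2, 1e-3):
#     X_alpha = solve_P_alpha(S_list, b, alpha)
```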
The Unbounded Case {#sec:unbounded} ------------------ Before we continue with the convergence results, we briefly address the case of unbounded spectrahedra. The restriction to bounded spectrahedra is necessary in order to have solutions to \eqref{eq:Palpha}. There are certainly large families of [**SDPs**]{} where the assumption holds. Problems arising from liftings of combinatorial optimization problems often have the diagonal elements specified, and hence bound the corresponding spectrahedron. Matrix completion problems are another family where the diagonal is often specified. Nonetheless, many [**SDPs**]{} have unbounded feasible sets and we provide two methods for reducing such spectrahedra to bounded ones. First, we show that the boundedness of ${\mathcal{F}}$ may be determined by solving a projection problem. \[prop:boundtest\] Let ${\mathcal{F}}$ be a spectrahedron defined by the affine manifold ${{\mathcal A}}(X) = b$ and let $$P := \arg \min \ \{\lVert X - I \rVert_F : X\in \operatorname{range}({{\mathcal A}}^*) \}.$$ Then ${\mathcal{F}}$ is bounded if $P \succ 0$. First we note that $P$ is well defined and a singleton since it is the projection of $I$ onto a closed convex set. Now $P\succ 0$ implies that $\operatorname{range}({{\mathcal A}}^*) \cap \Snpp \ne \emptyset$ and by Lemma \[lem:boundedchar\] this is equivalent to ${\mathcal{F}}$ being bounded. The proposition gives us a sufficient condition for ${\mathcal{F}}$ to be bounded. Suppose this condition is not satisfied, but we have knowledge of some matrix $S \in {\mathcal{F}}$. Then for $t > 0$, consider the spectrahedron $${\mathcal{F}}' := \{ X \in \Sn : X\in {\mathcal{F}}, \ \operatorname{{trace}}(X) = \operatorname{{trace}}(S) + t \}.$$ Clearly ${\mathcal{F}}'$ is bounded. Moreover, we see that ${\mathcal{F}}' \subset {\mathcal{F}}$ and contains maximal rank elements of ${\mathcal{F}}$, hence $\operatorname{face}({\mathcal{F}}') = \operatorname{face}({\mathcal{F}})$. It follows that $\operatorname{{relint}}({\mathcal{F}}') \subset \operatorname{{relint}}({\mathcal{F}})$ and we have reduced the problem to the bounded case. Now suppose that the sufficient condition of the proposition does not hold and we do not have knowledge of a feasible element of ${\mathcal{F}}$. In this case we detect recession directions, elements of $\operatorname{null}({{\mathcal A}}) \cap \Snp$, and project to the orthogonal complement. Specifically, if ${\mathcal{F}}$ is unbounded then ${{\mathcal{F}}(\alpha)}$ is unbounded and problem \eqref{eq:Palpha} is unbounded. Suppose we have detected unboundedness, i.e., we have $X \in {\mathcal{F}}(\alpha)\cap \Snp$ with large norm. Then $X = S_0 + S$ with $S \in \operatorname{null}({{\mathcal A}}) \cap \Snp$ and $\lVert S \rVert \gg \lVert S_0 \rVert$. We then restrict ${\mathcal{F}}$ to the orthogonal complement of $S$, that is, we consider the new spectrahedron $${\mathcal{F}}' := \{X\in \Sn : X\in {\mathcal{F}}, \ \langle S,X\rangle = 0\}.$$ By repeated application, we eliminate a basis for the recession directions and obtain a bounded spectrahedron. From any of the relative interior points of this spectrahedron, we may obtain a relative interior point for ${\mathcal{F}}$ by adding to it the recession directions obtained throughout the reduction process. Convergence to the Relative Interior and Smoothness {#sec:convergence} --------------------------------------------------- By simple inspection it is easy to see that $({{X(\alpha)}},{{y(\alpha)}},{{Z(\alpha)}})$, as in \eqref{eq:optimalsystem}, does not converge as $\alpha \searrow 0$.
Indeed, under Assumption \[assump:main\], $$\lim_{\alpha \searrow 0} \lambda_n({{X(\alpha)}}) \rightarrow 0 \ \implies \ \lim_{\alpha \searrow 0} \lVert {{Z(\alpha)}}\rVert_2 \rightarrow +\infty.$$ It is therefore necessary to scale ${{Z(\alpha)}}$ so that it remains bounded. Let us look at an example. Consider the matrix completion problem: find $X \succeq 0$ having the form $$\begin{pmatrix} 1 & 1 & ? \cr 1 & 1 & 1 \cr ? & 1 & 1 \end{pmatrix}.$$ The set of solutions is indeed a spectrahedron with ${\mathcal A}$ and $b$ given by $${\mathcal A} \left( \begin{bmatrix} x_{11} & x_{12} & x_{13} \cr x_{12} & x_{22} & x_{23} \cr x_{13} & x_{23} & x_{33} \end{bmatrix} \right) := \begin{pmatrix} x_{11} \cr x_{12} \cr x_{22} \cr x_{23} \cr x_{33} \end{pmatrix},\ b := \begin{pmatrix} 1 \cr 1 \cr 1 \cr 1 \cr 1\end{pmatrix}.$$ In this case, it is not difficult to obtain $$X(\alpha ) = \begin{pmatrix}1+\alpha & 1 & \frac{1}{1+\alpha} \cr 1 & 1+\alpha & 1\cr \frac{1}{1+\alpha} & 1 & 1+\alpha \end{pmatrix},$$ with inverse $$X(\alpha)^{-1} =\frac{1}{\alpha(2+\alpha)} \begin{pmatrix}1+\alpha & -1 & 0 \cr -1 & \frac{\alpha^2+2\alpha+2}{ 1+\alpha} & -1\cr 0 & -1 & 1+\alpha \end{pmatrix}.$$ Clearly $ \lim_{\alpha \searrow 0} \lVert X(\alpha)^{-1} \rVert_2 \rightarrow +\infty.$ However, when we consider $\alpha X(\alpha)^{-1}$, and take the limit as $\alpha$ goes to 0 we obtain the bounded limit $$\bar{Z} = \begin{pmatrix} \frac12 & - \frac12 & 0 \cr - \frac12 & 1 & - \frac12 \cr 0 &- \frac12 & \frac12 \end{pmatrix}.$$ Note that $\bar{X}= X(0)$ is the $3\times 3$ matrix with all ones, ${\rm rank} \bar{X}+ {\rm rank} \bar{Z}= 3$, and $\bar{X} \bar{Z} = 0$. It turns out that multiplying ${{X(\alpha)}}^{-1}$ by $\alpha$ always bounds the sequence $({{X(\alpha)}},{{y(\alpha)}},{{Z(\alpha)}})$. Therefore, we consider the scaled system $$\label{eq:scaledoptimality} \begin{bmatrix} {{\mathcal A}}^*(y) - Z \\ {{\mathcal A}}(X) - {b(\alpha) }\\ ZX - \alpha I \end{bmatrix} = 0, \ X \succ 0, \ Z \succ 0, \ \alpha > 0,$$ that is obtained from by multiplying the last equation by $\alpha$. Abusing our previous notation, we let $({{X(\alpha)}},{{y(\alpha)}},{{Z(\alpha)}})$ denote a solution to *this* system and we refer to the set of all such solutions as the [*parametric path*]{}. The parametric path has clear parallels to the *central path* of [**SDP**]{}, however, it differs in one main respect: it is not contained in the relative interior of ${\mathcal{F}}$. In the main theorems of this section we prove that the parametric path is smooth and converges as $\alpha\searrow 0$ with the primal limit point in $\operatorname{{relint}}({\mathcal{F}})$. We begin by showing that the primal component of the parametric path has cluster points. \[lem:primalconverge\] Let $\bar{\alpha}> 0$. For every sequence $\{\alpha_k\}_{k\in {{\mathbb N}}} \subset (0,\bar{\alpha}]$ such that $\alpha_k \searrow 0$, there exists a subsequence $\{\alpha_l \}_{l\in {{\mathbb N}}}$ such that $X(\alpha_l) \rightarrow \bar{X} \in {\mathcal{F}}$. Let $\bar{\alpha}$ and $\{\alpha_k\}_{k\in {{\mathbb N}}}$ be as in the hypothesis. First we show that the sequence $X(\alpha_k)$ is bounded. 
For any $k \in {{\mathbb N}}$ we have $$\lVert X(\alpha_k) \rVert_2 \le \lVert X(\alpha_k)+ (\bar{\alpha} - \alpha_k) I \rVert_2 \le \max_{X\in {\mathcal{F}}(\bar{\alpha})} \lVert X\rVert_2 < +\infty.$$ The second inequality is due to $X(\alpha_k) + (\bar{\alpha} - \alpha_k)I \in {\mathcal{F}}(\bar{\alpha})$ and the third inequality holds since ${\mathcal{F}}(\bar{\alpha})$ is bounded. Thus there exists a convergent subsequence $\{\alpha_l\}_{l\in {{\mathbb N}}}$ with $X(\alpha_l) \rightarrow \bar{X}$, that clearly belongs to ${\mathcal{F}}$. For the dual variables we need only prove that $Z(\alpha)$ converges (for a subseqence) since this implies that $y(\alpha)$ also converges, by the assumption that ${{\mathcal A}}$ is surjective. As for ${{X(\alpha)}}$, we show that the tail of the parametric path corresponding to $Z(\alpha)$ is bounded. To this end, we first prove the following technical lemma. Recall that $\hat{X}$ is the analytic center of Definition \[def:analytic\]. \[lem:technicalbounded\] Let $\bar{\alpha} > 0$. There exists $M > 0$ such that for all $ \alpha \in (0,\bar{\alpha}]$, $$0 < \langle X(\alpha)^{-1}, \hat{X} + \alpha I \rangle \le M.$$ Let $\bar{\alpha}$ be as in the hypothesis and let $\alpha \in (0,\bar{\alpha}]$. The first inequality is trivial since both of the matrices are positive definite. For the second inequality, we have, $$\label{eq:boundednessfirst} \begin{split} \langle X(\bar{\alpha})^{-1} - X(\alpha)^{-1}, \hat{X} + \bar{\alpha}I - X(\alpha) \rangle &= \langle \frac{1}{\bar{\alpha}}{{\mathcal A}}^*(y(\bar{\alpha})) - \frac{1}{\alpha}{{\mathcal A}}^*(y(\alpha)), \hat{X} + \bar{\alpha}I - X(\alpha) \rangle, \\ &= \langle \frac{1}{\bar{\alpha}}y(\bar{\alpha}) - \frac{1}{\alpha}y(\alpha), {{\mathcal A}}(\hat{X} + \bar{\alpha}I) - {{\mathcal A}}(X(\alpha)) \rangle, \\ &= \langle \frac{1}{\bar{\alpha}}y(\bar{\alpha}) - \frac{1}{\alpha}y(\alpha), (\bar{\alpha} - \alpha) {{\mathcal A}}(I) \rangle, \\ &= \langle X(\bar{\alpha})^{-1} - X(\alpha)^{-1}, (\bar{\alpha} - \alpha) I \rangle, \\ &= (\bar{\alpha} - \alpha)\operatorname{{trace}}(X(\bar{\alpha})^{-1}) - \langle X(\alpha)^{-1}, (\bar{\alpha} - \alpha) I \rangle. \end{split}$$ On the other hand, $$\label{eq:boundednesssecond} \begin{split} \langle X(\bar{\alpha})^{-1} - X(\alpha)^{-1}, \hat{X} + \bar{\alpha}I - X(\alpha) \rangle &= n + \langle X(\bar{\alpha})^{-1}, \hat{X} \rangle + \bar{\alpha}\operatorname{{trace}}(X(\bar{\alpha})^{-1}) \\ & \qquad \qquad - \langle X(\bar{\alpha})^{-1}, X(\alpha) \rangle - \langle X(\alpha)^{-1}, \hat{X} + \bar{\alpha} I \rangle. 
\end{split}$$ Combining and we get $$\begin{aligned} (\bar{\alpha} - \alpha)\operatorname{{trace}}(X(\bar{\alpha})^{-1}) - \langle X(\alpha)^{-1}, (\bar{\alpha} - \alpha) I \rangle &= n + \langle X(\bar{\alpha})^{-1}, \hat{X} \rangle + \bar{\alpha}\operatorname{{trace}}(X(\bar{\alpha})^{-1}) \\ & \qquad \qquad - \langle X(\bar{\alpha})^{-1}, X(\alpha) \rangle - \langle X(\alpha)^{-1}, \hat{X} + \bar{\alpha} I \rangle.\end{aligned}$$ After rearranging, we obtain $$\label{eq:boundednessthird} \begin{split} \langle X(\alpha)^{-1}, \hat{X} + \alpha I \rangle &= n + \langle X(\bar{\alpha})^{-1}, \hat{X} \rangle + \bar{\alpha}\operatorname{{trace}}(X(\bar{\alpha})^{-1})- \langle X(\bar{\alpha})^{-1}, X(\alpha) \rangle \\ & \qquad \qquad - (\bar{\alpha} - \alpha)\operatorname{{trace}}(X(\bar{\alpha})^{-1}), \\ &= n + \alpha \operatorname{{trace}}(X(\bar{\alpha})^{-1}) + \langle X(\bar{\alpha})^{-1}, \hat{X} \rangle - \langle X(\bar{\alpha})^{-1}, X(\alpha) \rangle. \end{split}$$ The first and the third terms of the right hand side are positive constants. The second term is positive for every value of $\alpha$ and is bounded above by $\bar{\alpha}\operatorname{{trace}}(X(\bar{\alpha})^{-1})$ while the fourth term is bounded above by 0. Applying these bounds as well as the trivial lower bound on the left hand side, we get $$\label{eq:boundednessfourth} 0 < \langle X(\alpha)^{-1}, \hat{X} + \alpha I \rangle \le n + \bar{\alpha}\operatorname{{trace}}(X(\bar{\alpha})^{-1})+ \langle X(\bar{\alpha})^{-1}, \hat{X} \rangle =: M.$$ We need one more ingredient to prove that the parametric path corresponding to ${{Z(\alpha)}}$ is bounded. This involves bounding the trace inner product above and below by the [*maximal and minimal scalar products*]{} of the eigenvalues, respectively. \[lem:eigenvaluebound\] If $A,B \in \Sn$, then $$\sum_{i=1}^n \lambda_i(A)\lambda_{n+1-i}(B) \le \langle A, B \rangle \le \sum_{i=1}^n \lambda_i(A)\lambda_i(B).$$ We now have the necessary tools for proving boundedness and obtain the following convergence result. \[thm:2paramcluster\] Let $\bar{\alpha} >0$. For every sequence $\{ \alpha_{k} \}_{k \in {{\mathbb N}}} \subset (0,\bar{\alpha}]$ such that $\alpha_k \searrow 0$, there exists a subsequence $\{\alpha_{\ell}\}_{\ell \in {{\mathbb N}}}$ such that $$(X(\alpha_{\ell}),y(\alpha_{\ell}),Z(\alpha_{\ell})) \rightarrow (\bar{X},\bar{y},\bar{Z}) \in \{\Snp \times {{\R^m\,}}\times \Snp \}$$ with $\bar{X} \in \operatorname{{relint}}({\mathcal{F}})$ and $\bar{Z} = {{\mathcal A}}^*(\bar{y})$. Let $\bar{\alpha} > 0$ and $\{\alpha_k\}_{k\in {{\mathbb N}}}$ be as in the hypothesis. We may without loss of generality assume that $X(\alpha_k) \rightarrow \bar{X} \in {\mathcal{F}}$ due to Lemma \[lem:primalconverge\]. Let $k \in {{\mathbb N}}$. Combining the upper bound of Lemma \[lem:technicalbounded\] with the lower bound of Lemma \[lem:eigenvaluebound\] we have $$\sum_{i=1}^n \lambda_i(X(\alpha_{k})^{-1}) \lambda_{n+1-i}(\hat{X}+\alpha_{k} I) \le M.$$ Since the left hand side is a sum of positive terms, the inequality applies to each term: $$\lambda_i(X(\alpha_{k})^{-1}) \lambda_{n+1-i}(\hat{X}+\alpha_{k} I) \le M, \quad \forall i \in \{1,\dotso,n\}.$$ Equivalently, $$\label{eq:dualconverge} \lambda_i(X(\alpha_{k})^{-1}) \le \frac{M}{ \lambda_{n+1-i}(\hat{X}) + \alpha_{k}}, \quad \forall i \in \{1,\dotso,n\}.$$ Now exactly $r$ eigenvalues of $\hat{X}$ are positive. 
Thus for $i \in \{n-r+1,\dotso,n\}$ we have $$\lambda_i(X(\alpha_{k})^{-1}) \le \frac{M}{ \lambda_{n+1-i}(\hat{X}) + \alpha_{k}} \le \frac{M}{ \lambda_{n+1-i}(\hat{X})},$$ and we conclude that the $r$ smallest eigenvalues of $X(\alpha_{k})^{-1}$ are bounded above. Consequently, there are at least $r$ eigenvalues of $X(\alpha_{k})$ that are bounded away from 0 and $\operatorname{{rank}}(\bar{X}) \ge r$. On the other hand $\bar{X} \in {\mathcal{F}}$ and $\operatorname{{rank}}(\bar{X}) \le r$ and it follows that $\bar{X} \in \operatorname{{relint}}({\mathcal{F}})$. Now we show that $Z(\alpha_{k})$ is a bounded sequence. Indeed, from we have $$\lVert Z(\alpha_{k}) \rVert_2 = \alpha_{k}\lambda_1(X(\alpha_{k})^{-1}) \le \alpha_{k}\frac{M}{ \lambda_n(\hat{X}) + \alpha_{k}} = \alpha_{k}\frac{M}{ \alpha_{k}} = M.$$ The second to last equality follows from the assumption that $\hat{X} \in \Snp \setminus \Snpp$, i.e. $\lambda_n(\hat{X}) = 0$. Now there exists a subsequence $\{\alpha_{\ell}\}_{\ell \in {{\mathbb N}}}$ such that $$Z(\alpha_{\ell}) \rightarrow \bar{Z}, \ X(\alpha_{\ell}) \rightarrow \bar{X}.$$ Moreover, for each $\ell$, there exists a unique $y(\alpha_{\ell})\in {{\R^m\,}}$ such that $Z(\alpha_{\ell}) = {{\mathcal A}}^*(y(\alpha_{\ell}))$ and since ${{\mathcal A}}$ is surjective, there exists $\bar{y} \in {{\R^m\,}}$ such that $y(\alpha_{\ell}) \rightarrow \bar{y}$ and $\bar{Z} = {{\mathcal A}}^*(\bar{y})$. Lastly, the sequence $Z(\alpha_{\ell})$ is contained in the closed cone $\Snp$ hence $\bar{Z} \in \Snp$, completing the proof. We conclude this section by proving that the parametric path is smooth and has a limit point as $\alpha \searrow 0$. Our proof relies on the following lemma of Milnor and is motivated by an analogous proof for the central path of [**SDP**]{}in [@Halicka:01; @HalickaKlerkRoos:01]. Recall that an *algebraic set* is the solution set of a system of finitely many polynomial equations. \[lem:milnor\] Let ${{\mathcal V} }\subseteq \Rk$ be an algebraic set and ${{\mathcal U} }\subseteq \Rk$ be an open set defined by finitely many polynomial inequalities. Then if $0 \in \operatorname{{cl}}({{\mathcal U} }\cap {{\mathcal V} })$ there exists $\varepsilon > 0$ and a real analytic curve $p :[0,\varepsilon) \rightarrow \Rk$ such that $p(0)=0$ and $p(t) \in {{\mathcal U} }\cap {{\mathcal V} }$ whenever $t > 0$. \[thm:2paramconverge\] There exists $(\bar{X},\bar{y},\bar{Z}) \in \Snp \times {{\R^m\,}}\times \Snp$ with all the properties of Theorem \[thm:2paramcluster\] such that $$\lim_{\alpha \searrow 0} (X(\alpha),y(\alpha),Z(\alpha)) = (\bar{X},\bar{y},\bar{Z}).$$ Let $(\bar{X},\bar{y},\bar{Z})$ be a cluster point of the parametric path as in Theorem \[thm:2paramcluster\]. We define the set ${{\mathcal U} }$ as $${{\mathcal U} }:= \{(X,y,Z, \alpha) \in \Sn \times {{\R^m\,}}\times \Sn \times \R : \bar{X} + X \succ 0, \ \bar{Z} + Z \succ 0, \ Z = {{\mathcal A}}^*(y), \ \alpha > 0 \}.$$ Note that each of the positive definite constraints is equivalent to $n$ strict determinant (polynomial) inequalities. Therefore, ${{\mathcal U} }$ satisfies the assumptions of Lemma \[lem:milnor\]. Next, let us define the set ${{\mathcal V} }$ as, $${{\mathcal V} }:= \left \{ (X,y,Z,\alpha) \in \Sn \times {{\R^m\,}}\times \Sn \times \R: \begin{bmatrix} {{\mathcal A}}^*(y) - Z \\ {{\mathcal A}}(X) + \alpha{{\mathcal A}}(I) \\ (\bar{Z}+Z)(\bar{X} + X) - \alpha I \end{bmatrix} = 0 \right \},$$ and note that ${{\mathcal V} }$ is indeed a real algebraic set. 
Next we show that there is a one-to-one correspondence between ${{\mathcal U} }\cap {{\mathcal V} }$ and the parametric path without any of its cluster points. Consider $(\tilde{X},\tilde{y},\tilde{Z},\tilde{\alpha}) \in {{\mathcal U} }\cap {{\mathcal V} }$ and let $(X(\tilde{\alpha}),y(\tilde{\alpha}),Z(\tilde{\alpha}))$ be a point on the parametric path. We show that $$\label{eq:2paramfirst} (\bar{X} + \tilde{X}, \bar{y} + \tilde{y}, \bar{Z} + \tilde{Z}) = (X(\tilde{\alpha}),y(\tilde{\alpha}),Z(\tilde{\alpha})).$$ First of all $\bar{X} + \tilde{X} \succ 0$ and $\bar{Z} + \tilde{Z} \succ 0$ by inclusion in ${{\mathcal U} }$. Secondly, $(\bar{X} + \tilde{X}, \bar{y} + \tilde{y}, \bar{Z} + \tilde{Z})$ solves the system when $\alpha = \tilde{\alpha}$: $$\begin{bmatrix} {{\mathcal A}}^*(\bar{y} + \tilde{y}) - (\bar{Z} + \tilde{Z}) \\ {{\mathcal A}}(\bar{X} + \tilde{X}) - b(\tilde{\alpha}) \\ (\bar{Z} + \tilde{Z})(\bar{X} + \tilde{X}) - \tilde{\alpha}I \end{bmatrix} = \begin{bmatrix} {{\mathcal A}}^*(\bar{y}) - \bar{Z} + ({{\mathcal A}}^*(\tilde{y}) - \tilde{Z}) \\ b +\tilde{\alpha}{{\mathcal A}}(I) - b(\tilde{\alpha}) \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$ Since has a unique solution, holds. Thus, $$(\tilde{X},\tilde{y},\tilde{Z}) = (X(\tilde{\alpha}) - \bar{X},y(\tilde{\alpha}) - \bar{y}, Z(\tilde{\alpha})-\bar{Z}),$$ and it follows that ${{\mathcal U} }\cap {{\mathcal V} }$ is a translation of the parametric path (without its cluster points): $$\label{eq:2paramsecond} {{\mathcal U} }\cap {{\mathcal V} }= \{(X,y,Z,\alpha) \in \Sn \times {{\R^m\,}}\times \Sn \times \R : (X,y,Z) = (X(\alpha) - \bar{X},y(\alpha) - \bar{y}, Z(\alpha)-\bar{Z}), \ \alpha > 0 \}.$$ Next, we show that $0 \in \operatorname{{cl}}({{\mathcal U} }\cap {{\mathcal V} })$. To see this, note that $$(X(\alpha),y(\alpha),Z(\alpha)) \rightarrow (\bar{X},\bar{y},\bar{Z}),$$ as $\alpha \searrow 0$ along a subsequence. Therefore, along the same subsequence, we have $$( X(\alpha) - \bar{X}, y(\alpha) - \bar{y}, Z(\alpha) - \bar{Z}, \alpha) \rightarrow 0.$$ Each of the elements of this subsequence belongs to ${{\mathcal U} }\cap {{\mathcal V} }$ by and therefore $0 \in \operatorname{{cl}}({{\mathcal U} }\cap {{\mathcal V} })$. We have shown that ${{\mathcal U} }$ and ${{\mathcal V} }$ satisfy all the assumptions of Lemma \[lem:milnor\], hence there exists $\varepsilon > 0$ and an analytic curve $p: [0,\varepsilon) \rightarrow \Sn \times {{\R^m\,}}\times \Sn \times \R$ such that $p(0) = 0$ and $p(t) \in {{\mathcal U} }\cap {{\mathcal V} }$ for $t > 0$. Let $$p(t) = (X_{(t)},y_{(t)},Z_{(t)},\alpha_{(t)}),$$ and observe that by , we have $$\label{eq:2paramthird} (X_{(t)},y_{(t)},Z_{(t)}) = (X(\alpha_{(t)}) - \bar{X},y(\alpha_{(t)}) - \bar{y}, Z(\alpha_{(t)})-\bar{Z}).$$ Since $p$ is a real analytic curve, the map $g: [0,\varepsilon) \rightarrow \R$ defined as $g(t) = \alpha_{(t)}$ is real analytic, positive for $t>0$, and satisfies $$\lim_{t\searrow 0} g(t) = 0.$$ Since a nonconstant real analytic function has isolated critical points, there is an interval $[0,\bar{\varepsilon}) \subseteq [0,\varepsilon)$ on which $g$ is monotone. It follows that on $[0,\bar{\varepsilon})$, $g^{-1}$ is a well defined continuous function that converges to $0$ from the right. Note that for any $t > 0$, $(X(t),y(t),Z(t))$ is on the parametric path.
Therefore, $$\lim_{t\searrow 0}X(t) = \lim_{t\searrow 0} X(g(g^{-1}(t))) = \lim_{t\searrow 0} X(\alpha_{(g^{-1}(t))}).$$ Substituting with , we have $$\lim_{t\searrow 0}X(t) = \lim_{t\searrow 0} X_{(g^{-1}(t))} + \bar{X} = \bar{X}.$$ Similarly, $y(t)$ and $Z(t)$ converge to $\bar{y}$ and $\bar{Z}$ respectively. Thus every cluster point of the parametric path is identical to $(\bar{X},\bar{y},\bar{Z})$. We have shown that the tail of the parametric path is smooth and it has a limit point. Smoothness of the entire path follows from Berge’s Maximum Theorem, [@MR1464690], or [@MR1491362 Example 5.22]. Convergence to the Analytic Center {#sec:analyticcenter} ---------------------------------- The results of the previous section establish that the parametric path converges to $\operatorname{{relint}}({\mathcal{F}})$ and therefore the primal part of the limit point has exactly $r$ positive eigenvalues. If the smallest positive eigenvalue is very small it may be difficult to distinguish it from zero numerically. Therefore it is desirable for the limit point to be ‘substantially’ in the relative interior, in the sense that its smallest positive eigenvalue is relatively large. The analytic center has this property and so a natural question is whether the limit point coincides with the analytic center. In the following modification of an example of [@HalickaKlerkRoos:01], the parametric path converges to a point different from the analytic center. \[ex:noncvg\] Consider the [**SDP**]{}feasibility problem where ${{\mathcal A}}$ is defined by $$S_1 := \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix},\,\, S_2 := \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix}, \,\, S_3 := \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix},$$ $$S_4 := \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix},\,\, S_5 := \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix},$$ and $b := (1,0,0,0,0)^T$. One can verify that the feasible set consists of positive semidefinite matrices of the form $$X= \begin{bmatrix} 1-x_{22} & x_{12} & 0 & 0 \\ x_{12} & x_{22} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix},$$ and the analytic center is the determinant maximizer over the positive definite blocks of this set and satisfies $x_{22}=0.5$ and $x_{12}=0$. However, the parametric path converges to a matrix with $x_{22} = 0.6$ and $x_{12} = 0$.
To see this note that $${{\mathcal A}}(I) = \begin{pmatrix}2 & 1 & 1 & 0 & 1\end{pmatrix}^T,\quad b(\alpha) = \begin{pmatrix}1 + 2\alpha & \alpha & \alpha & 0 & \alpha \end{pmatrix}^T.$$ By feasibility, ${{X(\alpha)}}$ has the form $$\begin{bmatrix} 1+2\alpha-x_{22} & x_{12} & x_{13} & x_{14} \\ x_{12} & x_{22} & 0 & \frac{1}{2}(\alpha-x_{33}) \\ x_{13} & 0 & x_{33} & 0 \\ x_{14} & \frac{1}{2}(\alpha-x_{33}) & 0 & \alpha \\ \end{bmatrix}.$$ Moreover, the optimality conditions of Theorem \[thm:maxdet\] indicate that ${{X(\alpha)}}^{-1} \in \operatorname{range}({{\mathcal A}}^*)$ and hence is of the form $$\begin{bmatrix} * & 0 & 0 & 0 \\ 0 & * & * & * \\ 0& * & * & * \\ 0 & * & * & * \\ \end{bmatrix}.$$ It follows that $x_{12}=x_{13}=x_{14} = 0$ and ${{X(\alpha)}}$ has the form $$\begin{bmatrix} 1 + 2\alpha -x_{22} & 0 & 0 & 0 \\ 0 & x_{22} & 0 & \frac{1}{2}(\alpha-x_{33}) \\ 0 & 0 & x_{33} & 0 \\ 0 & \frac{1}{2}(\alpha-x_{33}) & 0 & \alpha \\ \end{bmatrix}.$$ Of all the matrices with this form, ${{X(\alpha)}}$ is the one maximizing the determinant, that is $$\begin{aligned} ([{{X(\alpha)}}]_{22}, [{{X(\alpha)}}]_{33})^T = \arg \max \ & x_{33}(1+2\alpha - x_{22})(\alpha x_{22} - \frac{1}{4}(\alpha - x_{33})^2), \\ s.t. \ & 0 < x_{22} < 1+2\alpha, \\ & x_{33} > 0, \\ & \alpha x_{22} > \frac{1}{4}(\alpha - x_{33})^2.\end{aligned}$$ Due to the strict inequalities, the maximizer is a stationary point of the objective function. Computing the derivative with respect to $x_{22}$ and $x_{33}$ we obtain the equations $$\begin{aligned} x_{33}(-(\alpha x_{22} - \frac{1}{4}(\alpha - x_{33})^2) + \alpha(1+2\alpha-x_{22}) &= 0, \\ (1+2\alpha-x_{22})((\alpha x_{22} - \frac{1}{4}(\alpha - x_{33})^2) + \frac{1}{2}x_{33}(\alpha-x_{33})) &= 0.\end{aligned}$$ Since $x_{33} > 0$ and $(1+2\alpha-x_{22}) > 0$, we may divide them out. Then solving each equation for $x_{22}$ we get $$\begin{aligned} \label{ex:first} x_{22} &= \frac{1}{8\alpha}(\alpha - x_{33})^2 + \alpha + \frac{1}{2}, \\ \label{ex:second} x_{22} &= \frac{1}{4\alpha}(\alpha - x_{33})^2 - \frac{1}{2\alpha}x_{33}(\alpha-x_{33}).\end{aligned}$$ Substituting into we get $$\begin{aligned} 0 &= \frac{1}{4\alpha}(\alpha - x_{33})^2 - \frac{1}{2\alpha}x_{33}(\alpha-x_{33}) - \frac{1}{8\alpha}(\alpha - x_{33})^2 - \alpha - \frac{1}{2}, \\ &= \frac{1}{8\alpha}(\alpha - x_{33})^2 - \frac{1}{2}x_{33} + \frac{1}{2\alpha}x_{33}^2 - \alpha - \frac{1}{2}, \\ &= \frac{1}{8\alpha}x_{33}^2 - \frac{1}{4}x_{33} +\frac{1}{8}\alpha - \frac{1}{2}x_{33} + \frac{1}{2\alpha}x_{33}^2 - \alpha - \frac{1}{2}, \\ &= \frac{5}{8\alpha}x_{33}^2 - \frac{3}{4}x_{33} +\frac{1}{8}\alpha - \alpha - \frac{1}{2}, \\\end{aligned}$$ Now we solve for $x_{33}$, $$\begin{aligned} x_{33} &= \frac{\frac{3}{4} \pm \sqrt{ \frac{9}{16} - 4(\frac{5}{8\alpha})(\frac{1}{8}\alpha - \alpha - \frac{1}{2})}}{2\frac{5}{8\alpha}}, \\ &= \frac{3\alpha}{5} \pm \frac{4\alpha}{5}\sqrt{ \frac{11\alpha + 5}{4\alpha}}, \\ &= \frac{1}{5}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}).\end{aligned}$$ Since $x_{33}$ is fully determined by the stationarity constraints, we have $[{{X(\alpha)}}]_{33} = x_{33}$ and $[{{X(\alpha)}}]_{33} \rightarrow 0$ as $\alpha \searrow 0$. 
Substituting this expression for $x_{33}$ into we get $$\begin{aligned} [{{X(\alpha)}}]_{22} &= \frac{1}{8\alpha}(\alpha - \frac{1}{5}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}))^2 + \alpha + \frac{1}{2}, \\ &= \frac{1}{8\alpha}(\alpha^2 - 2\alpha \frac{1}{5}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}) + \frac{1}{25}(9\alpha^2 + 12\alpha \sqrt{\alpha}\sqrt{ 11\alpha + 5} + 4\alpha(11\alpha+5))) + \alpha + \frac{1}{2}, \\ &= \frac{1}{8}\alpha - \frac{1}{20}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}) + \frac{1}{200}(9\alpha + 12 \sqrt{\alpha}\sqrt{ 11\alpha + 5} + 4(11\alpha+5)) + \alpha + \frac{1}{2}, \\ &= \frac{31}{25}\alpha - \frac{1}{25}\sqrt{\alpha}\sqrt{ 11\alpha + 5} + \frac{6}{10}.\end{aligned}$$ Now it is clear that $[{{X(\alpha)}}]_{22} \rightarrow 0.6$ as $\alpha \searrow 0$. ### A Sufficient Condition for Convergence to the Analytic Center {#sec:sufficientanalytic} Recall that $\operatorname{face}({\mathcal{F}}) = V\Srp V^T$. To simplify the discussion we may assume that $V = \begin{bmatrix} I \\ 0 \end{bmatrix}$, so that $$\label{eq:facialstructure} \operatorname{face}({\mathcal{F}}) = \begin{bmatrix} \Srp &0 \\ 0 & 0 \end{bmatrix}.$$ This follows from the rich automorphism group of $\Snp$, that is, for any full rank $W\in {\R^{n \times n}}$, we have $W\Snp W^T = \Snp$. Moreover, it is easy to see that there is a one-to-one correspondence between relative interior points under such transformations. Let us now express ${\mathcal{F}}$ in terms of $\operatorname{null}({{\mathcal A}})$, that is, if $A_0 \in {\mathcal{F}}$ and recall that $A_1,\dotso, A_q, \, q=t(n)-m,$ form a basis for $\operatorname{null}({{\mathcal A}})$, then $${\mathcal{F}}= \left( A_0 + \operatorname{{span}}\{ A_1,\dotso,A_q\} \right) \cap \Snp.$$ Similarly, $${{\mathcal{F}}(\alpha)}= \left( \alpha I + A_0 + \operatorname{{span}}\{ A_1,\dotso,A_q\} \right) \cap \Snp.$$ Next, let us partition $A_i$ according to the block structure of : $$\label{eq:partNi} A_i = \begin{bmatrix} L_i & M_i \cr M_i^T & N_i \end{bmatrix} , \quad i\in \{0, \ldots , q\}.$$ Since $A_0 \in {\mathcal{F}}$, from we have $N_0 = 0$ and $M_0 = 0$. Much of the subsequent discussion focuses on the linear pencil $\sum_{i=1}^q x_iN_i$. Let ${{\mathcal N\,}}$ be the linear mapping such that $$\operatorname{null}({{\mathcal N\,}}) = \left \{ \sum_{i=1}^q x_iN_i : x \in \Rq \right \}.$$ \[lem:maxdetN\] Let $\{N_1,\dotso,N_q\}$ be as in , $\operatorname{{span}}\{N_1,\dotso,N_q\} \cap \Snp = \{0\}$, and let $$\label{eq:Q} Q := \arg \max \{\log \det (X): X = I + \sum_{i=1}^q x_i N_i \succ 0, \ x \in \Rq \}.$$ Then for all $\alpha >0$, $$\label{eq:alphaQ} \alpha Q = \arg \max \{\log \det (X): X = \alpha I + \sum_{i=1}^q x_i N_i \succ 0, \ x \in \Rq \}.$$ We begin by expressing $Q$ in terms of ${{\mathcal N\,}}$: $$Q = \arg \max \{\log \det (X): {{\mathcal N\,}}(X) = {{\mathcal N\,}}(I) \}.$$ By the assumption on the span of the matrices $N_i$ and by Lemma \[lem:boundedchar\], the feasible set of is bounded. Moreover, the feasible set contains positive definite matrices, hence all the assumptions of Theorem \[thm:maxdet\] are satisfied. It follows that $Q$ is the unique feasible, positive definite matrix satisfying $Q^{-1} \in \operatorname{range}( {{\mathcal N\,}}^*)$. Moreover, $\alpha Q$ is positive definite, feasible for , and $(\alpha Q)^{-1} \in \operatorname{range}({{\mathcal N\,}}^*)$. Therefore $\alpha Q$ is optimal for . Now we prove that the parametric path converges to the analytic center under the condition of Lemma \[lem:maxdetN\].
\[thm:analyticcenter\] Let $\{N_1,\dotso,N_q\}$ be as in . If $\operatorname{{span}}\{N_1,\dotso,N_q\} \cap \Snp = \{0\}$ and $\bar{X}$ is the limit point of the primal part of the parametric path as in Theorem \[thm:2paramconverge\], then $\bar{X} = \hat{X}$. Let $$\bar{X} =: \begin{bmatrix} \bar{Y} & 0 \\ 0 & 0 \end{bmatrix},\ \hat{X} =: \begin{bmatrix} \hat{Y} & 0 \\ 0 & 0 \end{bmatrix}$$ and suppose, for eventual contradiction, that $\bar{Y} \ne \hat{Y}$. Then let $r,s \in \R$ be such that $$\det(\bar{Y}) < r < s < \det(\hat{Y}).$$ Let $Q$ be as in Lemma \[lem:maxdetN\] and let $x \in \Rq$ satisfy $Q = I + \sum_{i=1}^q x_iN_i$. Now for any $\alpha >0$ we have $$\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i) = \begin{pmatrix} \hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_i L_i & \alpha \sum_{i=1}^q x_iM_i \cr \alpha \sum_{i=1}^q x_i M_i^T & \alpha Q \end{pmatrix} .$$ Note that there exists $\varepsilon >0$ such that $\hat{X} + \alpha \sum_{i=1}^q x_iA_i \succeq 0$ whenever $\alpha \in (0,\varepsilon)$. It follows that $$\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i) \in {{\mathcal{F}}(\alpha)}, \quad \forall \alpha \in (0,\varepsilon).$$ Taking the determinant, we have $$\begin{aligned} \frac{1}{\alpha^{n-r}} \det (\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i)) &= \frac{1}{\alpha^{n-r}}\det \left( \alpha Q-\alpha^2 (\sum_{i=1}^q x_iM_i ) (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i )^{-1} (\sum_{i=1}^q x_iM_i^T) \right) \\ &\qquad \qquad \times\det (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i), \\ &= \det \left( Q-\alpha (\sum_{i=1}^q x_iM_i ) (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i )^{-1} (\sum_{i=1}^q x_iM_i^T) \right) \\ &\qquad \qquad \times\det (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i).\end{aligned}$$ Now we have $$\lim_{\alpha \searrow 0} \ \frac{1}{\alpha^{n-r}} \det (\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i)) = \det(Q)\det(\hat{Y}).$$ Thus, there exists $\sigma \in (0,\varepsilon)$ so that for $\alpha \in (0,\sigma )$ we have $$\det (\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i)) > s \alpha^{n-r} \det (Q) .$$ As ${{X(\alpha)}}$ is the determinant maximizer over ${{\mathcal{F}}(\alpha)}$, we also have $$\label{eq:detX} \det( {{X(\alpha)}}) > s \alpha^{n-r} \det( Q ), \quad \forall \alpha \in (0, \sigma ).$$ On the other hand ${{X(\alpha)}}\rightarrow \bar{X}$ and let $${{X(\alpha)}}=: \begin{bmatrix} \alpha I + \sum_{i=1}^q x(\alpha)_i L_i & \sum_{i=1}^q x(\alpha)_i M_i \\ \sum_{i=1}^q x(\alpha)_i M^T_i & \alpha I + \sum_{i=1}^q x(\alpha)_i N_i \end{bmatrix}.$$ Then $\alpha I + \sum_{i=1}^q x(\alpha)_i L_i \rightarrow \bar{Y}$ and there exists $\delta \in (0,\sigma)$ such that for all $\alpha \in (0,\delta)$, $$\det(\alpha I + \sum_{i=1}^q x(\alpha)_i L_i) < r.$$ Moreover, by definition of $Q$, $$\det(\alpha I + \sum_{i=1}^q x(\alpha)_i N_i) \le \det(\alpha Q) = \alpha^{n-r} \det(Q).$$ To complete the proof, we apply the Hadamard-Fischer inequality to $\det({{X(\alpha)}})$. For $\alpha \in (0,\delta)$ we have $$\det({{X(\alpha)}}) \le \det(\alpha I + \sum_{i=1}^q x(\alpha)_i L_i)\det(\alpha I + \sum_{i=1}^q x(\alpha)_i N_i) < r\alpha^{n-r} \det( Q),$$ a contradiction of . Note that Example \[ex:noncvg\] fails the hypotheses of Theorem \[thm:analyticcenter\]. Indeed, the matrix $\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 2 & 0 \\ 0 & -1 & 0 & 0 \\ \end{bmatrix}$ lies in $\operatorname{null}({{\mathcal A}})$ and the bottom $2\times 2$ block is nonzero and positive semidefinite. 
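The limits computed in Example \[ex:noncvg\] are easy to check numerically. The following short sketch (our own illustration, assuming NumPy is available; it is not part of the development above) evaluates the stationary value of $x_{33}$ and back-substitutes it into the first stationarity relation to obtain $x_{22}$:

```python
import numpy as np

def x33(alpha):
    # stationary value of [X(alpha)]_33 derived in Example [ex:noncvg]
    return (3.0 * alpha + 2.0 * np.sqrt(alpha) * np.sqrt(11.0 * alpha + 5.0)) / 5.0

def x22(alpha):
    # back-substitution into x22 = (1/(8 alpha)) (alpha - x33)^2 + alpha + 1/2
    return (alpha - x33(alpha)) ** 2 / (8.0 * alpha) + alpha + 0.5

for alpha in [1e-1, 1e-3, 1e-5, 1e-8]:
    print(f"alpha = {alpha:.0e}   x22 = {x22(alpha):.6f}   x33 = {x33(alpha):.3e}")
```

The printed values show $[{{X(\alpha)}}]_{22}$ approaching $0.6$ rather than the analytic-center value $0.5$, while $[{{X(\alpha)}}]_{33}$ tends to $0$.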
The Projected Gauss-Newton Method {#sec:projGN} ================================= We have constructed a parametric path that converges to a point in the relative interior of ${\mathcal{F}}$. In this section we propose an algorithm to follow the path to its limit point. We do not prove convergence of the proposed algorithm and address its performance in Section \[sec:numerics\]. We follow the (projected) Gauss-Newton approach (the nonlinear analog of Newton’s method) originally introduced for [**SDPs**]{}in [@KrMuReVaWo:98] and improved more recently in [@KrukDoanW:10]. This approach has been shown to have improved robustness compared to other symmetrization approaches. For well posed problems, the Jacobian for the search direction remains full rank in the limit to the optimum. Scaled Optimality Conditions ---------------------------- The idea behind this approach is to view the system defining the parametric path as an overdetermined map and use the Gauss-Newton (GN) method for nonlinear systems. In the process, the linear feasibility equations are eliminated and the GN method is applied to the remaining bilinear equation. For $\alpha \ge0$ let $G_{\alpha}: \Snp \times {{\R^m\,}}\times \Snp \rightarrow \Sn \times {{\R^m\,}}\times {\R^{n \times n}}$ be defined as $$\label{eq:GdefGN} G_{\alpha}(X,y,Z):= \begin{bmatrix} {{\mathcal A}}^*(y)-Z \\ {{\mathcal A}}(X) -{b(\alpha) }\\ ZX - \alpha I \\ \end{bmatrix}.$$ The solution to $G_{\alpha}(X,y,Z)= 0$ is exactly $({{X(\alpha)}},{{y(\alpha)}},{{Z(\alpha)}})$ when $\alpha > 0$; and for $\alpha = 0$ the solution set is $${\mathcal{F}}\times ({{\mathcal A}}^*)^{-1}({{\mathcal D} }) \times {{\mathcal D} }, \quad {{\mathcal D} }:= \operatorname{range}({{\mathcal A}}^*) \cap \operatorname{face}({\mathcal{F}})^c.$$ Clearly, the limit point of the parametric path satisfies $G_0(X,y,Z) = 0$. We fix $\alpha > 0$. The GN direction, $(dX,dy,dZ)$, uses the overdetermined [*GN system*]{} $$\label{eq:GNorig} G_{\alpha}'(X,y,Z)\begin{bmatrix} dX \\ dy \\ dZ \end{bmatrix} = -G_{\alpha}(X,y,Z).$$ Note that the search direction is a strict descent direction for the norm of the residual, $\| \operatorname{{vec}}(G_{\alpha}(X,y,Z)) \|_2^2$, when the Jacobian is full rank. The size of the problem is then reduced by projecting out the first two equations. We are left with a single linearization of the bilinear complementarity equation, i.e., $n^2$ equations in only $t(n)$ variables. The [*least squares solution*]{} yields the projected GN direction after backsolves. We prefer steps of length $1$, however, the primal and dual step lengths, $\alpha_p$ and $\alpha_d$ respectively, are reduced, when necessary, to ensure strict feasibility: $X + \alpha_p dX \succ 0$ and $Z+\alpha_d dZ \succ 0$. The parameter $\alpha$ is then reduced and the procedure repeated. On the parametric path, $\alpha$ satisfies $$\label{eq:alpharep} \alpha = \frac{\langle {{Z(\alpha)}}, {{X(\alpha)}}\rangle }{n}.$$ Therefore, this is a good estimate of the target for $\alpha$ near the parametric path. As is customary, we then use a fixed $\sigma \in (0,1)$ to move the target towards optimality, $\alpha \leftarrow \sigma \alpha$. ### Linearization and GN Search Direction For the purposes of this discussion we vectorize the variables and data in $G_{\alpha}$. 
Let $A \in \R^{m\times t(n)}$ be the matrix representation of ${{\mathcal A}}$, that is $$A_{i,:} := \operatorname{{svec}}(S_i)^T, \quad i\in \{1,\dotso,m\}.$$ Let $N \in \R^{t(n)\times (t(n)-m)}$ be such that its columns form a basis for $\operatorname{null}(A)$ and let $\hat{x}$ be a particular solution to $Ax={b(\alpha) }$, e.g., the least squares solution. Then the affine manifold determined by the equation ${{\mathcal A}}(X)={b(\alpha) }$ is equivalent to that obtained from the equation $$x = \hat{x} + Nv, \quad v\in \R^{t(n)-m}.$$ Moreover, if $z:=\operatorname{{svec}}(Z)$, we have the vectorization $$g_{\alpha}(x,v,y,z) := \begin{bmatrix} A^Ty - z \\ x-\hat{x}-Nv \\ \operatorname{{sMat}}(z)\operatorname{{sMat}}(x) - \alpha I \end{bmatrix} =: \begin{bmatrix} r_d \\ r_p \\ R_c \end{bmatrix}. \label{eq:systemg}$$ Now we show how the first two equations of the above system may be projected out, thereby reducing the size of the problem. First we have $$g'_{\alpha}(x,v,y,z)\begin{pmatrix} dx \\ dv \\ dy \\ dz \end{pmatrix} = \begin{bmatrix} A^Tdy - dz \\ dx - Ndv \\ \operatorname{{sMat}}(dz)\operatorname{{sMat}}(x) + \operatorname{{sMat}}(z)\operatorname{{sMat}}(dx) \end{bmatrix},$$ and it follows that the GN step as in is the least squares solution of the system $$\begin{bmatrix} A^Tdy - dz \\ dx - Ndv \\ \operatorname{{sMat}}(dz)\operatorname{{sMat}}(x) + \operatorname{{sMat}}(z)\operatorname{{sMat}}(dx) \end{bmatrix} = - \begin{bmatrix} r_d \\ r_p \\ R_c \\ \end{bmatrix}.$$ Since the first two equations are linear, we get $dz = A^Tdy+r_d$ and $dx = Ndv - r_p$. Substituting into the third equation we have, $$\operatorname{{sMat}}(A^Tdy + r_d)\operatorname{{sMat}}(x) + \operatorname{{sMat}}(z)\operatorname{{sMat}}(Ndv - r_p) = -R_c.$$ After moving all the constants to the right hand side we obtain the projected GN system in $dy$ and $dv$, $$\label{eq:projGN} \operatorname{{sMat}}(A^Tdy)\operatorname{{sMat}}(x) + \operatorname{{sMat}}(z)\operatorname{{sMat}}(Ndv) = -R_c + \operatorname{{sMat}}(z)\operatorname{{sMat}}(r_p) - \operatorname{{sMat}}(r_d)\operatorname{{sMat}}(x).$$ The least squares solution to this system is the exact GN direction when $r_d = 0$ and $r_p=0$, otherwise it is an approximation. We then use the equations $dz = A^Tdy+r_d$ and $dx = Ndv - r_p$ to obtain search directions for $x$ and $z$. In [@KrukDoanW:10 Theorem 1], it is proved that if the solution set of $G_0(X,y,Z) = 0$ is a singleton such that $X+Z \succ 0$ and the starting point of the projected GN algorithm is sufficiently close to the parametric path then the algorithm, with a crossover modification, converges quadratically. As we showed above, the solution set to our problem is $${\mathcal{F}}\times ({{\mathcal A}}^*)^{-1}({{\mathcal D} }) \times {{\mathcal D} },$$ which is not a singleton as long as ${\mathcal{F}}\ne \emptyset$. Indeed, ${{\mathcal D} }$ is a non-empty cone. Although the convergence result of [@KrukDoanW:10] does not apply to our problem, their numerical tests indicate that the algorithm converges even for problems violating the strict complementarity and uniqueness assumptions, and our observations agree. Implementation Details ---------------------- Several specific implementation modifications are used. We begin with initial $x,v,y,z$ with corresponding $X,Z\succ 0$. If we obtain $P \succ 0$ as in Proposition \[prop:boundtest\] then we set $Z = P$ and define $y$ accordingly, otherwise $Z = X = I$.
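For concreteness, the assembly of the projected GN system and the back-substitution for $dx$ and $dz$ can be sketched as follows. This is a minimal dense prototype of our own, written only to illustrate the construction above (it is not the implementation used for the experiments, and the routine names are ours); the Jacobian is built column by column and the least squares problem is solved directly.

```python
import numpy as np

def svec(X):
    """Stack the upper triangle of a symmetric matrix; off-diagonal entries are
    scaled by sqrt(2) so that <X, Y> = svec(X) @ svec(Y)."""
    i, j = np.triu_indices(X.shape[0])
    return np.where(i == j, 1.0, np.sqrt(2.0)) * X[i, j]

def smat(x):
    """Inverse of svec."""
    n = int(round((np.sqrt(8 * x.size + 1) - 1) / 2))
    X = np.zeros((n, n))
    i, j = np.triu_indices(n)
    X[i, j] = np.where(i == j, 1.0, 1.0 / np.sqrt(2.0)) * x
    X[j, i] = X[i, j]
    return X

def projected_gn_direction(A, N, x, z, r_d, r_p, R_c):
    """Least squares solution (dy, dv) of
       sMat(A^T dy) sMat(x) + sMat(z) sMat(N dv)
           = -R_c + sMat(z) sMat(r_p) - sMat(r_d) sMat(x),
    followed by the back-substitutions dz = A^T dy + r_d and dx = N dv - r_p."""
    m, q = A.shape[0], N.shape[1]
    X, Z = smat(x), smat(z)
    rhs = (-R_c + Z @ smat(r_p) - smat(r_d) @ X).ravel()
    J = np.empty((X.size, m + q))
    for k in range(m):            # columns generated by the components of dy
        J[:, k] = (smat(A[k, :]) @ X).ravel()
    for k in range(q):            # columns generated by the components of dv
        J[:, m + k] = (Z @ smat(N[:, k])).ravel()
    sol = np.linalg.lstsq(J, rhs, rcond=None)[0]
    dy, dv = sol[:m], sol[m:]
    return N @ dv - r_p, dv, dy, A.T @ dy + r_d   # dx, dv, dy, dz
```

In practice the system would be solved with more attention to sparsity and scaling, but the structure is exactly that of the projected GN equation above.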
We estimate $\alpha$ using and set $\alpha \leftarrow 2\alpha$ to ensure that our target is somewhat well centered to start. ### Step Lengths and Linear Feasibility We start with initial step lengths $\alpha_p=\alpha_d=1.1$ and then backtrack using a Cholesky factorization test to ensure positive definiteness $$X+\alpha_p dX \succ 0, \quad Z+\alpha_d dZ \succ 0.$$ If the step length we find is still $>1$ after the backtrack, we set it to $1$ and first update $v,y$ and then update $x,z$ using $$x=\hat x + N v, \quad z=A^Ty.$$ This ensures exact linear feasibility. Thus we find that we maintain exact dual feasibility after a few iterations. Primal feasibility changes since $\alpha$ decreases. We have experimented with including an extra few iterations at the end of the algorithm with a fixed $\alpha$ to obtain exact primal feasibility (for the given $\alpha$). In most cases the improvement of feasibility with respect to ${\mathcal{F}}$ was minimal and not worth the extra computational cost. ### Updating $\alpha$ and Expected Number of Iterations In order to drive $\alpha$ down to zero, we fix $\sigma \in (0,1)$ and update $\alpha$ as $\alpha \leftarrow \sigma \alpha$. We use a moderate $\sigma = .6$. However, if this reduction is performed too quickly then our step lengths end up being too small and we get too close to the positive semidefinite boundary. Therefore, we change $\alpha$ using information from $\min \{\alpha_p,\alpha_d\}$. If the steplength is reasonably near $1$ then we decrease using $\sigma$; if the steplength is around $.5$ then we leave $\alpha$ as is; if the steplength is small then we *increase* to $1.2\alpha$; and if the steplength is tiny ($<.1$), we increase to $2\alpha$. For most of the test problems, this strategy resulted in steplengths of $1$ after the first few iterations. We noted empirically that the condition number of the Jacobian for the least squares problem increases quickly, i.e., several singular values converge to zero. Despite this we are able to obtain high accuracy search directions.[^4] Since we typically have steplengths of $1$, $\alpha$ is generally decreased using $\sigma$. Therefore, for a desired tolerance $\epsilon$ and a starting $\alpha =1$ we would want $\sigma^k < \epsilon$, or equivalently, $$k > \log_{10} (\epsilon)/\log_{10}(\sigma).$$ For our $\sigma=.6$ and $t$ decimals of desired accuracy, we expect to need approximately $4.5t$ iterations. Generating Instances and Numerical Results {#sec:numerics} ========================================== In this section we analyze the performance of an implementation of our algorithm. We begin with a discussion on generating spectrahedra. A particular challenge is in creating spectrahedra with specified singularity degree. Following this discussion, we present and analyze the numerical results. Generating Instances with Varying Singularity Degree {#sec:generating} ---------------------------------------------------- Our method for generating instances is motivated by the approach of [@WeiWolk:06] for generating [**SDPs**]{}with varying *complementarity gaps*. We begin by proving a relationship between strict complementarity of a primal-dual pair of [**SDP**]{}problems and the singularity degree of the optimal set of the primal [**SDP**]{}. This relationship allows us to modify the code presented in [@WeiWolk:06] and obtain spectrahedra having various singularity degrees.
Recall the primal [**SDP**]{} $$\label{prob:sdpprimalcopy} {\textbf{SDP}\,}\qquad \qquad {\textit{$p^{\star}$}\index{$p^{\star}$}}:=\min \{ \langle C,X\rangle : {{\mathcal A}}(X)=b, X\succeq 0\},$$ with dual $$\label{prob:sdpdualcopy} {\textbf{D-SDP}\,}\qquad \qquad {\textit{$d^{\star}$}\index{$d^{\star}$}}:=\max \{ b^Ty : {{\mathcal A}}^*(y) \preceq C \}.$$ Let $O_P\subseteq \Snp$ and $O_D\subseteq \Snp$ denote the primal and dual optimal sets respectively, where the dual optimal set is with respect to the variable $Z$. Specifically, $$O_P := \{X\in \Snp : {{\mathcal A}}(X) = b, \ \langle C, X\rangle = p^{\star} \}, \ O_D := \{ Z \in \Snp : Z = C-{{\mathcal A}}^*(y), \ b^Ty = d^{\star}, \ y \in {{\R^m\,}}\}.$$ Note that $O_P$ is a spectrahedron determined by the affine manifold $$\begin{bmatrix} {{\mathcal A}}(X) \\ \langle C, X \rangle \end{bmatrix} = \begin{pmatrix} b \\ p^* \end{pmatrix}.$$ We note that the second system in the theorem of the alternative, Theorem \[thm:alternative\], for the spectrahedron $O_P$ is $$\label{eq:opalternative} 0 \ne \tau C + {{\mathcal A}}^*(y) \succeq 0, \ \tau p^{\star} + y^Tb = 0.$$ We say that *strict complementarity* holds for [**SDP**]{}and [**D-SDP**]{}if there exist $X^{\star} \in O_P$ and $Z^{\star}\in O_D$ such that $$\langle X^{\star}, Z^{\star} \rangle = 0 \text{ and } \operatorname{{rank}}(X^{\star}) + \operatorname{{rank}}(Z^{\star}) = n.$$ If strict complementarity does not hold for [**SDP**]{}and [**D-SDP**]{}and there exist $X^{\star} \in \operatorname{{relint}}(O_P)$ and $Z^{\star} \in \operatorname{{relint}}(O_D)$, then we define the complementarity gap as $$g := n - \operatorname{{rank}}(X^{\star}) - \operatorname{{rank}}(Z^{\star}).$$ Now we describe the relationship between strict complementarity of [**SDP**]{}and [**D-SDP**]{}and the singularity degree of $O_P$. \[prop:scsd\] If strict complementarity holds for [**SDP**]{}and [**D-SDP**]{}, then $\operatorname{sd}(O_P) \le 1$. Let $X^{\star} \in \operatorname{{relint}}(O_P)$. If $X^{\star} \succ 0$, then $\operatorname{sd}(O_P) = 0$ and we are done. Thus we may assume $\operatorname{{rank}}(X^{\star}) < n$. By strict complementarity, there exists $(y^{\star},Z^{\star}) \in {{\R^m\,}}\times \Snp$ feasible for [**D-SDP**]{}with $Z^{\star} \in O_D$ and $\operatorname{{rank}}(X^{\star}) + \operatorname{{rank}}(Z^{\star}) = n$. Now we show that $(1,-y^{\star})$ satisfies . Indeed, by dual feasibility, $$C - {{\mathcal A}}^*(y^{\star}) = Z^{\star} \in \Snp \setminus \{0\},$$ and by complementary slackness, $$p^{\star} - (y^{\star})^Tb = \langle X^{\star}, C \rangle - \langle {{\mathcal A}}^*(y^{\star}), X^{\star} \rangle = \langle X^{\star},Z^{\star} \rangle = 0.$$ Finally, since $\operatorname{{rank}}(X^{\star}) + \operatorname{{rank}}(Z^{\star}) = n$ we have $\operatorname{sd}(O_P) = 1$, as desired. From the perspective of facial reduction, the interesting spectrahedra are those with singularity degree greater than zero and the above proposition gives us a way to construct spectrahedra with singularity degree exactly one. Using the algorithm of [@WeiWolk:06] we construct strictly complementary [**SDPs**]{}and then use the optimal set of the primal to construct a spectrahedron with singularity degree exactly one.
Specifically, given positive integers $n, m, r,$ and $g$, the algorithm of [@WeiWolk:06] returns the data ${{\mathcal A}},b,C$ corresponding to a primal dual pair of [**SDPs**]{}, together with $X^{\star} \in \operatorname{{relint}}(O_P)$ and $Z^{\star} \in \operatorname{{relint}}(O_D)$ satisfying $$\operatorname{{rank}}(X^{\star}) = r, \ \operatorname{{rank}}(Z^{\star}) = n-r-g.$$ Now if we set $$\hat{{{\mathcal A}}}(X) := \begin{pmatrix} {{\mathcal A}}(X) \\ \langle C, X \rangle \end{pmatrix}, \ \hat{b} = \begin{pmatrix} b \\ \langle C, X^{\star} \rangle \end{pmatrix},$$ then $O_P = {\mathcal{F}}(\hat{{{\mathcal A}}},\hat{b})$. Moreover, if $g=0$ then $\operatorname{sd}(O_P) = 1$, by Proposition \[prop:scsd\]. This approach could also be used to create spectrahedra with larger singularity degrees by constructing [**SDPs**]{}with greater complementarity gaps, if the converse of Proposition \[prop:scsd\] were true. We provide a sufficient condition for the converse in the following proposition. \[prop:sdscconverse\] If $\operatorname{sd}(O_P)=0$, then strict complementarity holds for [**SDP**]{}and [**D-SDP**]{}. Moreover, if $\operatorname{sd}(O_P) = 1$ and the set of solutions to intersects $\R_{++} \times {{\R^m\,}}$, then strict complementarity holds for [**SDP**]{}and [**D-SDP**]{}. Since we have only defined singularity degree for non-empty spectrahedra, there exists $X^{\star} \in \operatorname{{relint}}(O_P)$. For the first statement, by Theorem \[thm:strongduality\], there exists $Z^{\star} \in O_D$. Complementary slackness always holds, hence $\langle Z^{\star}, X^{\star} \rangle = 0$ and since $X^{\star}\succ 0$ we have $Z^{\star}=0$. It follows that $\operatorname{{rank}}(X^{\star}) + \operatorname{{rank}}(Z^{\star}) = n$ and strict complementarity holds for [**SDP**]{}and [**D-SDP**]{}. For the second statement, let $(\bar{\tau},\bar{y})$ and $(\tilde{\tau},\tilde{y})$ be solutions to with $\bar{\tau} >0$ and $\tilde{\tau} C +{{\mathcal A}}^*(\tilde{y})$ of maximal rank. Let $$\bar{Z} := \bar{\tau}C + {{\mathcal A}}^*(\bar{y}), \ \tilde{Z} := \tilde{\tau} C +{{\mathcal A}}^*(\tilde{y}).$$ Then there exists $\varepsilon > 0$ such that $\bar{\tau} + \varepsilon \tilde{\tau} >0$ and $\operatorname{{rank}}(\bar{Z} + \varepsilon \tilde{Z}) \ge \operatorname{{rank}}(\tilde{Z})$. Define $$\tau := \bar{\tau} + \varepsilon \tilde{\tau}, \ y := \bar{y} + \varepsilon \tilde{y}, \ Z := \bar{Z} + \varepsilon \tilde{Z}.$$ Now $(\tau,y)$ is a solution to , i.e., $$0 \ne \tau C + {{\mathcal A}}^*(y) \succeq 0, \ \tau p^{\star} + y^Tb = 0.$$ Moreover, $\operatorname{{rank}}(X^{\star}) + \operatorname{{rank}}(Z) = n$ since $\operatorname{sd}(O_P) = 1$ and $Z$ is of maximal rank. Now we define $$Z^{\star} := \frac{1}{\tau} Z = C - {{\mathcal A}}^*\left(-\frac{1}{\tau}y\right).$$ Since $\tau>0$, it is clear that $Z^{\star} \succeq 0$ and it follows that $\left(-\frac{1}{\tau} y, Z^{\star}\right)$ is feasible for [**D-SDP**]{}. Moreover, this point is optimal since $$d^{\star} \ge -\frac{1}{\tau} y^Tb = p^{\star}\ge d^{\star}.$$ Therefore $Z^{\star} \in O_D$ and since $\operatorname{{rank}}(Z^{\star}) = \operatorname{{rank}}(Z)$, strict complementarity holds for [**SDP**]{}and [**D-SDP**]{}. Numerical Results {#sec:numericsreal} ----------------- For the numerical tests, we generate instances with $n \in \{50,80,110,140\}$ and $m=2n$. These are problems of small size relative to state of the art capabilities; nonetheless, we are able to demonstrate the performance of our algorithm through them.
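In code, the final step of this construction amounts to appending a single constraint. The sketch below is our own and uses hypothetical names; it assumes the generator of [@WeiWolk:06] has already returned the matrices $S_i$ representing ${{\mathcal A}}$, the vector $b$, the cost matrix $C$, and $X^{\star}$:

```python
import numpy as np

def spectrahedron_from_sdp(S_list, b, C, X_star):
    """Append <C, .> as one extra constraint so that the optimal set O_P of the
    generated SDP becomes the feasible set F(A_hat, b_hat).
    S_list, b, C, X_star are assumed to come from the instance generator."""
    S_hat = list(S_list) + [C]                            # A_hat(X) = (A(X); <C, X>)
    b_hat = np.append(np.asarray(b), np.sum(C * X_star))  # b_hat = (b; <C, X_star>)
    return S_hat, b_hat
```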
In Table \[tab:sd1\] and Table \[tab:sd1dual\] we record the results for the case $\operatorname{sd}=1$. For each instance, specified by $n$, $m,$ and $r$, the results are the average of five runs. By $r$, we denote the maximum rank over all elements of the generated spectrahedron, which is fixed to $r=n/2$. In Table \[tab:sd1\] we record the relevant eigenvalues of the primal variable, primal feasibility, complementarity, and the value of $\alpha$ at termination, denoted $\alpha_f$. The values for primal feasibility and complementarity are sufficiently small and it is clear from the eigenvalues presented that the first $r$ eigenvalues are significantly smaller than the last $n-r$. These results demonstrate that the algorithm returns a matrix which is very close to the relative interior of ${\mathcal{F}}$. In Table \[tab:sd1dual\] we record the relevant eigenvalues for the corresponding dual variable, $Z$. Note that $r_d := n-r$ and the eigenvalues recorded in the table indicate that $Z$ is indeed an exposing vector. Moreover, it is a maximal rank exposing vector. While we have not proved this, we observed that it is true for every test we ran with $\operatorname{sd}= 1$. In Table \[tab:sd2\] and Table \[tab:sd2dual\] we record similar values for problems where the singularity degree may be greater than $1$. Using the approach described in Section \[sec:generating\] we generate instances of [**SDP**]{}and [**D-SDP**]{}having a complementarity gap of $g$ and then we construct our spectrahedron from the optimal set of [**SDP**]{}. By Proposition \[prop:scsd\] and Proposition \[prop:sdscconverse\] the resulting spectrahedron may have singularity degree greater than 1. We observe that primal feasibility and complementarity are attained to a similar accuracy as in the $\operatorname{sd}=1$ case. The eigenvalues of the primal variable fall into three categories. The first $r$ eigenvalues are sufficiently large so as not to be confused with $0$, the last $n-r-g$ eigenvalues are convincingly small, and the third group of eigenvalues, exactly $g$ of them, are such that it is difficult to decide if they should be $0$ or not. A similar phenomenon is observed for the eigenvalues of the dual variable. This demonstrates that exactly $g$ of the eigenvalues are converging to $0$ at a significantly slower rate than the other $n-r-g$ eigenvalues. An Application to PSD Completions of Simple Cycles {#sec:psdcyclecompl} ================================================== In this final section, we show that our parametric path and the relative interior point it converges to have interesting structure for cycle completion problems. Let $G=(V,E)$ be an undirected graph with $n = \lvert V \rvert$ and let $a \in \R^{\lvert E \rvert}$. Let us index the components of $a$ by the elements of $E$. A matrix $X \in \Sn$ is a [*completion*]{} of $G$ under $a$ if $X_{ij} = a_{ij}$ for all $\{i,j\} \in E$. We say that $G$ is [*partially PSD*]{} under $a$ if there exists a completion of $G$ under $a$ such that all of its principal submatrices consisting entirely of $a_{ij}$ are PSD. Finally, we say that $G$ is [*PSD completable*]{} if for all $a$ such that $G$ is partially PSD, there exists a PSD completion. Recall that a graph is [*chordal*]{} if for every cycle with at least four vertices, there is an edge connecting non-adjacent vertices. The classical result of [@GrJoSaWo:84] states that $G$ is PSD completable if, and only if, it is chordal.
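As a quick illustration of the chordality criterion (a check of our own using the NetworkX graph library, not part of the paper's code), the simple cycles studied next are non-chordal and hence are not PSD completable patterns:

```python
import networkx as nx

# Cycles on four or more vertices are not chordal, so by the result of
# Grone, Johnson, Sa and Wolkowicz some partially PSD data on them
# admit no PSD completion.
for n in range(3, 8):
    print(n, nx.is_chordal(nx.cycle_graph(n)))   # True only for n = 3
```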
An interesting problem for non-chordal graphs is to characterize the vectors $a$ for which $G$ admits a PSD completion. Here we consider PSD completions of non-chordal cycles with loops. This problem was first looked at in [@MR1236734], where the following special case is presented. \[thm:simplecycle\] Let $n\ge 4$ and $\theta, \phi \in [0,\pi]$. Then $$\label{simple} C := \begin{bmatrix} 1 & \cos(\theta) & & & \cos(\phi) \\ \cos(\theta) & 1 & \cos(\theta) & ? & \\ & \cos(\theta) & 1 & \ddots & \\ & ? & \ddots & \ddots & \cos(\theta) \\ \cos(\phi) & & & \cos(\theta) & 1 \end{bmatrix},$$ has a positive semidefinite completion if, and only if, $$\phi \le (n-1)\theta \le (n-2)\pi + \phi \qquad \text{for n even}$$ and $$\phi \le (n-1)\theta \le (n-1)\pi - \phi \qquad \text{for n odd.}$$ The partial matrix has a positive definite completion if, and only if, the above inequalities are strict. Using the results of the previous sections we present an analytic expression for exposing vectors in the case where a PSD completion exists but not a PD one, i.e., the Slater CQ does not hold for the corresponding [**SDP**]{}. We begin by showing that the primal part of the parametric path is always Toeplitz. In general, for a partial Toeplitz matrix, the unique maximum determinant completion is not necessarily Toeplitz. For instance, the maximum determinant completion of $$\begin{bmatrix} 6 & 1 & x & 1 & 1 \cr 1 & 6 & 1 & y & 1 \cr x & 1 & 6 & 1 & z \cr 1 & y & 1 & 6 & 1 \cr 1 & 1 & z & 1 & 6 \end{bmatrix}$$ is given by $x=z=0.3113$ and $y=0.4247$. \[md\] If the partial matrix $$P := \begin{bmatrix} a & b & & & c \\ b & a & b & ? & \\ & b & a & \ddots & \\ & ? & \ddots & \ddots & b \\ c & & & b & a \end{bmatrix}$$ has a positive definite completion, then the unique maximum determinant completion is Toeplitz. First we present the following technical lemma. Let $J_n \in \Sn$ be the matrix with ones on the antidiagonal and zeros everywhere else, that is, $[J_n]_{ij} = 1$ when $i+j=n+1$ and zero otherwise. For instance, $J_2=\begin{bmatrix} 0 & 1 \cr 1 & 0 \end{bmatrix}$. \[persymm\] If $A$ is the maximum determinant completion of $P$, then $A=JAJ$. As $A$ is a completion of $P$, so is $JAJ$. Furthermore, $\det (A) = \det (JAJ)$. Since the maximum determinant completion is unique, we must have that $A=JAJ$. The proof is by induction on the size $n$. When $n=4$ the result follows from Lemma \[persymm\]. Suppose Theorem \[md\] holds for size $n-1$. Let $A$ be the maximum determinant completion of $P$. Then by the optimality conditions of Theorem \[thm:maxdet\], $$A^{-1}= \begin{bmatrix} * & * &0 & \cdots & 0 & * \\ * & * & * & 0 & \ddots & 0\\ 0 & * & * & * & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & * & * & * \\ * & 0 & \cdots & 0& *& * \end{bmatrix}.$$ Let $\alpha := A_{1,n-1}$, and consider the $(n-1)\times (n-1)$ partial matrix $$\label{simple2} \begin{bmatrix} a & b & & & \alpha \\ b & a & b & ? & \\ & b & a & \ddots & \\ & ? & \ddots & \ddots & b \\ \alpha & & & b & a \end{bmatrix}.$$ By the induction assumption, has a Toeplitz maximum determinant completion, say $B$. Note that $$\label{simple4} B^{-1}= \begin{bmatrix} * & * &0 & \cdots & 0 & * \\ * & * & * & 0 & \ddots & 0\\ 0 & * & * & * & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & * & * & * \\ * & 0 & \cdots & 0& *& * \end{bmatrix}.$$ Now consider the partial matrix $$\label{simple3} \begin{bmatrix} B & \begin{bmatrix} c \cr ? \cr \vdots \cr ?
\cr b \end{bmatrix} \cr \begin{bmatrix} c & ? & \cdots & ? & b \end{bmatrix} & a \end{bmatrix}$$ Since this is a chordal pattern we only need to check that the fully prescribed principal submatrices are positive definite. These are $B$ and $$\begin{bmatrix} a & \alpha & c \cr \alpha & a & b \cr c & b & a \end{bmatrix} ,$$ the latter of which is a principal submatrix of the positive definite matrix $A$. Thus has a maximum determinant completion, say $C$. Then $$C^{-1} = \begin{bmatrix} * & \begin{bmatrix} * \cr 0 \cr \vdots \cr 0 \cr * \end{bmatrix} \cr \begin{bmatrix} * & 0 & \cdots & 0 & * \end{bmatrix} & * \end{bmatrix} = : \begin{bmatrix} L & M \cr M^T & N \end{bmatrix} .$$ By the properties of block inversion, $$C = \begin{bmatrix} (L-MN^{-1}M^T)^{-1} & * \cr * & * \end{bmatrix} = \begin{bmatrix} B & * \cr * & * \end{bmatrix} ,$$ and it follows that $B^{-1} = L-MN^{-1}M^T$. Since $MN^{-1}M^T$ only has nonzero entries in the four corners, we obtain that $$L=\begin{bmatrix} * & * &0 & \cdots & 0 & * \\ * & * & * & 0 & \ddots & 0\\ 0 & * & * & * & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & * & * & * \\ * & 0 & \cdots & 0& *& * \end{bmatrix}.$$ We now see that $C^{-1}$ and $A^{-1}$ have zeros in all entries $(i,j)$ with $|i-j| >1$ and $(i,j) \not\in\{(1,n-1), (1,n), (n-1,1) , (n,1)\}$. Also, $A$ and $C$ have the same entries in positions $(i,j)$ where $|i-j|\le 1$ or where $(i,j) \in\{(1,n-1), (1,n), (n-1,1) , (n,1)\}$. But then $A$ and $C$ are two positive definite matrices where for each $(i,j)$ either $A_{ij}=C_{ij}$ or $(A^{-1})_{ij} = (C^{-1})_{ij}$, yielding that $A=C$ (see, e.g., [@MR1321785]). Finally, observe that the Toeplitz matrix $B$ is the $(n-1)\times (n-1)$ upper left submatrix of $C$, and that $A=JAJ$, to conclude that $A$ is Toeplitz. When has a PD completion, this result states that the analytic center of all the completions is Toeplitz. When has a PSD completion, but not a PD completion, the primal part of the parametric path is always Toeplitz and, since the Toeplitz matrices form a closed set, its limit point is a maximum rank Toeplitz PSD completion. In the following proposition we see that the dual part of the parametric path has a specific form. \[Tinverse\] Let $T=(t_{i-j})_{i,j=1}^n$ be a positive definite real Toeplitz matrix, and suppose that $(T^{-1})_{k,1}=0$ for all $k\in \{3,\ldots , n-1\}$. Then $T^{-1}$ has the form $$\begin{bmatrix} a & c & 0& & d \\ c & b & c & \ddots & \\ 0& c & b & \ddots &0 \\ & \ddots & \ddots & \ddots & c \\ d & & 0& c & a \end{bmatrix},$$ with $b=\frac{1}{a} (a^2+c^2-d^2)$. Let us denote the first column of $T^{-1}$ by $\begin{bmatrix} a & c & 0 & \cdots & 0 & d \end{bmatrix}^T$. By the [*Gohberg-Semencul formula*]{} (see [@MR0353038; @MR1038316]) we have that $$T^{-1} =\frac{1}{a} ( AA^T-BB^T ),$$ where $$A=\begin{bmatrix} a & 0 & 0& & 0 \\ c & a & 0 & \ddots & \\ 0& c & a & \ddots &0 \\ & \ddots & \ddots & \ddots & 0 \\ d & & 0& c & a \end{bmatrix}, B= \begin{bmatrix} 0 & 0 & 0& & 0 \\ d & 0 & 0 & \ddots & \\ 0& d & 0 & \ddots &0 \\ & \ddots & \ddots & \ddots & 0 \\ c & & 0& d & 0 \end{bmatrix}.$$ \[cor:expvecsimplecycle\] If the set of PSD completions of is contained in a proper face of $\Snp$ then there exists an exposing vector of the form $$C_E := \begin{bmatrix} a & c & 0& & d \\ c & b & c & \ddots & \\ 0& c & b & \ddots &0 \\ & \ddots & \ddots & \ddots & c \\ d & & 0& c & a \end{bmatrix},$$ for a face containing the completions.
Moreover, $C_E$ satisfies $$2\cos(\theta)c + b = 0 \quad \text{and} \quad a + \cos(\theta)c + \cos(\phi)d = 0.$$ Existence follows from Proposition \[Tinverse\]. By definition, $C_E$ is an exposing vector for the face if, and only if, $C_E \succeq 0$ and $\langle X, C_E\rangle = 0$ for all positive semidefinite completions, $X$, of $C$. Since $X$ and $C_E$ are positive semidefinite, we have $XC_E = 0$ and in particular $\operatorname{{diag}}(XC_E) = 0$, which is satisfied if, and only if, $$\cos(\theta)c + b + \cos(\theta)c = 0 \quad \text{and} \quad a + \cos(\theta)c + \cos(\phi)d = 0,$$ as desired. Conclusion {#sec:conclusion} ========== In this paper we have considered a ‘primal’ approach to facial reduction for [**SDPs**]{}that reduces to finding a relative interior point of a spectrahedron. By considering a parametric optimization problem, we constructed a smooth path and proved that its limit point is in the relative interior of the spectrahedron. Moreover, we gave a sufficient condition for the relative interior point to coincide with the analytic center. We proposed a projected Gauss-Newton algorithm to follow the parametric path to the limit point and in the numerical results we observed that the algorithm converges. We also presented a method for constructing spectrahedra with singularity degree $1$ and provided a sufficient condition for constructing spectrahedra of larger singularity degree. Finally, we showed that the parametric path has interesting structure for the simple cycle completion problem. This research has also highlighted some new problems to be pursued. We single out two such problems. The first regards the eigenvalues of the limit point that are neither sufficiently small to be deemed zero nor sufficiently large to be considered as non-zero. We have experimented with some eigenvalue deflation techniques, but none have led to a satisfactory method. Secondly, there does not seem to be a method in the literature for constructing spectrahedra with specified singularity degree. \[ind:index\] [^1]: Department of Combinatorics and Optimization Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Research supported by The Natural Sciences and Engineering Research Council of Canada and by AFOSR. [^2]: Department of Mathematics, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, USA. Research supported by Simons Foundation grant 355645. [^3]: Department of Combinatorics and Optimization Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Research supported by The Natural Sciences and Engineering Research Council of Canada and by AFOSR; [www.math.uwaterloo.ca/\~hwolkowi](www.math.uwaterloo.ca/~hwolkowi). [^4]: Our algorithm finds the search direction using . If we looked at a singular value decomposition then we get the equivalent system $\Sigma (V^T d \bar s) = (U^T RHS)$. We observed that several singular values in $\Sigma$ converge to zero while the corresponding elements in $(U^T RHS)$ converge to zero at a similar rate. This accounts for the improved accuracy despite the huge condition numbers. This appears to be a similar phenomenon to that observed in the analysis of interior point methods in [@MR99i:90093; @MR96f:65055] and as discussed in [@GoWo:04].
--- abstract: 'The accumulated criminal records show that serious and minor crimes differ in many measures and are related in a complex way. While some of those who have committed minor crime spontaneously evolve into serious criminals, the transition from minor crime to major crime involves many social factors and has not been fully understood yet. In this work, we present a mathematical model to describe how minor criminals turn into major criminals inside and outside of prisons. The model is designed to implement two social effects which have respectively been conceptualized in the popular terms “broken windows effect” and “prison as a crime school.” Analysis of the system shows how the crime-related parameters such as the arrest rate, the period of imprisonment and the in-prison contact rate affect the criminal distribution at equilibrium. Without proper control of contact between prisoners, longer imprisonment actually increases the occurrence of serious crimes in society. An optimal allocation of police resources to suppress crimes is also discussed.' author: - | Jongo Park[^1] and Pilwon Kim[^2]\ Department of Mathematical Sciences\ Ulsan National Institute of Science and Technology (UNIST)\ Ulsan Metropolitan City\ 44919, Republic of Korea title: Dynamics of Crime In and Out of Prisons --- Introduction ============ Understanding what factors cause a high crime rate is essential to developing effective measures to prevent crime in a community. The accumulated criminal records [@national2009understanding; @kesteren2000criminal] show that serious and minor crimes differ in many measures such as occurrence rate, arrest rate and rehabilitation rate. There is also a difference between the control activity of the police devoted to serious crimes and that devoted to minor crimes [@britt1975crime]. In recent years, there has been substantial progress in developing mathematical tools to investigate criminal activity. Mathematical models based on reaction-diffusion equations have been proposed [@short2008statistical; @short2010nonlinear; @rodriguez2010local] to study the dynamics of localized patterns of criminal activity, focusing especially on re-victimisation phenomena and hot-spot formation. Some other models have been adapted from population biology, such as infectious disease models [@campbell1997social; @ormerod2001non; @mcmillon2014modeling] and predator-prey models [@vargo1966note; @comissiong2012criminals; @nuno2008triangle]. A similar approach was used for modeling organized crime [@comissiong2012life; @sooknanan2013catching], where gang membership is treated as an infection that multiplies through peer contagion. In this study, we present a mathematical model to describe how minor criminals turn into major criminals. While some of those who have committed minor crime spontaneously evolve into serious criminals [@trove.nla.gov.au/work/9870833], the transition from minor crime to major crime involves many social factors and has not been fully understood yet. Besides the basic progressive nature of crime, we are interested in finding extra factors that accelerate the transition from minor crimes to major crimes.
This paper focuses on criminal transitions occurring inside and outside of prison, which have been conceptualized in the popular terms “broken windows effect” and “prison as a crime school”: the broken windows theory states that the accumulation of low-level offenses in a community, if not adequately controlled, acts as a social pressure that leads to more serious crimes [@wilson1982police; @harcourt2006broken; @cerda2009misdemeanor]. In prisons, on the other hand, staying with many criminal peers in a limited facility brings minor criminals into frequent contact with hard-core and skilled criminals and possibly deepens their illegal involvement [@damm2016prison; @cullen2011prisons; @henneguelle2016better]. The peer effect on crime recidivism is strongly supported by recent empirical research in many countries [@bayer2009building; @damm2013deal; @ouss2011prison]. We are especially interested in the influence of over-crowded prison facilities on criminal transition. A mathematical model reflecting the effect of incarceration on recidivism has been proposed in [@mcmillon2014modeling]. However, to the authors’ knowledge, the in-prison dynamics between minor and major criminals and the effect of the prison capacity on it have never been studied before in a mathematical framework. The behavior of the model is investigated through stability analysis, bifurcation analysis, and numerical simulations. By analyzing the corresponding system of equations, we demonstrate how crime-related parameters such as the arrest rate, the rehabilitation rate and the capacity of the prison affect the criminal distribution at equilibrium. The results also suggest an optimal allocation of police resources to minimize the occurrence of serious crimes. This may be used to assist policy-makers in the development of effective crime control strategies. Model ===== Basic assumptions {#basic-assumptions .unnumbered} ----------------- We first formulate our proposal as a compartmental model, with the population $T>0$ being divided into five disjoint groups $N, M, F, P_M$ and $P_F$. The group $N$ represents non-criminals who have never been involved in any crime or have finished serving their prison sentence. $M$ and $F$ denote individuals who have committed a misdemeanor or a felony, respectively, but have not been arrested yet. Once they are arrested and sentenced to be imprisoned, they become inmates, $P_M$ and $P_F$, respectively. They may return to the civilian group $N$ after serving their sentence in prison. However, some of them commit a crime again, reverting back to $M$ or $F$. Figure 1 shows the structure of transitions occurring between groups in the basic model. ![Transition diagram for the basic model. The direct transition from $N$ to $F$ is considered negligible and is not implemented in the model.](diagram.png) In the absence of good evidence to the contrary, we follow the common rules in population dynamics: 1) the transfer out of any particular group is proportional to its size; 2) if the transfer is caused by contact between two group members, it is proportional to the size of both groups. Based on rule 1, we set $c\ge 0$ and $d\ge 0$ to express the transition rates from $N$ to $M$, and from $M$ to $F$, respectively. $$N\xlongrightarrow{ c}M\quad\text{and}\quad M\xlongrightarrow{d}F \label{N2M2F}$$ We assume the direct transition from $N$ to $F$ is negligible compared to that from $M$ to $F$.
The sequential transition from $N$ through $M$ to $F$ is justified by reports that most people who commit major crimes have committed minor crimes before [@trove.nla.gov.au/work/9870833]. The second rule (transition by contact) is based on the assumption that the population is homogeneously mixed, and should be dealt with carefully in our model. While the models for organized gang crimes [@comissiong2012life; @sooknanan2013catching] treat the transition between gang members as contagion by frequent contact, such an analogy with an epidemic process may not be adequate for daily contacts occurring in a general society. Considering that criminals tend to hide their true intentions from others, we can presume that assimilation with criminals through random contact is rare and negligible compared to other factors that we will consider in the following sections. Once criminals are caught and convicted, they are imprisoned to serve their sentence. If the parameters $a_M\ge 0$ and $a_F\ge 0$ are respectively the arrest-and-conviction rates for misdemeanor and felony, the corresponding transitions are $$M\xlongrightarrow{a_M}P_M\quad\text{and}\quad F\xlongrightarrow{a_F}P_F \label{MF2P}$$ Let $i_M>0$ and $i_F>0$ be the periods of imprisonment for minor and major criminals, respectively. In addition, let $0\le r_M \le 1$ and $0\le r_F \le 1$ denote the rehabilitation rates for minor and major criminals, respectively, which determine the proportion of prisoners moving back to $N$ after release. Then we have the transitions $$P_M\xlongrightarrow{r_M/i_M}N\quad\text{and}\quad P_F\xlongrightarrow{r_F/i_F}N \label{P2N}$$ This implies that the remaining portions, $1-r_M$ and $1-r_F$, of inmates commit a crime again, as $$P_M\xlongrightarrow{(1-r_M)/i_M}M\quad\text{and}\quad P_F\xlongrightarrow{(1-r_F)/i_F}F \label{P2MF}$$ Broken windows effect out of prisons {#broken-windows-effect-out-of-prisons .unnumbered} ------------------------------------ Although the above transitions describe the basic structure of the dynamics between criminals and non-criminals, they do not properly reflect important interactions between them. We extend the basic model to incorporate the broken windows theory. The theory states that seemingly petty signals of mischief, if not adequately controlled, elicit more serious crime. In order to describe the atmospheric pressure that accelerates the transition from minor to major criminals, we add to (1) the quadratic size effect $$M\xlongrightarrow{bM}F \label{MM2FF}$$ where $b \ge 0$ is the coefficient that represents the broken-windows effect. Crime-school effect in prisons {#crime-school-effect-in-prisons .unnumbered} ------------------------------ We further extend the model to investigate how interactions in prison influence the post-release behavior of criminals. The beneficial deterrent effect of prison may be weakened by the negative side-effects of incarceration: through routine contacts in a limited facility area, criminals get to learn from each other, build new networks and find new opportunities for crime [@henneguelle2016better]. We assume that, in prison, criminal motivations and skills are spread between inmates and that the “contagion” occurs through frequent contacts between them. To describe this in-prison transition from minor to major criminals, we define $P_M'$ as those who entered prison as minor criminals and have turned into major criminals by assimilation.
They emerge through the transitions $$P_M+P_F\xlongrightarrow{\beta}P_M'+P_F\quad\text{and}\quad P_M+P_M'\xlongrightarrow{\beta}P_M'+P_M' \label{2PMP}$$ where $\beta\ge 0$ is the transmission contact rate of prisoners. It is desirable to maintain the total number of prisoners under the capacity of the prison facilities. Overcrowding in prison increases the intensity of interactions between prisoners and raises the risk of recidivism [@RePEc:hal:journl:halshs-01184046]. The existence of $P_M'$ is problematic, as they are virtually major criminals while being treated as minor ones: they are “disguised major criminals” and are released after a short detention with a low rehabilitation rate. $$P_M'\xlongrightarrow{r_F/i_M}N\quad\text{and}\quad P_M'\xlongrightarrow{(1-r_F)/i_M}F. \label{PMP2}$$ Now, based on the transitions (1) to (7), we derive the corresponding model as $$\begin{aligned} \frac{dN}{dt}&=-cN+\frac{r_M}{i_M}P_M+\frac{r_F}{i_F}P_F+\frac{r_F}{i_M}P_M'\\ \frac{dM}{dt}&=cN-(a_M+d)M-bM^2+\frac{1-r_M}{i_M}P_M\\ \frac{dF}{dt}&=dM+bM^2-a_F F+\frac{1-r_F}{i_F}P_F+\frac{1-r_F}{i_M}P_M'\\ \frac{dP_M}{dt}&=a_M M-\frac{1}{i_M}P_M-\beta P_M(P_F+P_M')\\ \frac{dP_F}{dt}&=a_F F-\frac{1}{i_F}P_F\\ \frac{dP_M'}{dt}&=-\frac{1}{i_M}P_M'+\beta P_M(P_F+P_M') \label{model} \end{aligned}$$ where $N\left(0\right)\geq0, M\left(0\right)\geq0, F\left(0\right)\geq0, P_M\left(0\right)\geq0, P_F\left(0\right)\geq0$ and $P_M'\left(0\right)\geq0$. In our work, we assume that the transitions between the compartments of the model converge to equilibrium on short time scales and that the society maintains a fixed total population $T=N+M+F+P_M+P_F+P_M'$ in the meantime. The model does not consider births and deaths in the population: including these factors requires a nontrivial extension of the work, since there are substantial differences in the death rate between criminals and non-criminals. It has been reported that incarceration seriously reduces life expectancy [@wildeman2016incarceration]. One study has shown that five years behind bars increased the chance of death by 78$\%$ and shortened the expected life span at age 30 by 10 years [@patterson2013dose]. To involve these factors in the model, one needs to find a possible mechanism that induces such differences in demographic data. We leave this as a plausible extension of the current model for future study. Analysis of Model ================= The model (\[model\]) is a 6-dimensional nonlinear system and is hard to study analytically as is. In the following, we investigate the asymptotic behaviours of the model for four extreme cases: i) no crime-school effect, ii) no broken-windows effect, iii) low arrest rate and strong punishment, and iv) strong precaution and low rehabilitation. Suppose all the parameters in the system (\[model\]) are positive except $\beta=0$.
There is a unique equilibrium $\left(N^{\star},M^{\star},F^{\star},P_M^{\star},P_F^{\star},P_M'^{\star}\right) $ of the system (\[model\]) such that $$\begin{aligned} M^{\star}&=\frac{-p+\sqrt{w}}{2q}, \\ F^{\star}&=\frac{cT-s_M M^{\star}}{s_F}\\ P_M^{\star}&=i_M a_M M^{\star},\\ P_F^{\star}&=i_F a_F F^{\star},\\ P_M'^{\star}&=0,\\ N^{\star}&=T-M^{\star}-F^{\star}-P_M^{\star}-P_F^{\star},\end{aligned}$$ where $$\begin{aligned} p&=cd+cda_F i_F+c a_F r_F+d a_F r_F+c a_M a_F i_M r_F+ a_M a_F r_M r_F,\\ q&=b c +b c a_F i_F+b a_F r_F,\\ w&=p^{2}+4ca_F r_F qT,\\ s_M&=c+a_M i_M+a_M r_M,\\ s_F&=c+a_F i_F+a_F r_F.\end{aligned}$$ The equilibrium is locally asymptotically stable either for sufficiently small values of $a_M$ and $a_F$ , or for sufficiently large values of $i_M$ and $i_F$. \[BW\] Without the crime-school effect, no minor criminals in the prison degenerates to potential felonies, that is, $P_{M}'^{\star}=0$. The population of minor and major criminals in and out of the prison are directly proportional with the arrest rate and the imprisonment period. One can estimate the population of the uncaught criminals from the number of inmates as $$M^{\star}=\frac{1}{i_M a_M} P_M^{\star}\quad \text{and}\quad F^{\star}=\frac{1}{i_F a_F} P_F^{\star}. \label{estimate}$$ The next theorem shows that the distribution of criminals become more complex if minor criminals in prison can turn into potential felonies. Let $b=0$. There is a unique equilibrium $\left(N^{\star},M^{\star},F^{\star},P_{M}^{\star},P_{F}^{\star},P_{M}'^{\star}\right)$ of the system of (\[model\]) $$\begin{aligned} M^{\star}&=\frac{1}{ui_{M}}\left(ca_{F}i_{M}r_{F}T-P_{M}^{\star}s_{F}\right)\\ F^{\star}&=\frac{1}{ui_{M}}\left(\left(d+a_{M}\left(1-r_{F}\right)\right)ci_{M}T+P_{M}^{\star}\left(dr_{F}-dr_{M}-s_{M}\right)\right)\\ P_{M}^{\star}&=\frac{-p+\sqrt{w}}{2q}\\ P_{F}^{\star}&=a_{F}i_{F}F^{\star}\\ P_{M}'^{\star}&=-P_{M}^{\star}+a_{M}i_{M}M^{\star} \end{aligned}$$ where $$\begin{aligned} p&=cd+cda_{F}i_{F}+cd\beta a_{F}i_{M}i_{F}T+c\beta a_{M}a_{F}i_{M}i_{F}T+ca_{F}r_{F}+da_{F}r_{F}+ca_{M}a_{F}+da_{F}r_{F}\\ &+ca_{M}a_{F}i_{M}r_{F}+c\beta a_{M}a_{F}i_{M}^{2}r_{F}T+a_{M}a_{F}r_{M}r_{F}-c\beta a_{M}a_{F}i_{M}i_{F}r_{F}T\\ q&=-cd\beta i_{M}-c\beta a_{F}i_{F}-cd\beta a_{F}i_{M}i_{F}-c\beta a_{M}a_{F}i_{M}i_{F}-d\beta a_{F}i_{F}r_{M}\\ &-\beta a_{M}a_{F}i_{F}r_{M}-c\beta a_{F}i_{M}r_{F}-d\beta a_{F}i_{M}r_{F}-c\beta a_{M}a_{F}i_{M}^{2}r_{F}+c\beta a_{F}i_{F}r_{F}+d\beta a_{F}i_{F}r_{F}\\ &+c\beta a_{M}a_{F}i_{M}i_{F}r_{F}-\beta a_{M}a_{F}i_{M}r_{M}r_{F}+\beta a_{M}a_{F}i_{F}r_{M}r_{F}\\ w&=p^{2}+4ca_{M}a_{F}i_{M}r_{F}qT\\ u&=cd+ca_{M}+cda_{F}i_{F}+ca_{M}a_{F}i_{F}-ca_{M}r_{F}+ca_{F}r_{F}+da_{F}r_{F}\\ &+a_{M}a_{F}r_{F}+ca_{M}a_{F}i_{M}r_{F}-ca_{M}a_{F}i_{F}r_{F}\\ s_{M}&=c-cr_{F}+ca_{M}i_{M}+a_{M}r_{M}-ca_{M}i_{M}r_{F}-a_{M}r_{M}r_{F}\\ s_{F}&=c-cr_{F}+ca_{F}i_{F}+a_{F}r_{F}-ca_{F}i_{F}r_{F}-a_{F}r_{M}r_{F} \end{aligned}$$ The equilibrium is locally asymptotically stable for sufficiently small $a_{M}$ and $a_{F}$. \[CrimSchool\] One can confirm that $P_{M}'^{\star}>0$ at equilibrium if $b=0$. Since the potential felonies $P_{M}'$ are formally counted as minor criminals, estimate of the criminal population based on the number minor and major criminals as in (\[estimate\]) results in an undershoot of minor criminals in society. In the next two theorems we describe asymptotic behaviour of the model (\[model\]) with small parameters, based on the geometric perturbation theory [@tikhonov1952systems]. 
Theorem 3 deals with the case of a low arrest rate and long imprisonment for major criminals. This corresponds to the situation in which the police fail to track down criminals properly due to a limited budget or a lack of effective measures, while they make an example of the few arrested criminals by punishing them severely. Suppose $a_{F}=k_{1}\epsilon$ and $\frac{1}{i_{F}}=k_{2}\epsilon$ where $0<\epsilon\ll1$ and $k_{1},k_{2}>0$. Then, the corresponding degenerate system has a solution $\Gamma^{0}=\left(N^{0},M^{0},F^{0},P_{M}^{0},P_F^{0},P_{M}'^{0}\right)$ where $$\begin{aligned} &N^{0}\left(t\right)=M^{0}\left(t\right)=P_{M}^{0}\left(t\right)=P_{M}'^{0}\left(t\right)=0,\\ &P_{F}^{0}=\frac{k_{1}}{k_{1}+k_{2}}T+C\exp\left(-\left(k_{1}+k_{2}\right)Tt\right),\quad \text{and}\quad F^{0}\left(t\right)=T-P_{F}^{0}\left(t\right) \end{aligned}$$ for some $C\in{\mathbb{R}}$. There exists a locally unique attracting solution $\Gamma^{\epsilon}$ of the system $\left(\ref{model}\right)$ for $\epsilon$ sufficiently small, which tends to $\Gamma^{0}$ as $\epsilon\rightarrow0$. \[SFP\] Theorem 3 shows that if the arrest rate for major crimes is extremely low, no matter how strong the punishment is, most of the population turns into major criminals, either in or out of prison. The next case is that of a low transition rate $c$ and low rehabilitation rates $r_M$ and $r_F$. One possible scenario for this is stigmatizing persons who have ever committed a crime and hardly accepting them as part of society. This also sends a strong cautionary signal to people that they can be expelled from society even for a single minor criminal act, leading to a low transition rate $c$. Suppose $r_{M}=k_{1}\epsilon$, $r_{F}=k_{2}\epsilon$ and $c=k_{3}\epsilon$ where $0<\epsilon\ll1$ and $k_{1},k_{2},k_{3}>0$. Then, the corresponding degenerate system has a solution $\Gamma^{0}=\left(N^{0},M^{0},F^{0},P_{M}^{0},P_F^{0},P_{M}'^{0}\right)$ where $$\begin{aligned} &M^{0}\left(t\right)=P_{M}^{0}\left(t\right)=P_{M}'^{0}\left(t\right)=0,\\ &N^{0}\left(t\right)=\frac{k_{2}a_{F}}{k_{2}a_{F}+k_{3}\left(1+a_{F}i_{F}\right)}T+C\exp\left(-\left(\frac{a_{F}}{1+a_{F}i_{F}}k_{2}+k_{3}\right)t\right),\\ &F^{0}\left(t\right)=\frac{1}{1+a_{F}i_{F}}\left(T-N^{0}\left(t\right)\right),\quad\text{and}\quad P_{F}^{0}\left(t\right)=\frac{a_{F}i_{F}}{1+a_{F}i_{F}}\left(T-N^{0}\left(t\right)\right) \end{aligned}$$ for some $C\in{\mathbb{R}}$. There exists a locally unique attracting solution $\Gamma^{\epsilon}$ of the system $\left(\ref{model}\right)$ for $\epsilon$ sufficiently small, which tends to $\Gamma^{0}$ as $\epsilon\rightarrow0$. \[SocialStigama\] Making criminals’ rehabilitation hard with strict separation eventually divides the population into non-criminals and felons. To have a larger portion of non-criminals, however, we need to keep the rehabilitation rate at a relatively higher level than the transition rate, that is, $k_2>k_3$. Bifurcation Analysis ==================== In this section, we perform the bifurcation analysis of the proposed model (\[model\]). For a typical simulation, we set the parameters as $$\begin{aligned} &b=0.00001,c=0.00012,d=0.0004\\ &a_{M} =0.1,a_{F} =0.1,r_{M} =0.4,r_{F} =0.2\\ &\beta=0.001,i_M=0.5,i_F=5. \end{aligned}$$ These parameters are calibrated such that the distribution of major/minor criminals and their arrest rates largely agree with crime statistics in several countries [@national2009understanding; @kesteren2000criminal]. We set the total population $T=1,000,000$ throughout the analysis.
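As a cross-check on the bifurcation diagrams reported below, the equilibrium distribution for a given parameter set can be approximated by integrating the system (\[model\]) to a late time from an essentially crime-free initial state; the result can then be compared with the closed-form equilibria of Theorems 1 and 2. The following Python sketch is ours, not the authors' code, and the integration horizon, tolerances, and the range of the sweep are illustrative choices.

```python
# Sketch (not from the paper): locate the equilibrium of the crime model by
# long-time integration, and sweep the broken-windows coefficient b.
import numpy as np
from scipy.integrate import solve_ivp

def crime_rhs(t, y, b, c, d, aM, aF, rM, rF, beta, iM, iF):
    N, M, F, PM, PF, PMp = y
    dN   = -c*N + rM/iM*PM + rF/iF*PF + rF/iM*PMp
    dM   = c*N - (aM + d)*M - b*M**2 + (1 - rM)/iM*PM
    dF   = d*M + b*M**2 - aF*F + (1 - rF)/iF*PF + (1 - rF)/iM*PMp
    dPM  = aM*M - PM/iM - beta*PM*(PF + PMp)
    dPF  = aF*F - PF/iF
    dPMp = -PMp/iM + beta*PM*(PF + PMp)
    return [dN, dM, dF, dPM, dPF, dPMp]

T = 1_000_000
base = dict(b=1e-5, c=1.2e-4, d=4e-4, aM=0.1, aF=0.1,
            rM=0.4, rF=0.2, beta=1e-3, iM=0.5, iF=5.0)

def equilibrium(**params):
    y0 = [T, 0, 0, 0, 0, 0]                      # start from an all-civilian state
    sol = solve_ivp(crime_rhs, (0, 5e4), y0, method="LSODA",
                    args=tuple(params.values()), rtol=1e-8, atol=1e-6)
    return sol.y[:, -1]                          # state at the final (late) time

for b in np.linspace(0, 2e-5, 5):                # crude sweep for the b-diagram
    N, M, F, PM, PF, PMp = equilibrium(**{**base, "b": b})
    print(f"b={b:.1e}  M={M:9.1f}  F={F:9.1f}  PM'={PMp:9.1f}")
```

The same loop, run over $\beta$ or over the weight of offense $w$ instead of $b$, gives numerical counterparts of the other diagrams discussed below.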
![Equilibrium distribution according to the broken-windows effect $b$](b){width="70.00000%"} In Figure 2, the bifurcation diagram for the broken-windows effect $b$ is illustrated. Minor crime tends to decrease and major crime increases as $b$ grows. This implies that the elimination of environmental factors that reveal misdemeanors is important to prevent the occurrence of more serious crimes. Note that keeping $b$ near zero cuts down major criminals to a negligible level. This implies that the broken windows effect becomes even more important in a safe society where the ratio of major criminals is relatively low. ![Equilibrium distribution according to the in-prison contact rate $\beta$](be.eps){width="70.00000%"} While suppressing $b$ promotes the preventive effect on serious crimes out of prison, controls inside prison can work as a more practical measure against crimes. The bifurcation diagram in Figure 3 shows how the distribution of criminals changes with the contact rate $\beta$ in prison. The rise of $\beta$ increases the number of major criminals in society. Another in-prison measure to control crimes is the period of imprisonment for criminals. There is mixed evidence regarding whether spending more time in prison increases the rehabilitation rate [@reci2; @reci3]. However, here we assume that the period of imprisonment and the rehabilitation rate are weakly positively correlated, as long as criminals are held in custody in an effectively managed facility. Let us denote $w \ge 1$ as a general weight of offense and set the period of imprisonment as $$i_M=w\, i_M^{\text{min}}\quad \text{and}\quad i_F=w\, i_F^{\text{min}}$$ where $i_M^{\text{min}}=0.5$ (year) and $i_F^{\text{min}}=5$ (year) are the minimum periods for minor and major criminals, respectively. The weight of offense is also related to the rehabilitation rate. We set $$r_M=0.06w+0.34\quad \text{and}\quad r_F=0.03w+0.17$$ so that $r_M$ and $r_F$ increase slightly with $w$. Note that this agrees with (9) when $w=1$. [0.47]{} ![Equilibrium distribution according to the weight of offense $w$ (a) at $\beta=0.0002$ and (b) at $\beta=0.001$.](w1.eps "fig:"){width="\textwidth"} [0.47]{} ![Equilibrium distribution according to the weight of offense $w$ (a) at $\beta=0.0002$ and (b) at $\beta=0.001$.](w2.eps "fig:"){width="\textwidth"} Figure 4 shows how the weight of offense contributes to the criminal distribution in two cases: (a) with $\beta=0.0002$ and (b) with $\beta=0.001$. It is not surprising that the number of inmates increases with $w$ in both cases, since a higher weight of offense means a longer detention. A more noteworthy difference between (a) and (b) is the change in $F$, the number of major criminals in society, as a function of $w$. When the transmission contact rate is as low as $\beta=0.0002$, a higher weight of offense reduces the number of criminals. On the contrary, when $\beta=0.001$, assigning more weight of offense leads to an increase in major crimes in society. Hence a higher weight of offense has a positive reform effect only when frequent contact between prisoners is effectively prevented. Changes in security measures are likely to have a greater impact on crime [@lee2016conclusions]. Let us investigate how the allocation of police resources to the control of major/minor crime affects the distribution of criminals. Let $c_{T}$ be the total budget for security. Also let $c_{M}$ and $c_{F}$ be the budget for control of minor crime and major crime, respectively.
Note $c_{T}=c_{M}+c_{F}$. We assume that the arrest rate is proportional to the budget used to control the crime. Then we can set $a_{M}=e_{M}c_{M}$ and $a_{F}=e_{F}c_{F}$ where $e_{M}$ and $e_{F}$ are the police efficiency for minor and major crime, respectively. In the example, $e_{M}=0.2$ and $e_{F}=0.04$ are used. Figure 5 shows how the budget ratio $c_{F}/c_{T}$ affects the number of major criminals. The minimum of $F$ is achieved at around $c_{F}/c_{T}\approx 0.4$. Spending more portion of the budget for the major crime control brings negligence on minor crimes, which eventually leads to excessive occurrence of major crimes due to the broken windows effect and the crime school effect. ![Equilibrium distribution according to the allocation of police resource. $e_M = 0.2, e_F = 0.04$](ct.eps){width="70.00000%"} Discussion ========== We here present the mathematical models for crime dynamics that mainly focus on transition from minor to major criminals occurring in and out of prisons. It is confirmed that both the broken windows effect and the crime-school effect greatly change the criminal distribution. While utilizing the broken windows effect is a preventive measure, improving conditions in correctional facilities can provide a more direct and efficient measure against crimes. The presented work showed that suppressing interactions between overcrowded inmates in prisons is crucial in controlling crimes in society. If not keeping the in-prison contact rate at a low level, extension of the period of imprisonment only results in rapid increase in major crimes. The model also shows the importance of an balanced resource allocation between control activity devoted to serious crimes and that devoted to minor crimes. The analysis confirms that, due to the broken windows effect and the crime-school effect, targeting only major crimes can be very inefficient and even bring an opposite result that increases major criminals. While the results in this work are not predictions, we hope that they can provide useful insights into crime dynamics and possibly suggest effective policies towards crime abatement. [**Acknowledgements**]{}\ This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2017R1D1A1B04032921). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. [10]{} Patrick Bayer, Randi Hjalmarsson, and David Pozen, *Building criminal capital behind bars: Peer effects in juvenile corrections*, The Quarterly Journal of Economics **124** (2009), no. 1, 105–147. David W Britt and Charles R Tittle, *Crime rates and police behavior: A test of two hypotheses*, Social Forces **54** (1975), no. 2, 441–451. Courtney Brown, *Serpents in the sand: Essays on the nonlinear nature of politics and human destiny*, University of Michigan Press, 1995. Michael Campbell and Paul Ormerod, *Social interaction and the dynamics of crime*, Volterra Consulting Ltd (1997). Magdalena Cerd[á]{}, Melissa Tracy, Steven F Messner, David Vlahov, Kenneth Tardiff, and Sandro Galea, *Misdemeanor policing, physical disorder, and gun-related homicide: a spatial analytic test of“ broken-windows” theory*, Epidemiology (2009), 533–541. Donna Marie Giselle Comissiong, Joanna Sooknanan, and Balswaroop Bhatt, *Criminals treated as predators to be harvested: a two prey one predator model with group defense, prey migration and switching*, Journal of Mathematics Research **4** (2012), no. 4, 92. 
[to3em]{}, *Life and death in a gang-a mathematical model of gang membership*, Journal of Mathematics Research **4** (2012), no. 4, 10. National Research Council et al., *Understanding crime trends: Workshop report*, National Academies Press, 2009. Francis T Cullen, Cheryl Lero Jonson, and Daniel S Nagin, *Prisons do not reduce recidivism: The high cost of ignoring science*, The Prison Journal **91** (2011), no. 3\_suppl, 48S–65S. Anna Piil Damm and Cedric Gorinas, *Deal drugs once, deal drugs twice: peer effects on recidivism from prisons*, Essays on Marginalization and Integration of Immigrants and Young Criminals—A Labor Economics Perspective. Aarhus University (2013). Anna Piil Damm and C[é]{}dric Gorinas, *Prison as a criminal school: Peer effects and criminal learning behind bars*, The Rockwool Foundation Research Unit Study Paper (2016), no. 105. Bernard E Harcourt and Jens Ludwig, *Broken windows: New evidence from new york city and a five-city social experiment*, The University of Chicago Law Review (2006), 271–320. Ana[ï]{}s Henneguelle, Benjamin Monnery, and Annie Kensey, *Better at home than in prison? the effects of electronic monitoring on recidivism in france*, The Journal of Law and Economics **59** (2016), no. 3, 629–667. Young-Oh Hong, *A study on recidivism rates and recidivism prediction for violent crimes*, Korean Institute of Criminology, 2000. J van Kesteren, Patricia Mayhew, and Paul Nieuwbeerta, *Criminal victimization in seventeen industrialized countries*, WODC, 2000. Patrick A Langan and David J Levin, *Recidivism of prisoners released in 1994*, Fed. Sent. R. **15** (2002), 58. YongJei Lee, John E Eck, and Nicholas Corsaro, *Conclusions from the history of research into the effects of police force size on crime—1968 through 2013: a historical systematic review*, Journal of Experimental Criminology **12** (2016), no. 3, 431–451. David McMillon, Carl P Simon, and Jeffrey Morenoff, *Modeling the underlying dynamics of the spread of crime*, PloS one **9** (2014), no. 4, e88923. Benjamin Monnery, *Incarceration length and recidivism: qualitative results from a collective pardon in france*, Post-print, HAL, 2015. Juan C Nuno, Miguel A Herrero, and Mario Primicerio, *A triangle model of criminality*, Physica A: Statistical Mechanics and its Applications **387** (2008), no. 12, 2926–2936. Australia. Parliament. House of Representatives. Standing Committee on Legal, Constitutional Affairs, and 1942 Bishop, Bronwyn, *Crime in the community : victims, offenders and fear of crime*, \[Canberra, A.C.T. : House of Representatives, Standing Committee on Legal and Constitutional Affairs\], 2004 (English), Title from title frame of PDF file ; viewed 5 Dec. 2004. Paul Ormerod, Craig Mounfield, and Laurence Smith, *Non-linear modelling of burglary and violent crime in the uk*, Volterra Consulting Ltd (2001). D Wayne Osgood, *Statistical models of life events and criminal behavior*, Handbook of quantitative criminology, Springer, 2010, pp. 375–396. Aurelie Ouss, *Prison as a school of crime: Evidence from cell-level interactions*, (2011). Evelyn J Patterson, *The dose–response of time served in prison on mortality: New york state, 1989–2003*, American Journal of Public Health **103** (2013), no. 3, 523–528. Jovan Rajs, Teet H[ä]{}rm, and Ulf Brodin, *A statistical model examining repetitive criminal behavior in acts of violence.*, The American journal of forensic medicine and pathology **8** (1987), no. 2, 103–106. 
Nancy Rodriguez and Andrea Bertozzi, *Local existence and uniqueness of solutions to a pde model for criminal behavior*, Mathematical Models and Methods in Applied Sciences **20** (2010), no. supp01, 1425–1457. Martin B Short, Andrea L Bertozzi, and P Jeffrey Brantingham, *Nonlinear patterns in urban crime: Hotspots, bifurcations, and suppression*, SIAM Journal on Applied Dynamical Systems **9** (2010), no. 2, 462–483. Martin B Short, P Jeffrey Brantingham, Andrea L Bertozzi, and George E Tita, *Dissipation and displacement of hotspots in reaction-diffusion models of crime*, Proceedings of the National Academy of Sciences (2010). Martin B Short, Maria R D’orsogna, Virginia B Pasour, George E Tita, Paul J Brantingham, Andrea L Bertozzi, and Lincoln B Chayes, *A statistical model of criminal behavior*, Mathematical Models and Methods in Applied Sciences **18** (2008), no. supp01, 1249–1267. Martin B Short, Maria R D’orsogna, Patricia J Brantingham, and George E Tita, *Measuring and modeling repeat and near-repeat burglary effects*, Journal of Quantitative Criminology **25** (2009), no. 3, 325–339. J Sooknanan, B Bhatt, and DMG Comissiong, *Catching a gang–a mathematical model of the spread of gangs in a population treated as an infectious disease*, International Journal of Pure and Applied Mathematics **83** (2013), no. 1, 25–43. Andrei Nikolaevich Tikhonov, *Systems of differential equations containing small parameters in the derivatives*, Matematicheskii sbornik **73** (1952), no. 3, 575–586. Louis G Vargo, *A note on crime control*, The bulletin of mathematical biophysics **28** (1966), no. 3, 375–378. Christopher Wildeman, *Incarceration and population health in wealthy democracies*, Criminology **54** (2016), no. 2, 360–382. James Q Wilson and George L Kelling, *The police and neighborhood safety: Broken windows*, Atlantic monthly **127** (1982), no. 2, 29–38. [^1]: [email protected] [^2]: [email protected]
--- abstract: 'Here we present experimental details on the Ramsey Tomography Oscilloscope (RTO) protocol and details of the calculations used to extract the flux noise magnitude from Ramsey decay data.' author: - Daniel Sank$^1$ - 'R. Barends$^1$' - 'Radoslaw C. Bialczak$^1$' - Yu Chen$^1$ - 'J. Kelly$^1$' - 'M. Lenander$^1$' - 'E. Lucero$^1$' - 'Matteo Mariantoni$^{1,5}$' - 'M. Neeley$^{1,4}$' - 'P.J.J. O’Malley$^1$' - 'A. Vaisencher$^1$' - 'H. Wang$^{1,2}$' - 'J. Wenner$^1$' - 'T.C. White$^1$' - 'T. Yamamoto$^3$' - Yi Yin$^1$ - 'A.N. Cleland$^{1,5}$' - 'John M. Martinis$^{1,5}$' title: | Supplementary material for Surface spin fluctuations probed with\ flux noise and coherence in Josephson phase qubits --- Ramsey Tomography Oscilloscope ============================== Here we describe the data processing procedure for the Ramsey Tomography Oscilloscope (RTO). We found that careful signal processing was important in reducing statistical noise in the power spectra generated by the RTO. The bandwidth of the RTO measurement is set fundamentally by the rate at which the qubit can be measured and reset. In our case this would allow ideally 10,000 quantum measurements per second. With our current asynchronous control software this limit could not be reached while simultaneously tracking the time at which each measurement occurred. Maximum data rate with accurate time stamping was achieved with 2,400 quantum measurements per second, 600 of each of the four tomography sequences. Averaging the 600 measurements together produced one frequency measurement per second. This set the bandwidth of the experiment to be 0.5 Hz due to the Nyquist criterion. ![Cross spectra. (a) Cross spectrum measured using the RTO. (b) Cross spectrum computed from two independently simulated $1/f$ noise signals.[]{data-label="Figure:crossSpectrum"}](figure1Supplementary.eps){width="9cm"} Data was typically acquired for eight to ten hours, yielding between 28,000 and 36,000 points in the time series. Power spectra are computed as follows. First, the time series is divided into four or five non-overlapping sections. We compute the power spectrum of each section separately and average them together at the end of the procedure. To eliminate uncorrelated quantum measurement shot noise, we use an interleaving procedure on each section. Each section is split into two interleaved time series, $f_1(n)$ and $f_2(n)$ ($n$ is the discrete time index). These series are multiplied by Hann windows and the discrete Fourier transforms $F_1(k)$ and $F_2(k)$ are computed ($k$ is a frequency bin index). We form the product $F_1(k)F_2^*(k)$, average neighboring bins together using a Gaussian weight function with full width at half maximum (FWHM) of 20 bins, and take the magnitude to obtain the periodogram $P(k)$. Next, the periodogram is multiplied by a factor 1/0.375 to correct for the loss of incoherent (noise) power caused by application of the Hann window [@Harris:Windows]. The periodogram is then smoothed by averaging neighboring frequency bins with a Gaussian weight function with a variable FWHM scaling quadratically from 1 bin at the low end of the frequency band to 20 bins at the high end. The power spectrum $S(f)$ is then computed from the periodogram according to $$S(f)=\frac{2T}{(N/2)^2}P(k=fT)$$ where $T$ is the total length of time represented by the section of the time series, and $N$ is the number of points in the section. Finally, spectra generated from each section are averaged together. 
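The processing chain described above can be summarized in a short NumPy sketch. This is our own illustrative reconstruction, not the laboratory code; function and variable names are ours, the section length is taken as the $N$ in the scaling formula, and the Gaussian bin-averaging and variable-width smoothing steps are omitted for brevity.

```python
# Sketch (not the lab code) of the spectral processing described above: split a
# section into two interleaved series, Hann-window them, form the cross
# periodogram F1*conj(F2), correct for the Hann incoherent-power loss (1/0.375),
# and scale to a power spectral density S(f) = 2T/(N/2)^2 P.
import numpy as np

def rto_power_spectrum(f10, dt, n_sections=4):
    """f10: time series of qubit frequency (one point per second in the text);
    dt: sampling interval in seconds; returns (freqs, S) averaged over sections."""
    f10 = np.asarray(f10, float)
    L = (len(f10) // n_sections) & ~1            # equal, even-length sections
    spectra = []
    for s in range(n_sections):
        sec = f10[s * L:(s + 1) * L]
        f1, f2 = sec[0::2], sec[1::2]            # interleaved series
        w = np.hanning(len(f1))
        F1, F2 = np.fft.rfft(f1 * w), np.fft.rfft(f2 * w)
        P = np.abs(F1 * np.conj(F2)) / 0.375     # cross periodogram, Hann correction
        T_sec = L * dt                           # duration of this section
        spectra.append(2 * T_sec / (L / 2) ** 2 * P)
    freqs = np.fft.rfftfreq(L // 2, d=2 * dt)    # interleaving halves the sample rate
    return freqs, np.mean(spectra, axis=0)
```

Because the interleaved series share the correlated flux-noise signal but not the quantum-measurement shot noise, the cross periodogram suppresses the uncorrelated noise floor, as intended by the interleaving procedure above.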
Cross Correlation ================= In order to check that the flux noise we measured was generated locally to each device, we used the RTO to measure the cross correlation of the noise signals generated in two devices separated by 500 $\mu$m on the same chip. Time series of the two devices’ resonance frequencies were measured using the RTO, and the cross correlation was computed. Results are shown in Fig. \[Figure:crossSpectrum\]. Although there are frequencies at which the cross correlation amplitude is as high as 0.3, this must be compared against the cross correlation computed for two independently simulated noise signals. We find that the cross correlation of two independently simulated $1/f$ noise signals shows very similar peak structure to the data, indicating that the noise within the two qubits is no more correlated than independent noise. This result agrees with the finding in Ref. [@Bialczak:Tomography], where it was inferred from quantum state tomography performed on two coupled qubits that dephasing in each qubit was uncorrelated. We note that the absence of a low frequency roll-off in the RTO data indicates that the low frequency flux noise is correlated on time scales exceeding the length of data acquisition. For this reason it is unsurprising that residual cross-correlation was found in both the data and the simulation. Comparison of RTO and Ramsey Decay ================================== ![(color online) The integrand of $I$. The curves are well behaved over the entire integration range.[]{data-label="Figure:ramseyIntegrands"}](figure2Supplementary.eps){width="9cm"} ![(color online) The integral $I$ evaluated versus $t$ for several values of $\alpha$. Note the strong sensitivity to $\alpha$; as $\alpha$ goes from 1.0 to 1.15, a $15\%$ change, the integral increases by a factor of $\sim$10.[]{data-label="Figure:ramseyIntegrals"}](figure3Supplementary.eps){width="9cm"} We wish to fit our Ramsey decay data to the theoretical curve given by Eq.(1) in the main text, which, for the case where the flux noise is $S_{\Phi}(f)=S_{\Phi}^*/f^{\alpha}$, is $$p(t) = \exp \left[ -\frac{(2\pi)^2}{2} \left( \frac{df_{10}}{d\Phi}\right)^2 S_{\Phi}^* \, t^{1+\alpha} \int_{f_{\textrm{m}}t}^{\infty} \frac{\textrm{sin}(\pi z)^2}{(\pi z)^2} \frac{dz}{z^{\alpha}} \right] \label{eq:ramseyFormula}$$ Here $\textrm{sinc}(x)\equiv \sin(x)/x$, $f_{\textrm{m}} \approx 1/\textrm{hour}$ and $t$ is in the range 0 to 400 ns [^1]. In order to do this we need to evaluate the integral $$I = \int_{f_{\textrm{m}}t}^{\infty} \frac{\sin(\pi z)^2}{(\pi z)^2}\frac{dz}{z^{\alpha}} .$$ We compute the integral numerically. Since $f_{\textrm{m}}t$ is on the order of $10^{-12}$, the lower limit of integration is a very small positive number and the integrand is diverging at the lower limit. On the other hand, the integrand oscillates for $z>1$. The integral is therefore unfit for numerical analysis in its current form as it has both divergent and oscillatory behavior. The problem is mitigated by the change of variables $x\equiv -\ln(z)$ which yields $$I = \int_{-\infty}^{-\ln(f_{\textrm{m}}t)} \frac{\sin(\pi e^{-x})^2} {(\pi e^{-x})^2}\,e^{(\alpha-1)x}\,dx$$ The integrand is now well conditioned over the whole integration range. Plots of this integrand for several values of $\alpha$ are shown in Fig.\[Figure:ramseyIntegrands\]. Note that an upper cutoff in the frequency integral would translate to a lower cutoff in the integral over $x$.
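A minimal numerical sketch of the transformed integral is given below; this is our own illustration, not the original analysis code. It truncates the oscillatory high-frequency tail (large $z$, i.e. very negative $x$), whose contribution is negligible, in the same spirit as the cutoff remark that follows, and tabulates $I$ on a grid of $t$ and $\alpha$ of the kind used for the fits.

```python
# Sketch (not the original analysis code) of the numerical evaluation of I
# after the substitution x = -ln(z); f_m, the t grid and alpha values are
# the example values quoted in the text.
import numpy as np
from scipy.integrate import quad

def I_of_t(t, alpha, f_m=1 / 3600.0):
    """I = integral_{f_m t}^inf sin^2(pi z)/(pi z)^2 z^{-alpha} dz via x = -ln z."""
    x_max = -np.log(f_m * t)
    def integrand(x):
        z = np.exp(-x)
        return (np.sin(np.pi * z) / (np.pi * z)) ** 2 * np.exp((alpha - 1.0) * x)
    # Contributions from z > e^2 (x < -2) are bounded by ~1e-3 and neglected here.
    val, _ = quad(integrand, -2.0, x_max, limit=200)
    return val

# Tabulate I over the experimental range of t for a few alpha values; in
# practice these tables would be turned into interpolating functions for the fit.
ts = np.linspace(1e-9, 400e-9, 50)
table = {a: np.array([I_of_t(t, a) for t in ts]) for a in (1.0, 1.05, 1.1, 1.15)}
print({a: round(v[-1], 2) for a, v in table.items()})   # I at t = 400 ns
```

The tabulated values reproduce the qualitative behaviour described here: $I$ grows by roughly an order of magnitude as $\alpha$ increases from 1.0 to 1.15.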
Because of the logarithmic scale combined with the very small value of the integrand for values of $x \leq 5$, ignoring a possible upper cutoff greater than 1MHz incurs negligible error. We perform the integral $I$ for 50 values of $t$ in the experimental range 0 to 400 ns, and for several values of $\alpha$ near 1. Results of the integration as a function of $t$ are shown in Fig.\[Figure:ramseyIntegrals\]. We also show $I$ as a function of $\alpha$ for two fixed values of $t$ in Fig.\[Figure:ramseyIntegralsVsAlpha\]. From these curves we construct interpolating functions and use them to fit our measured Ramsey decay data to Eq.(\[eq:ramseyFormula\]). Note particularly in Fig.\[Figure:ramseyIntegralsVsAlpha\] the strong dependence of the noise integral on $\alpha$. It is because of this strong dependence that we are able to accurately determine which value of $\alpha$ gives the best agreement with the power spectra measured directly using the RTO, as described in the main text. ![(color online) The integral $I$ evaluated as a function of $\alpha$ for several values of $t$. Note the strong dependence on $\alpha$.[]{data-label="Figure:ramseyIntegralsVsAlpha"}](figure4Supplementary.eps){width="9cm"} [^1]: The lower cutoff frequency $f_{\textrm{m}}=1/\textrm{hour}$ arises from the manner in which we acquire the data. Because we use a projective quantum measurement to read the state of the qubit, we must repeat the experiment for each value of $t$ many times to reduce statistical noise. Rather than average each point in $t$ sequentially, we spread the averaging of each point over the entire trace acquisition period. This results in better averaging of the noise signal as explained in Ref.[@VanHarlingen:CriticalCurrentDecoherence].
--- abstract: 'On the basis of the angular spectrum representation, the reversed propagation dynamics of Laguerre-Gaussian beams in left-handed materials (LHMs) is presented. We show that the negative phase velocity gives rise to a reversed screw of the wave-front, and ultimately leads to a reversed rotation of the optical vortex. Furthermore, the negative Gouy-phase shift causes an inverse spiral of the Poynting vector. It is found that the Laguerre-Gaussian beam in LHMs will present the same propagation characteristics as the counterpart with opposite topological charge in regular right-handed materials (RHMs). The momentum conservation theorem ensures that the tangential component of the wave momentum at the RHM-LHM boundary is conserved. It is shown that although the linear momentum reverses its direction, the angular momentum remains unchanged.' author: - 'Hailu Luo$^{1,2}$' - 'Zhongzhou Ren$^{1}$' - 'Weixing Shu$^{2}$' - 'Shuangchun Wen$^{2}$' title: 'Reversed propagation dynamics of Laguerre-Gaussian beams in left-handed materials' --- Introduction {#Introduction} ============ Almost 40 years ago, Russian scientist Victor Veselago proposed that a material with electric permittivity $\varepsilon<0$ and magnetic permeability $\mu<0$ would reverse all known optical properties [@Veselago1968]. He termed these media left-handed materials (LHMs) since the wave vector ${\bf k}$ forms a left-handed triplet with the vectors ${\bf E}$ and ${\bf H}$. That is, the phase velocity and the Poynting vector are antiparallel, which consequently results in counter-intuitive phenomena such as reversals of the conventional Doppler shift and Cherenkov radiation as well as reversed refraction. Veselago pointed out that electromagnetic waves incident on a planar interface between a regular right-handed material (RHM) and a LHM will undergo negative refraction. Hence a LHM planar slab can act as a lens and focus waves from a point source. Recently, Pendry extended Veselago’s analysis and further predicted that a LHM slab can amplify evanescent waves and thus behaves like a perfect lens [@Pendry2000]. Pendry proposed that the amplitudes of evanescent waves from a near-field object could be restored at its image. Therefore, the spatial resolution of the superlens can overcome the diffraction limit of conventional imaging systems and reach the sub-wavelength scale. The physical realization of such a LHM was demonstrated only recently for a novel class of engineered composite materials [@Smith2000; @Shelby2001; @Parazzoli2003; @Houck2003]. After the first experimental observation of negative refraction, intriguing and counterintuitive phenomena in LHMs, such as amplification of evanescent waves [@Pendry2000; @Fang2005], unusual photon tunneling [@Zhang2002; @Kim2004], and the negative Goos-Hänchen shift [@Kong2002; @Berman2002], have attracted much attention. Here we want to explore the reversed propagation dynamics of Laguerre-Gaussian beams in LHMs. The propagation of Laguerre-Gaussian beams has been investigated in conventional RHMs [@Basistiy1995; @Rozas1997; @Curtis2003; @Grier2003]. Such beams have a phase dislocation on the beam axis that in the related literature is sometimes referred to as an optical vortex [@Curtis2003]. For a general Laguerre-Gaussian beam the Poynting vector has an azimuthal component. This means that there is an energy flow along the circumference of the beam as it propagates, giving rise to an orbital angular momentum [@Allen1992].
It is found that the spiral of the Poynting vector of a Laguerre-Gaussian beam is proportional to the Gouy-phase shift [@Padgett1995; @Allen2000]. It is known that an electromagnetic beam propagating through a focus experiences an additional $\pi$ phase shift with respect to a plane wave. This phase anomaly was discovered by Gouy in 1890 and has since been referred to as the Gouy-phase shift [@Siegman1986]. Because of the negative index, however, we can expect a reversed Gouy-phase shift in LHMs [@Luo2007b]. Hence it will be interesting for us to describe in detail how the Poynting vector evolves as it propagates and how the reversed Gouy-phase shift affects its spiral in LHMs. In this work, we will reveal the reversed propagation dynamics of Laguerre-Gaussian beams in LHMs, such as the inverse screw of the wave-front, the inverse spiral of the Poynting vector, and the inverse rotation of the vortex field. First, starting from the plane-wave angular spectrum representation, we obtain the analytical description for a Laguerre-Gaussian beam propagating in LHMs. Our formalism permits us to introduce the reversed Gouy-phase shift to describe the wave propagation. Next, we will examine how the wave-front and the Poynting vector evolve, and how the reversed Gouy-phase shift affects their propagation behavior. Then, we investigate how the negative index influences the linear momentum and angular momentum of Laguerre-Gaussian beams. Finally, we will explore how the negative index gives rise to the reversed rotation of the vortex field. For comparison, the corresponding propagation characteristics in RHMs will also be discussed. The Paraxial propagation of a Laguerre-Gaussian beam {#II} ==================================================== To investigate the propagation dynamics of a Laguerre-Gaussian beam in LHMs, we use Maxwell's equations to determine the field distribution both inside and outside the LHM. We consider a monochromatic electromagnetic field ${\bf E}({\bf r},t) = Re [{\bf E}({\bf r})\exp(-i\omega t)]$ and ${\bf B}({\bf r},t) = Re [{\bf B}({\bf r})\exp(-i\omega t)]$ of angular frequency $\omega$ propagating from the RHM to the LHM. The field can be described by Maxwell’s equations $$\begin{aligned} \nabla\times {\bf E} &=& - \frac{\partial {\bf B}}{\partial t}, ~~~{\bf B} = \mu_0 \boldsymbol{\mu}\cdot{\bf H},\nonumber\\ \nabla\times {\bf H} &=& \frac{\partial {\bf D}}{\partial t},~~~~~{\bf D} =\varepsilon_0 \boldsymbol{\varepsilon} \cdot {\bf E}. \label{maxwell}\end{aligned}$$ From Maxwell’s equations, we can easily find that wave propagation is only permitted in a medium with $\varepsilon, \mu>0$ or $\varepsilon,\mu<0$. In the former case, ${\bf E}$, ${\bf H}$ and ${\bf k}$ form a right-handed triplet, while in the latter case, ${\bf E}$, ${\bf H}$ and ${\bf k}$ form a left-handed triplet. We introduce the Lorentz-gauge vector potential to describe the propagation characteristics of Laguerre-Gaussian beams in RHMs and LHMs. The vector potential of the beam propagating in the $+z$ direction can be written in the form $${\bf A}=A_0(\alpha{\bf e}_x+\beta{\bf e}_y)u_{p,l}({\bf r})\exp(i k z-i\omega t),\label{ca}$$ where $A_0$ is a complex amplitude, ${\bf e}_x$ and ${\bf e}_y$ are unit vectors, $k =n_{R,L}\omega/c$, $c$ is the speed of light in vacuum, and $n_R=\sqrt{\varepsilon_R\mu_R}$ and $n_L=-\sqrt{\varepsilon_L\mu_L}$ are the refractive indices of the RHM and the LHM, respectively [@Veselago1968].
The coefficients $\alpha$ and $\beta$ satisfying $\sigma=i(\alpha\beta^\ast-\alpha^\ast\beta)$, are the polarization operators with $\sigma=\pm1$ for left-handed and right-handed circularly polarized light. When the field distribution is specified at a boundary surface or a transverse plane, one can obtain a unique solution of the electric field of the wave propagating in the $+z$ direction. Here, we assume that the transverse electric field at the $z=0$ plane is given by a Laguerre-Gaussian function as follows: $$\begin{aligned} u(r,\varphi,0)=\frac{C_{pl}}{w_{0}} \left[\frac{\sqrt{2}r}{w_{0}^2}\right]^{|l|} L_p^{|l|}\left[\frac{2 r^2}{w_{0}^2}\right]\exp\left[\frac{r^2}{w_{0}^2}-il\varphi\right].\label{F0}\end{aligned}$$ A Laguerre-Gaussian beam has two mode indices to fully describe the mode: $l$ and $p$. A given mode will have $l$ complete cycles of phase $2\pi$ upon going around the mode circumference, so that $l$ is known as the azimuthal index. The index $p$ gives the number $p+1$ of radial nodes. Laguerre-Gaussian light beams are well known to possess orbital angular momentum due to an $\exp [il\varphi]$ phase term, where $\varphi$ is the azimuthal phase. This obital angular momentum $l \hbar$ is distinct from the spin angular momentum due to the polarization state of the light [@Allen1992]. From the point of view of Fourier optics, we know that if the Fourier component at the $z=0$ plane represents the angular spectrum that the transverse component of the wave propagating in the half space $z>0$ should have. Then, the field in the region $z>0$ can be expressed by an integral of the plane wave components associated with the angular spectrum given at the $z>0$ plane [@Goodman1996]. The angular spectrum is related to the boundary distribution of the field by means of the relation $$\begin{aligned} \tilde{{u}}(k)=\int_0^\infty d r r J_l(k r)u( r,\varphi,0),\label{as}\end{aligned}$$ where $J_l$ is the first kind of Bessel function with order $l$. The two-dimensional Fourier transformations of Eq.(\[as\]) can be easily obtained from an integration table [@Gradshteyn1980]. In fact, after the field on the plane $z=0$ is known, Eq. (\[F0\]) together with Eq. (\[as\]) provides the expression of the field in the space $z>0$, which yields $${u}(r,\varphi,z )=\int_0^\infty d k k \exp \bigg(-\frac{i k^2 z}{2n_{R,L} k_0}\bigg)J_l(kr)\tilde{u}(k).\label{field}$$ which is a standard two-dimensional Fourier transform [@Goodman1996]. The field $u({\bf r}, z)$ is the slowly varying envelope amplitude which satisfies the paraxial wave equation $$\bigg[i\frac{\partial}{\partial z}+\frac{1}{2 n_{R,L} k_0}\nabla_\perp^2 \bigg] u({\bf r},z)=0,\label{pe}$$ where $\nabla_\perp=\partial_x {\bf e}_x+ \partial_y {\bf e}_y$. From Eq. (\[pe\]) we can find that the field of paraxial beam in LHMs can be written in the similar way to that in RHMs, while the sign of the refractive index is negative. The gauge condition on the vector and scalar potentials takes the form $\phi=(i/k)\nabla\cdot {\bf A}$. 
The electric and magnetic fields are obtained from the potentials as $$\begin{aligned} {\bf E}({\bf r},t)&=&-\frac{\partial{\bf A}}{\partial t}-\nabla \phi=A_0\bigg[i \omega (\alpha{\bf e}_x+\beta{\bf e}_y)u\nonumber\\ &&-\bigg(\alpha \frac{\partial u}{\partial x}+\beta \frac{\partial u}{\partial y}\bigg){\bf e}_z\bigg]\exp(ikz-i\omega t) ,\label{Efield}\end{aligned}$$ $$\begin{aligned} {\bf B}({\bf r},t)&=&\nabla \times{\bf A} =A_0\bigg[-i k (\beta{\bf e}_x-\alpha{\bf e}_y)u\nonumber\\ &&+\bigg(\beta \frac{\partial u}{\partial x}-\alpha \frac{\partial u}{\partial y}\bigg){\bf e}_z\bigg]\exp(ikz-i\omega t) .\label{Hfield}\end{aligned}$$ These field expressions neglect terms in each component that are smaller than those retained in accordance with the paraxial approximation [@Lax1975]. The $z$ components are smaller than the $x$ and $y$ components by a factor of order $1/k w_0$. It is readily verified that the fields satisfy Maxwell’s equations. Note that the Cartesian derivatives can be converted to polar $r$ and $\varphi$ derivatives in the usual way. To be uniform throughout the following analysis, we introduce different coordinate transformations $z_i^\ast (i=1,2)$ in the RHM and the LHM, respectively. First we want to explore the field in the RHM. Without any loss of generality, we assume that the input waist locates at the object plane $z=-a$ and $z_1^\ast=z+a$. The field in the RHM can be written as $$\begin{aligned} u_{pl}^R=&&\frac{C_{pl}}{w(z_1^\ast)} \left[\frac{\sqrt{2}r}{w^2(z_1^\ast)}\right]^{|l|} L_p^{|l|}\left[\frac{\sqrt{2}r}{w^2(z_1^\ast)}\right] \exp\bigg[\frac{-r^2}{w^2(z_1^\ast)}\bigg]\nonumber\\ &&\times \exp\bigg[i n_R k_0 z_1^\ast+\frac{-i n_R k_0 r^2 z_1^\ast}{R(z_1^\ast)}\bigg]\exp[-il\varphi]\nonumber\\ &&\times \exp[-i (2p+|l|+1)\arctan (z_1^\ast/z_R)],\label{F1}\end{aligned}$$ $$\begin{aligned} w(z_{1}^\ast)=w_0\sqrt{1+(z_{1}^\ast/z_R)^2},~~R(z_{1}^\ast)=z_1^\ast+\frac{z_R^2}{z_1^\ast}.\end{aligned}$$ Here $C_{pl}$ is the normalization constant, $L_p^l[2 r^2/w_{1}^2(z_1^\ast)]$ is a generalized Laguerre polynomial, $z_R= n_R k_0 w_0^2 /2$ is the Rayleigh length, $w(z_{1}^\ast)$ is the beam size and $R(z_{1}^\ast)$ the radius of curvature of the wave front. The last term in Eq. (\[F1\]) denotes the Gouy phase which is given by $\Phi_1=-(2p+|l|+1)\arctan(z_1^\ast/z_R)$. We are now in a position to calculate the field in LHM. In fact, the field in the RHM-LHM boundary can be easily obtained from Eq. (\[F1\]) by choosing $z=0$. The plane-wave spectrum of the Laguerre-Gaussian beam can be obtained by performing the two-dimensional Fourier transform in Eq. (\[as\]). After the plane-wave spectrum on the plane $z=0$ is known, Eq. (\[field\]) provides the expression of the field in the space $z>0$. For simplicity, we assume that the wave propagates through the boundary without reflection, the field in the LHM can be written as $$\begin{aligned} u_{pl}^L=&&\frac{C_{pl}}{w(z_2^\ast)} \left[\frac{\sqrt{2}r}{w^2(z_2^\ast)}\right]^{|l|} L_p^{|l|}\left[\frac{\sqrt{2}r}{w^2(z_2^\ast)}\right] \exp\bigg[\frac{-r^2}{w^2(z_2^\ast)}\bigg]\nonumber\\ &&\times \exp\left[i n_L k_0 z_2^\ast+\frac{-i n_L k_0 r^2 }{R(z_2^\ast)}\right]\exp[-il\varphi]\nonumber\\&&\times \exp[-i (2p+|l|+1)\arctan (z_2^\ast/z_L)],\label{F2}\end{aligned}$$ $$\begin{aligned} w(z_{2}^\ast)=w_0 \sqrt{1+(z_{2}^\ast/z_L)^2},~~R(z_{2}^\ast)=z_2^\ast+\frac{z_L^2}{z_2^\ast}.\label{w2}\end{aligned}$$ Here $z_2^\ast=z-(1-n_L/n_R)a$ and $z_L= n_L k_0 w_0^2 /2$ is the Rayleigh length in LHM. 
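A quick numerical check of Eqs. (\[F1\])–(\[F2\]) makes the sign reversal explicit: because $z_L=n_L k_0 w_0^2/2<0$ for $n_L<0$, the Gouy term $-(2p+|l|+1)\arctan(z^\ast/z_{R,L})$ accumulates with the opposite sign in the LHM. The short sketch below is ours, with illustrative wavelength, waist and index values rather than parameters from the paper.

```python
# Quick numerical check (ours, not the paper's code): with n_L = -n_R the signed
# Rayleigh length changes sign, so the Gouy phase -(2p+|l|+1) arctan(z*/z_{R,L})
# reverses its sign of accumulation in the LHM.
import numpy as np

lam0, w0 = 632.8e-9, 50e-6                      # example wavelength and waist
k0 = 2 * np.pi / lam0

def gouy(z_star, n, p=0, l=1):
    zR = n * k0 * w0**2 / 2                     # signed Rayleigh length z_{R,L}
    return -(2 * p + abs(l) + 1) * np.arctan(z_star / zR)

z = np.linspace(0, 5e-3, 6)                     # propagation distances (m)
print("RHM (n=+1.5):", np.round(gouy(z, +1.5), 3))
print("LHM (n=-1.5):", np.round(gouy(z, -1.5), 3))
```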
The beam size $w(z_2^\ast)$ and the radius of curvature $R(z_2^\ast)$ are given by Eq. (\[w2\]). The Gouy-phase shift in LHM is given by $\Phi_G=-(2p+|l|+1)\arctan (z_2^\ast/z_L)$. Because of the negative index, the reversed Gouy-phase shift should be introduced. A more intuitive interpretation of the reversed Gouy phase can be given in terms of a geometrical quantum effect [@Hariharan1996] or the uncertainty principle [@Feng2001]. As can be seen in the following section, the inverse Gouy-phase shift will give rise to an inverse spiral of Poynting vector. ![\[Fig1\] (Color online) The helical wave front for Laguerre-Gaussian beam with $l=1$ result from an azimuthal phase structure of $\exp[-i\varphi]$. In the LHM, the phase velocity ${\bf v}_p$ reverses its direction. The wave-fronts exhibit anti-clockwise screw in the RHM, while present clockwise screw in the LHM.](Fig1.eps){width="8cm"} For a Laguerre-Gaussian beam with $l\neq 0$, the on-axis phase form $\exp [il\varphi]$ results in that the surfaces of wave-front have helical form. Specifically, $l$ refers to the number of complete cycles of phase $2\pi$ upon going around the beam circumference. Now let us to study the screw of the wave front. Here the sense of the positive angles is chosen as anticlockwise, while negative angles are considered in the clockwise direction. In the regular RHM, the constant wavefront satisfies $$n_R k_0 z_1^\ast+\frac{-i n_R k_0 r^2 }{R(z_1^\ast)}-l\varphi+\Phi_G=const.$$ The schematic view of the wave front is a three-dimensional screw surface of ($r\cos\varphi$, $r \sin\varphi$, $z$). The plotting range of $r$ is from $0$ to $5w_0$ with the interval of $\Delta r=0.5 w_0$ and that of $n_R k_0 z$ is from $-4\pi$ to $0$ with the interval of $n_R k_0 \Delta z=0.1\pi$. The wavefront structure exhibits a anticlockwise-screw type with a pitch of $\lambda_0/n_R$ along the $+z$ axis. Next we explore the screwing fashion of wave front in the LHM. The constant wavefront satisfies $$n_L k_0 z_2^\ast+\frac{-i n_L k_0 r^2 }{R(z_2^\ast)}-l\varphi+\Phi_G=const.$$ The plotting range of $r$ is from $0$ to $5w_0$ with the interval of $\Delta r=0.5 w_0$ and that of $n_L k_0 z$ is from $0$ to $-4\pi$ with the interval of $n_L k_0 \Delta z=-0.1\pi$. The wave-front structure is a clockwise-screw type with a pitch of $\lambda_0/|n_L|$ along the $+z$ axis. Figure \[Fig2\] shows a typical form of a helical wavefront structure from the RHM to the LHM. At the RHM-LHM interface, the wave front will reverse its screwing fashion. It is intriguing to observe that the wave-front of Laguerre-Gaussian beam with $l=1$ in LHMs will exhibit the same skewing fashion as the counterpart with $l=-1$ in RHMs. As can be seen in the following section, the inverse screw of wavefront will result in an inverse rotation of optical vortex. Poynting vector and angular momentum ==================================== The propagation characteristics of electromagnetic fields are closely linked to their local energy flow, which is usually discussed by use of the Poynting vector. There has been considerable interest in orbital angular momentum of Laguerre-Gaussian beams [@Allen1992] relating to Poynting vector in free space. The Poynting vector has a magnitude of energy per second per unit area and a direction which represents the energy flow at any point in the field. 
The time average Poynting vector, ${\bf S}$ can be written as $${\bf S}=\frac{1}{2}\text{Re}[{\bf E}\times{\bf H}^\ast].\label{PV}$$ The spiral of the Poynting vector in free space or regular RHMs has been discussed extensively [@Allen1992; @Padgett1995; @Allen2000; @Volyar1999; @London2003; @Padgett2003]. Now a question arise: what happens in LHMs with simultaneously negative permeability and permittivity? The potential interests encourage us to derive a general expression to describe the Poynting vector in RHMs and LHMs. Substituting the expression of Eqs. (\[Efield\]) and (\[Hfield\]) into Eq. (\[PV\]) we find $$\begin{aligned} S_r&=&\frac{1}{\mu\mu_0} \frac{\omega k r}{R}|u|^2,\nonumber\\ S_\varphi&=&\frac{1}{\mu\mu_0} \left[\frac{\omega l}{r}|u|^2-\frac{1}{2}\omega \sigma\frac{\partial |u|^2}{\partial r}\right],\nonumber\\ S_z &=&\frac{1}{\mu\mu_0}\omega k |u|^2.\label{PVD}\end{aligned}$$ Here the component $S_r$, relates to the spread of the beam as it propagates. The azimuthal component $S_\varphi$ describes the energy flow that circulates around the propagating axis. The presence of this flow is due to the existence of the longitudinal components $E_z$ and $H_z$ of the field. The first term of the azimuthal component depends on $l$, where $l\hbar$ has been identified as the orbital angular momentum per photon [@Allen1992]. Its second term relates to the contribution of polarization and intensity gradient. The contribution of circular polarization will lead to a spin angular momentum of the beam. The axial component $S_z$ describes the energy flow that propagates along the $+z$ axis. Next, we attempt to explore the angular momentum in LHMs. Since the dispersion cannot be ignored in a causal system with a negative index of refraction. The momentum conservation theorem should be derived from the Maxwell equations and the Lorentz force and is given by [@Kong2005] $$\nabla \cdot {\bf T}+ \frac{\partial {\bf G}}{\partial t}=-{\bf F}.\label{mct}$$ where ${\bf G}$ is the momentum density vector and ${\bf F}$ is the force density. The momentum flow ${\bf T}$ also referred as the Maxwell stress tensor. It is well-known that a material contribution to the energy density accompanies the propagation of electromagnetic energy in dispersive materials. Analogously, there exists a corresponding material contribution to the wave momentum. Thus the momentum conservation equation for the electromagnetic wave can be written in the form [@Kemp2005] $$\begin{aligned} {\bf G}&=&\frac{1}{2}\text{Re}\left[\varepsilon \mu {\bf E}\times{\bf H}^\ast+\frac{\bf k}{2}\left(\frac{\partial \varepsilon}{\partial \omega}|{\bf E}|^2+\frac{\partial \mu}{\partial \omega}|{\bf H}|^2\right)\right],\nonumber\\ {\bf T}&=&\frac{1}{2}\text{Re} [({\bf D}\cdot{\bf E}^\ast+{\bf B}\cdot{\bf H}^\ast)I -({\bf D} {\bf E}^\ast+{\bf B} {\bf H}^\ast)],\nonumber\\ {\bf F}&=&\frac{1}{2}\text{Re}[\rho_e {\bf E}^\ast+{\bf J}\times{\bf B}^\ast+\rho_m {\bf H}^\ast+{\bf M}\times {\bf D}^\ast].\label{momentum}\end{aligned}$$ Here the momentum density ${\bf G }$ contains the Minkowski momentum ${\bf G}_M={\bf D}\times {\bf B}$ plus material dispersion terms. The tensor $I$ is $3\times3$ identity matrix. The electric and magnetic polarization vectors are give by ${\bf P}_e=\varepsilon_0 (\varepsilon-1){\bf E}$ and ${\bf P}_m=-\mu_0 (\mu-1){\bf H}$, respectively. Bound electric current ${\bf J}=\partial {\bf P}_e/\partial t $ and bound electric charge $\rho_e=\nabla\cdot{\bf P}_e$ have been accounted. 
Similarly, bound magnetic current ${\bf M}=\partial {\bf P}_m/\partial t $ and bound magnetic charge $\rho_m=\nabla\cdot{\bf P}_m$ should be introduced to describe the angular momentum flow in LHMs. Now we want to enquire: how can the directions of the momentum density and the momentum flow be determined? It is well known that the time-domain energy density in a frequency nondispersive medium is defined as $W=\frac{1}{2}[{\bf D}\cdot {\bf E}+{\bf B}\cdot {\bf H}]$. Obviously, the energy density in LHMs would be negative if the permittivity and permeability were negative. Hence, the energy density in a frequency dispersive medium is defined as [@Jackson1999; @Landau1984] $$W=\frac{1}{4}\left[\frac{\partial (\varepsilon \omega)}{\partial \omega}|{\bf E}|^2+\frac{\partial (\mu \omega)}{\partial \omega}|{\bf H}|^2\right].\label{energy}$$ In principle, the energy density can be decomposed into electric and magnetic parts. Positivity of the electric and magnetic energies requires $\partial (\varepsilon \omega)/\partial \omega>0$ and $\partial (\mu \omega)/\partial \omega>0$. Subsequent calculations of Eq. (\[momentum\]) show that both the momentum density ${\bf G }$ and the momentum flow ${\bf T}$ in LHMs are antiparallel to the power flow ${\bf S}=\frac{1}{2} \text{Re}[{\bf E}\times{\bf H}^\ast]$. Hence we conclude that the linear momentum flux will reverse its direction in LHMs. Note that Eqs. (\[momentum\]) and (\[energy\]) are valid only for lossless media, and their application to lossy media produces unphysical results such as a negative energy in LHMs. In a lossy and dispersive LHM, the momentum flow of a monochromatic wave is opposite to the power flow direction. However, the momentum density may be parallel or antiparallel to the power flow [@Kemp2007]. The cross product of the radius vector ${\bf r}$ with this momentum yields an angular momentum. The angular momentum flow in the $z$ direction depends upon the azimuthal component ${\bf T}_\varphi$, such that $${\bf J}_z= {\bf r} \times {\bf T}_\varphi.$$ Conservation of momentum at a material boundary ensures that the tangential component of the wave momentum is conserved [@Kemp2005; @Kemp2007]. Hence the angular momentum flow still remains unchanged in the LHM. We can predict theoretically that the orbital angular momentum per photon still remains $l \hbar$. In order to accurately describe the angular momentum flow, it is necessary to include material dispersion and losses. Thus a certain dispersion relation, such as the Lorentz medium model, should be involved. It has been shown that the trajectory at peak intensity becomes a straight line skewed with respect to the beam axis in RHMs [@Courtial2000]. To obtain a better physical picture of the straight trajectory in LHMs, the ray optical models of the Laguerre-Gaussian beam are plotted in Fig. \[Fig2\]. Within a ray optical picture, the angular spectrum may be represented by skew rays in the optical beam. The screwing behavior of the rays can be deduced from Eq. (\[PVD\]), from which we see that each ray has an azimuthal angle $\theta=l/(n_{R,L} k_0 r)$ and a polar angle $\eta=r/R$ with respect to the beam axis. Hence all rays lie on a single-sheeted hyperboloid surface. Note that the wave-vector and the Poynting vector are parallel in the RHM and antiparallel in the LHM. Energy conservation requires that the $z$ component of the Poynting vector must point away from the interface. 
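The positivity requirement on Eq. (\[energy\]) is easy to verify for any concrete dispersion law. The sketch below uses a lossless Drude-type model as an assumed example (the discussion above does not commit to a particular model); all frequencies are illustrative.

```python
import numpy as np

# Sketch of the positivity requirement on Eq. (energy) for a lossless
# Drude-type dispersion (an assumed model, not specified in the paper).
w_pe = 2 * np.pi * 10e9          # assumed electric plasma frequency
w_pm = 2 * np.pi * 8e9           # assumed magnetic resonance scale
w = 2 * np.pi * 5e9              # operating frequency inside the LHM band

eps = 1.0 - (w_pe / w) ** 2      # epsilon(w) < 0 for w < w_pe
mu = 1.0 - (w_pm / w) ** 2       # mu(w) < 0 for w < w_pm
d_eps_w = 1.0 + (w_pe / w) ** 2  # d(eps*w)/dw
d_mu_w = 1.0 + (w_pm / w) ** 2   # d(mu*w)/dw

print(f"eps = {eps:+.2f}, mu = {mu:+.2f}   (both negative: LHM band)")
print(f"d(eps*w)/dw = {d_eps_w:+.2f} > 0, d(mu*w)/dw = {d_mu_w:+.2f} > 0")
# Hence W = (1/4)[d(eps*w)/dw |E|^2 + d(mu*w)/dw |H|^2] stays positive.
```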
Both the wave-vector and the energy flow incident on a planar interface between a RHM and a LHM undergo negative refraction. Therefore all rays reverse their screwing fashion in the LHM (see Fig. \[Fig2\]). ![\[Fig2\] (Color online) Ray optical model of the Laguerre-Gaussian beam in the RHM and the LHM. The rays (green arrows) lie on a single-sheeted hyperboloid surface. Note that the arrows indicate the direction of the Poynting vectors. When rays travel from a RHM to a LHM, negative refraction results in a reversed screw. The rays exhibit an anti-clockwise screw in the RHM, while the rays present a clockwise screw in the LHM.](Fig2.eps){width="8cm"} To explore the reversed propagation dynamics in the LHM, we need to move beyond ray tracing. The ray optical models neglect diffraction and thus cannot be used to predict precisely the spiral of the Poynting vector. To include diffraction, we have to use a more accurate description of the electromagnetic field. It is well known that the trajectory of the Poynting vector is described by a spiral curve [@Allen1992]. The relative values of the components determine the trajectory of the Poynting vector. The spiral angle of the Poynting vector is given by $\theta=\theta_0+ \kappa z$, where $\kappa ={\partial\theta}/{\partial z}=S_\varphi/(r S_z)$ is the rate of rotation. The period of the trajectory along the $z$ axis is $2\pi/ \kappa$. For a general Laguerre-Gaussian beam, the rate of azimuthal rotation is related to the propagation distance by $$\begin{aligned} \frac{\partial\theta}{\partial z}=&&\frac{l}{n_{R,L} k_0 r^2}-\frac{\sigma |l|}{n_{R,L}k_0 r^2}+\frac{2 \sigma}{n_{R,L}k_0 w^2(z)}\nonumber\\ && +\frac{4 \sigma}{n_{R,L}k_0 w^2(z)}\frac{L_{p-1}^{|l+1|}[2r^2/w^2(z)]}{L_p^{|l|}[2r^2/w^2(z)]}.\label{RAD}\end{aligned}$$ Note that the first term is polarization independent, while the last three terms depend on the polarization. For modes with $p=0$, the final term is always zero. For a single-ringed Laguerre-Gaussian beam with $p=0$ but $l\neq 0$, the radius of peak intensity is given by $r_{max}=\sqrt{|l|/2}w(z)$. We find that, for all values of $l$ and $\sigma$, the rotation angle is given by $$\theta_{max}=\frac{l}{|l|}\arctan\frac{z}{z_{R,L}}.\label{Gouy}$$ Figure \[Fig3\] shows the vector fields illustrating the spiral angle of the Poynting vector. The Poynting vector exhibits an anticlockwise spiral in the RHM, as depicted in Fig. \[Fig3\](a), while it presents a clockwise spiral in the LHM, as plotted in Fig. \[Fig3\](b). The theoretical analysis and numerical calculations presented here coincide with experimental observations [@Leach2006]. For $l\neq 0$, Laguerre-Gaussian beams have annular intensity profiles and, as $r_{max}$ is typically much greater than the optical wavelength $\lambda$, the skew angle is expected to be very small. ![\[Fig3\] (Color online) Numerically computed field intensity distribution and transversal components of the Poynting vector (green arrows) for an $l=3$ Laguerre-Gaussian beam. (a) The Poynting vector exhibits an anticlockwise spiral in the RHM at $z_1^{\ast}=n_R k_0 w_0^2 /2$. (b) The Poynting vector presents a clockwise spiral in the LHM at $z_2^{\ast}=|n_L| k_0 w_0^2 /2$. For the purpose of comparison, we have chosen $n_L=-n_R$.](Fig3a.eps "fig:"){width="8cm"} ![\[Fig3\] (Color online) Numerically computed field intensity distribution and transversal components of the Poynting vector (green arrows) for an $l=3$ Laguerre-Gaussian beam. (a) The Poynting vector exhibits an anticlockwise spiral in the RHM at $z_1^{\ast}=n_R k_0 w_0^2 /2$. 
(b) The Poynting vector presents a clockwise spiral in the LHM at $z_2^{\ast}=|n_L| k_0 w_0^2 /2$. For the purpose of comparison, we have chosen $n_L=-n_R$.](Fig3b.eps "fig:"){width="8cm"} It can clearly be seen that the sense of the spiral (clockwise or anticlockwise) depends on the signs of $l$ and the Rayleigh length. However, the amount of rotation of the Poynting vector is independent of the magnitude of $l$. It is interesting to note that the Laguerre-Gaussian beam in LHMs will present the same fashion of spiral as the counterpart with opposite topological charge in RHMs. This is consistent with the ray optical model describing the spiral of the Poynting vector. Equation (\[Gouy\]) implies that the absolute rotation for a single-ringed Laguerre-Gaussian beam at $z=z_{R,L}$ is $\pi/4$, regardless of $l$. When the far-field pattern of the Poynting vector is calculated, it is found that the Poynting vector is spiraled by $\pi/2$. When $l=0$ and $p=0$, the Laguerre-Gaussian beam is identical to the fundamental Gaussian beam, and the spiral of the Poynting vector arises from the effect of circular polarization [@Allen2000]. The azimuthal rotations in the RHM and the LHM are related to the distance by $\theta=\sigma\arctan(z/z_{R})$ and $\theta=\sigma\arctan(z/z_{L})$, respectively. It is intriguing to note that the right-handed circularly polarized beam in LHMs will present the same fashion of spiral as the left-handed circularly polarized beam in RHMs. Hence we can describe quantitatively the amount of spiral of the Poynting vector, from which we can determine whether a material is a LHM or a RHM. ![\[Fig4\] (Color online) Interfering a Gaussian beam and a Laguerre-Gaussian beam of azimuthal index $l=3$ produces a vortex field with three spiral arms. (a) The field distribution at the plane $z_1^{\ast}=n_R k_0 w_0^2 /2$. (b) The field distribution at the plane $z_2^{\ast}=|n_L| k_0 w_0^2 /2$. When the vortex field enters the LHM, the rotation changes its fashion.](Fig4a.eps "fig:"){width="8cm"} ![\[Fig4\] (Color online) Interfering a Gaussian beam and a Laguerre-Gaussian beam of azimuthal index $l=3$ produces a vortex field with three spiral arms. (a) The field distribution at the plane $z_1^{\ast}=n_R k_0 w_0^2 /2$. (b) The field distribution at the plane $z_2^{\ast}=|n_L| k_0 w_0^2 /2$. When the vortex field enters the LHM, the rotation changes its fashion.](Fig4b.eps "fig:"){width="8cm"} Interfering the Laguerre-Gaussian beam with a fundamental Gaussian beam will transform the azimuthal phase variation of the pattern into an azimuthal intensity variation. Hence the helical phase ultimately results in a vortex field with $|l|$ spiral arms [@Padgett1996; @Soskin1997; @Macdonald2002]. These intriguing properties strongly motivate us to explore the vortex field propagation in LHMs. A simulation of the vortex field rotation produced by interfering a fundamental Gaussian beam and a Laguerre-Gaussian beam of azimuthal index $l=3$ is shown in Fig. \[Fig4\]. It can be seen that the vortex has three spiral arms, which is a result of the mismatch between the wave-fronts of the Laguerre-Gaussian beam and the Gaussian beam. The reversed screwing wave-fronts will directly cause an inverse rotation of the vortex field in the LHM. The vortex will always have a spiral shape unless the wavefronts of the two beams have the same curvature. For example, the vortex field at a focusing waist will exhibit $|l|$ intense spots. 
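The azimuthal structure of the interference pattern at a fixed radius, and its rotation when the relative phase of the Gaussian beam is changed (discussed further below), can be illustrated with a short script. This is a minimal sketch; the two on-ring amplitudes and the phase offsets are assumptions chosen only to exhibit the $|l|$ arms and their displacement by $\delta/l$.

```python
import numpy as np

# Sketch of the |l|-armed interference pattern at a fixed radius; amplitudes
# and phase offsets are illustrative assumptions.
l = 3
phi = np.linspace(0.0, 2 * np.pi, 3600, endpoint=False)

def arm_angles(delta):
    """Angles of the |l| intensity maxima for a Gaussian phase offset delta."""
    I = np.abs(np.exp(1j * l * phi) + np.exp(1j * delta)) ** 2
    peaks = []
    for i in np.argsort(I)[::-1]:
        if all(min(abs(phi[i] - p), 2 * np.pi - abs(phi[i] - p)) > np.pi / l
               for p in peaks):
            peaks.append(phi[i])
        if len(peaks) == l:
            break
    return np.sort(peaks)

p0 = arm_angles(0.0)
p1 = arm_angles(2 * np.pi / 3)   # phase offset from a path change of lambda/3
print("arms (delta = 0)      :", np.round(p0, 3))
print("arms (delta = 2*pi/3) :", np.round(p1, 3))
print("arm rotation          :", np.round(p1 - p0, 3), "~ delta/l each")
```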
After the vortex propagates through the focusing waist, the spiral arms will change their shape. Now let us consider how to modulate the rotation of the vortex field. As we change the path length of the Gaussian beam, the spiral arms will rotate around the propagation axis. This is analogous to altering the phase difference between the Laguerre-Gaussian beam and the Gaussian beam [@Macdonald2002]. The spiral arms repeat every $\lambda/|n_L|$ in the LHM, but only rotate fully after propagating $|l|\lambda/|n_L|$. A path length change in the Gaussian beam of $3 \lambda/|n_L|$ will cause the pattern to rotate through $2 \pi$ and $-2 \pi$ in the RHM and the LHM, respectively. The vortex presents an anticlockwise rotation in the RHM, while it exhibits a clockwise rotation in the LHM. Once the vortex field enters the LHM, it will reverse its rotation fashion. Conclusions =========== In conclusion, we have investigated the reversed propagation dynamics of Laguerre-Gaussian beams in LHMs. We have introduced the concept of a negative Gouy-phase shift to describe the propagation of Laguerre-Gaussian beams in LHMs. The negative phase velocity and negative Gouy-phase shift cause an inverse screw of the wave-fronts, a reversed spiral of the Poynting vector, and an inverse rotation of the vortex field. At a RHM-LHM interface, direct calculation from Maxwell’s equations dictates that the wave-vector and the energy flow undergo negative refraction. Consequently, inside the LHM, the screw of the wave-fronts, the spiral of the Poynting vector, and the rotation of the vortex all reverse their direction. We have shown that the Poynting vector of a Laguerre-Gaussian beam in LHMs will present the same fashion of spiral as the counterpart with opposite topological charge in RHMs. Conservation of momentum at the boundary ensures that the tangential component of the wave momentum is conserved. We have found that although the linear momentum reverses its direction, the angular momentum still remains unchanged. Since the photons in a Laguerre-Gaussian beam possess angular momentum, the reversed propagation dynamics may offer new fundamental insights into the nature of LHMs. The authors are sincerely grateful to Professors Wei Hu and Zhenlin Wang for many fruitful discussions. This work was supported by projects of the National Natural Science Foundation of China (Grants Nos. 10535010, 10576012, 10674045, 10775068, and 60538010), the 973 National Major State Basic Research and Development of China (Grant No. G2000077400), and Major State Basic Research Developing Program (Grant No. 2007CB815000). V. G. Veselago, Sov. Phys. Usp. **10**, 509 (1968). J. B. Pendry, Phys. Rev. Lett. **85**, 3966 (2000). D. R. Smith, W. J. Padilla, D. C. Vier, S. C. Nemat-Nasser, S. Schultz, Phys. Rev. Lett. **84**, 4184 (2000). R. A. Shelby, D. R. Smith, S. Schultz, Science **292**, 77 (2001). C. G. Parazzoli, R. B. Greegor, K. Li, B. E. C. Koltenbah, M. Tanielian, Phys. Rev. Lett. **90**, 107401 (2003). A. A. Houck, J. B. Brock, I. L. Chuang, Phys. Rev. Lett. **90**, 137401 (2003). N. Fang, H. Lee, C. Sun, and X. Zhang, Science **308**, 534 (2005). Z. M. Zhang and C. J. Fu, Appl. Phys. Lett. **80**, 1097 (2002). K. Y. Kim, Phys. Rev. E **70**, 047603 (2004). J. A. Kong, B. Wu, and Y. Zhang, Appl. Phys. Lett. **80**, 2084 (2002). P. R. Berman, Phys. Rev. E **66**, 067603 (2002). D. Rozas, C. T. Law, and G. A. Swartzlander Jr., J. Opt. Soc. Am. B **14**, 3054 (1997). D. G. Grier, Nature (London) **424**, 810 (2003). J. E. Curtis and D. G. Grier, Phys. Rev. Lett. **90**, 133901 (2003). I. V. 
Basistiy, M. S. Soskin, and M. V. Vasnetsov, Opt. Commun. **119**, 604 (1995). L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Phys. Rev. A **45**, 8185 (1992). M. J. Padgett and L. Allen, Opt. Commun. **121**, 36 (1995). L. Allen and M. J. Padgett, Opt. Commun. **184**, 67 (2000). A. E. Siegman, *Lasers* (University Science, Mill Valley, 1986). H. Luo, Z. Ren, W. Shu, and F. Li, Phys. Rev. E **75**, 026601 (2007). J. W. Goodman, *Introduction to Fourier Optics* (McGraw-Hill, New York, 1996). I. S. Gradshteyn and I. M. Ryzhik, *Tables of Integrals, Series, and Products* (Academic, San Diego, CA, 1980). M. Lax, W. H. Louisell and W. McKnight, Phys. Rev. A **11**, 1365 (1975). P. Hariharan and P. A. Robinson, J. Mod. Opt. **43**, 219 (1996). S. Feng, H. G. Winful, Opt. Lett. **26**, 485 (2001). A. V. Volyar, V. G. Shvedov, and T. A. Fadeeva, Tech. Phys. Lett. **25**, 203 (1999). R. Loudon, Phys. Rev. A **68**, 013806 (2003). M. J. Padgett, S. M. Barnett and R. Loudon, J. Mod. Opt. **50**, 1555 (2003). J. A. Kong, *Electromagnetic Wave Theory* (EMW Publishing, Cambridge, MA, 2005). B. A. Kemp, T. M. Grzegorczyk and J. A. Kong, Opt. Express **13**, 9280 (2005). J. D. Jackson, *Classical Electrodynamics* (Wiley, New York, 1999). L. D. Landau, E. M. Lifshitz, and L. P. Pitaevskii, *Electrodynamics of Continuous Media* (Pergamon, New York, 1984). B. A. Kemp, J. A. Kong and T. M. Grzegorczyk, Phys. Rev. A **75**, 053810 (2007). J. Courtial and M. J. Padgett, Opt. Commun. **173**, 269 (2000). J. Leach, S. Keen, M. J. Padgett, C. Saunter, G. D. Love, Opt. Express **14**, 11919 (2006). M. J. Padgett, J. Arlt, N. Simpson and L. Allen, Am. J. Phys. **64**, 77 (1996). M. S. Soskin, V. N. Gorshkov, M. V. Vasnetsov, J. T. Malos and N. R. Heckenberg, Phys. Rev. A **56**, 4064 (1997). M. P. MacDonald, K. Volke-Sepulveda, L. Paterson, J. Arlt, W. Sibbett, and K. Dholakia, Opt. Commun. **201**, 21 (2002).
--- abstract: | For a Blaschke product $ B $ of degree $ d $ and $ \lambda $ on $ \partial\mathbb{D} $, let $ \ell_{\lambda} $ be the set of lines joining each pair of distinct preimages in $ B^{-1}(\lambda) $. The envelope of the family of lines $ \{\ell_{\lambda}\}_{\lambda\in\partial\mathbb{D}} $ is called the interior curve associated with $ B $. In 2002, Daepp, Gorkin, and Mortini proved that the interior curve associated with a Blaschke product of degree 3 forms an ellipse. On the other hand, let $ L_{\lambda} $ be the set of lines tangent to $ \partial{\mathbb{D}} $ at the $ d $ preimages $ B^{-1}(\lambda) $; the trace of the intersection points of each two elements in $ L_{\lambda} $ as $ \lambda $ ranges over the unit circle is called the exterior curve associated with $ B $. In 2017, the author proved that the exterior curve associated with a Blaschke product of degree 3 forms a non-degenerate conic. In this paper, for a Blaschke product of degree $ d $, we give some geometrical relations between the interior curve and the exterior curve. author: - 'Masayo Fujimura [^1]' bibliography: - 'fuji-ref.bib' title: Interior and exterior curves of finite Blaschke products --- **Keywords** Complex analysis, Blaschke product, Algebraic curve, Dual curve **MSC** 30C20, 30J10 Introduction ============ A *Blaschke product* of degree $ d $ is a rational function defined by $$\label{eq:B} B(z)=e^{i\theta}\prod_{k=1}^{d}\frac{z-a_k}{1-\overline{a_k} z} \qquad (a_k\in \mathbb{D},\ \theta\in\mathbb{R}).$$ In the case that $ \theta=0 $ and $ B(0)=0 $, $ B $ is called *canonical*. For a Blaschke product of degree $ d $, set $$f_1(z)=e^{-\frac{\theta}{d}i}z, \quad \mbox{and} \quad f_2(z)=\frac{z-(-1)^da_1\cdots a_de^{i\theta}} {1-(-1)^d\overline{a_1\cdots a_de^{i\theta}}z}.$$ Then, the composition $ f_2\circ{B}\circ f_1 $ is a canonical one, and the geometrical properties with respect to preimages of these two Blaschke products $ B $ and $ f_2\circ{B}\circ f_1 $ are the same. Hence, we only need to consider a canonical Blaschke product for the following discussions. Moreover, the derivative of a Blaschke product has no zeros on $ \partial\mathbb{D} $. For instance, see [@Inner]. Hence, there are $ d $ distinct preimages of $ \lambda\in\partial\mathbb{D} $ by $ B $. Let $ z_1,\cdots,z_d $ be the $ d $ distinct preimages of $ \lambda\in\partial\mathbb{D} $ by $ B $, and $ \ell_{\lambda} $ the set of lines joining $ z_j $ and $ z_k $ $ (j\neq k) $. Here, we consider the family of lines $${\cal L}_B=\{\ell_{\lambda}\}_{\lambda\in\partial\mathbb{D}},$$ and the envelope $ I_B $ of $ {\cal L}_B $. We call the envelope $ I_B $ the *interior curve associated with $ B $*. For a Blaschke product of degree $ 3 $, the interior curve forms an ellipse [@daepp] and corresponds to the inner ellipse of Poncelet’s theorem (cf. [@flatto]). \[thm:DGM\] Let $ B $ be a canonical Blaschke product of degree $3$ with zeros $ 0,\,a_1 $, and $ a_2 $. For $ \lambda\in\partial\mathbb{D} $, let $ z_1,z_2$, and $ z_3 $ denote the points mapped to $ \lambda $ under $ B $, and write $$\label{eq:partial} F(z)=\frac{B(z)/z}{B(z)-\lambda} =\frac{m_1}{z-z_1}+\frac{m_2}{z-z_2}+\frac{m_3}{z-z_3}.$$ Then the line joining $ z_1 $ and $ z_2 $ is tangent to the ellipse $ E $ with equation $$\label{eq:DGM} |z-a_1|+|z-a_2|=|1-{\overline{a_1}}a_2|$$ at the point $ \zeta_3=\dfrac{m_1z_2+m_2z_1}{m_1+m_2} $. 
Conversely, every point of $ E $ is the point of tangency with $ E $ of a line that passes through two distinct points $ z_1 $ and $ z_2 $ on the unit circle for which $ B(z_1)=B(z_2) $. This result reminds us of the following classical result in Marden’s book [@marden] that was proved first by Siebeck [@siebeck]. \[thm:marden1\] The zeros $ z_1' $ and $ z_2' $ of the function $$F(z)=\frac{m_1}{z-z_1}+\frac{m_2}{z-z_2}+\frac{m_3}{z-z_3} \ \Big(=\frac{n(z-z_1')(z-z_2')}{(z-z_1)(z-z_2)(z-z_3)}\Big)$$ are the foci of the conic which touches the line segments $ z_1z_2,\ z_2z_3 $ and $ z_3z_1 $ in the points $ \zeta_3,\zeta_1 $, and $ \zeta_2 $ that divide these segments in the ratios $ m_1:m_2,\ m_2:m_3 $ and $ m_3:m_1 $, respectively. If $ n=m_1+m_2+m_3\neq0 $, the conic is an ellipse or hyperbola according as $ nm_1m_2m_3>0 $ or $ <0 $. For a Blaschke product $ B $ of degree 3, let $ z_1,z_2 $, and $ z_3 $ be the preimages of some $ \lambda \in\partial\mathbb{D} $ by $ B $ and let $ F $ be defined as in (\[eq:partial\]); then the following holds ([@daepp Lemma 4]), $$m_1+m_2+m_3=1 \quad \mbox{and} \quad 0<m_j<1 \ \mbox{for} \ j=1,2,3.$$ Theorem \[thm:DGM\] asserts the existence of “the common ellipse” for a given Blaschke product, as long as the given three points $ z_1,z_2 $, and $ z_3 $ are the preimages of some $ \lambda $ on the unit circle. The ellipse is also related to the numerical range of a certain matrix whose eigenvalues $ a_1 $ and $ a_2 $ are the non-zero zeros of $ B $. Gorkin and Skubak studied such relations ([@gorkin]). Moreover, for a canonical Blaschke product $ B $ of degree $ 4 $ with zeros $ 0,a_1,a_2$, and $ a_3 $, the interior curve associated with $ B $ is defined by an equation of total degree 6 with respect to $ z $ and $ {\overline{z}}$. The coefficients of $ z^6 $ and $ {\overline{z}}^6 $ are $$({\overline{a_1}}-{\overline{a_2}})^2({\overline{a_2}}-{\overline{a_3}})^2 ({\overline{a_3}}-{\overline{a_1}})^2 \quad \mbox{and} \quad (a_1-a_2)^2(a_2-a_3)^2(a_3-a_1)^2,$$ respectively, with mutually distinct $ a_1,a_2 $, and $ a_3 $. The file size of a defining equation of this interior curve is about 200 KB as a text file. See also [@fuji-cmft]. Thus, it is not so easy to obtain the defining equation of the interior curve by calculating the envelope for $ B $ of degree greater than 4. Next, we consider the geometrical properties of Blaschke products outside the unit disk. Let $ B $ be a canonical Blaschke product of degree $ d $. For $ \lambda\in\partial\mathbb{D} $, let $ L_{\lambda} $ be the set of $ d $ lines tangent to $ \partial\mathbb{D} $ at the $ d $ preimages of $ \lambda\in\partial\mathbb{D} $ by $ B $. Here, we denote by $ E_B $ the trace of the intersection points of each two elements in $ L_{\lambda} $ as $ \lambda $ ranges over the unit circle. We call the trace $ E_B $ the *exterior curve associated with $ B $*. In [@fuji-circum], we obtained the following. \[thm:algd\] Let $ B $ be a canonical Blaschke product of degree $ d $. Then, the exterior curve $ E_B $ is an algebraic curve of degree at most $ d-1 $. The proof of Theorem \[thm:algd\] is already described in [@fuji-circum], but we will give an outline proof in section \[sec:2\] in order to provide the defining equation of $ E_B $. The following result comes to mind when we pay attention to the degree of $ E_B $. However, we remark that the degree of the exterior curve may degenerate to less than $ d-1 $ (see Remark \[rem:degree\]). 
\[thm:marden2\] The zeros of the function $ F(z)=\sum_{j=1}^{d}\frac{m_j}{z-z_j}\ (m_j\in\mathbb{R^*}), $ are the foci of the curve of class $ d-1 $ which touches each line-segment $ z_jz_k $ in a point dividing the line segment in the ratio $ m_j:m_k $. The main aim of this paper is to explore the relation between the geometrical properties of the interior curve and the exterior curve. As the main theorem in this paper, we will show the following result in section \[sec:3\]. \[thm:dual\] Let $ B $ be a canonical Blaschke product of degree $ d $, and $ E^*_B $ the dual curve of the homogenized exterior curve $ E_B $. Then, the interior curve is given by $$I_B:\ u^*_B(-z)=0,$$ where $ u^*_B(z)=0 $ is a defining equation of the affine part of $ E^*_B $. Moreover, as an application of this theorem, we construct examples of Blaschke products having two ellipses as the interior curve in section \[sec:exp\]. Interior and exterior curves {#sec:2} ============================ Although the proof of Theorem \[thm:algd\] is already described in [@fuji-circum], in order to confirm the method of construction of the defining equation, we provide an outline proof here. Let $ \displaystyle B(z)=z\prod_{k=1}^{d-1}\frac{z-a_k}{1-\overline{a_k} z} \ (a_k\in \mathbb{D}) $, which can be written as follows: $$B(z)= \dfrac{z^d-\sigma_1z^{d-1}+\sigma_2z^{d-2}+\cdots+(-1)^{d-1}\sigma_{d-1}z} {1-\overline{\sigma_1}z+\cdots+(-1)^{d-1}\overline{\sigma_{d-1}}z^{d-1}},$$ where $ \sigma_k $ are the elementary symmetric polynomials on variables $ a_1,\cdots, a_{d-1} $ of degree $ k $ $(k=1,\cdots,d-1) $. Let $ \sigma_0=1 $ and $ \sigma_d=0 $. Eliminating $ \lambda $ from $ B(z_1)=B(z_2)=\lambda $, we have $$\begin{aligned} & \Big(z_1^d-\sigma_1z_1^{d-1}+\cdots+(-1)^{d-1}\sigma_{d-1}z_1\Big) \Big(1-\overline{\sigma_1}z_2+\cdots +(-1)^{d-1}\overline{\sigma_{d-1}}z_2^{d-1}\Big)\\ & \qquad -\Big(z_2^d-\sigma_1z_2^{d-1}+\cdots+(-1)^{d-1}\sigma_{d-1}z_2\Big) \Big(1-\overline{\sigma_1}z_1+\cdots +(-1)^{d-1}\overline{\sigma_{d-1}}z_1^{d-1}\Big)\\ & = \sum_{j=1}^d\sum_{k=1}^d (-1)^{j+k}\overline{\sigma_{d-j}}\sigma_{d-k} (z_1^kz_2^{d-j}-z_1^{d-j}z_2^k) \\ & =\sum_{N=1}^d\sum_{K=0}^{N-1} (-1)^{d-N+K}(\sigma_{d-N}\overline{\sigma_{K}} -\overline{\sigma_{N}}\sigma_{d-K}) (z_1z_2)^K(z_1^{N-K}-z_2^{N-K}) \\ &= (z_1-z_2)\sum_{N=1}^d\sum_{K=0}^{N-1} (-1)^{d-N+K}(\sigma_{d-N}\overline{\sigma_{K}} -\overline{\sigma_{N}}\sigma_{d-K}) (z_1z_2)^K\\ &\quad \times\Big((z_1+z_2)^{N-K-1}-\gamma_1z_1z_2(z_1+z_2)^{N-K-3}+\cdots +\gamma_M(z_1z_2)^M(z_1+z_2)^R\Big)=0, \end{aligned}$$ where $ R $ is the remainder after dividing $ N-K-1 $ by $ 2 $, $$M=\frac{N-K-1-R}{2} , \qquad \gamma_1=N-K-2,$$ and $ \gamma_M $ is a non-zero coefficient. The intersection point $ z $ of the two tangent lines $ l_1 $ and $ l_2 $ satisfies $$\label{eq:interd} z_1z_2=\dfrac{z}{{\overline{z}}} \quad \mbox{and} \quad z_1+z_2=\dfrac{2}{{\overline{z}}},$$ since each $ l_k\ (k=1,2) $ is the line tangent to the unit circle at the point $ z_k $. Note that the intersection point is the point at infinity if and only if $ z_1+z_2=0 $. Hence, we have $$\begin{aligned} \notag & \sum_{N=1}^d\sum_{K=0}^{N-1} (-1)^{d-N+K}\big(\sigma_{d-N}\overline{\sigma_{K}} -\overline{\sigma_{N}}\sigma_{d-K}\big) z^K{\overline{z}}^{d-N} \\ \label{eq:algd} & \qquad \times \Big( 2^{N-K-1}-2^{N-K-3}\gamma_1z{\overline{z}}+\cdots + 2^R\gamma_Mz^M{\overline{z}}^M\Big)=0. \end{aligned}$$ This equality gives a defining equation of $ E_B $ with degree at most $ d-1 $. 
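The key substitution (\[eq:interd\]) is easy to verify numerically: the intersection of the tangent lines to the unit circle at two preimages of the same $ \lambda $ indeed satisfies $ z_1z_2=z/\overline{z} $ and $ z_1+z_2=2/\overline{z} $. The sketch below checks this for an arbitrarily chosen canonical Blaschke product of degree $ 3 $ (the zeros are assumptions, not values taken from this paper).

```python
import numpy as np

# Sanity check of relation (eq:interd) for an arbitrarily chosen canonical
# Blaschke product of degree 3 (a1, a2 are assumptions).
a1, a2 = 0.4 + 0.2j, -0.3 + 0.5j
s1, s2 = a1 + a2, a1 * a2

def preimages(lam):
    # roots of z^3 - (s1 + lam*conj(s2)) z^2 + (s2 + lam*conj(s1)) z - lam
    return np.roots([1.0, -s1 - lam * np.conj(s2), s2 + lam * np.conj(s1), -lam])

def tangent_intersection(w1, w2):
    # tangent to the unit circle at w: Re(conj(w) z) = 1
    A = np.array([[w1.real, w1.imag], [w2.real, w2.imag]])
    x, y = np.linalg.solve(A, np.ones(2))
    return x + 1j * y

max_err = 0.0
for lam in np.exp(1j * np.linspace(0, 2 * np.pi, 60, endpoint=False)):
    z = preimages(lam)
    for i in range(3):
        for j in range(i + 1, 3):
            if abs(z[i] + z[j]) < 0.1:       # tangents nearly parallel
                continue
            zeta = tangent_intersection(z[i], z[j])
            max_err = max(max_err,
                          abs(z[i] * z[j] - zeta / np.conj(zeta)),
                          abs(z[i] + z[j] - 2 / np.conj(zeta)))
print("max deviation from (eq:interd):", max_err)   # rounding-error level
```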
When the degree is low, we can describe the exterior curve concretely, as follows. Let $ B $ be a canonical Blaschke product of degree $ d $ with zeros $ 0 $, $a_1,\cdots $, $a_{d-1} \in\mathbb{D} $. - For a canonical Blaschke product of degree $ 2 $ with zeros $ 0 $ and $ a_1 (\neq0) $, the exterior curve is the line $ {\overline{a_1}}z+a_1{\overline{z}}-2=0 $. - For $ d=3 $, the exterior curve is either an ellipse, a circle, a parabola, or a hyperbola. $$\label{env3} {\overline{a_1}}{\overline{a_2}}z^2 -(|a_1a_2|^2-|a_1+a_2|^2+1)z{\overline{z}}+a_1a_2{\overline{z}}^2 -2({\overline{a_1}}+{\overline{a_2}})z-2(a_1+a_2){\overline{z}}+4=0.$$ - For $ d=4 $, the defining equation of the exterior curve is written as $$\begin{aligned} \notag & {\overline{\sigma_{3}}}z^3+({\sigma_{1}}{\overline{\sigma_{2}}}-{\sigma_{2}}{\overline{\sigma_{3}}}-{\overline{\sigma_{1}}})z^2{\overline{z}}-({\sigma_{1}}-{\sigma_{2}}{\overline{\sigma_{1}}}+{\sigma_{3}}{\overline{\sigma_{2}}})z{\overline{z}}^2+{\sigma_{3}}{\overline{z}}^3 \\ \label{eq:env4gene} & -2{\overline{\sigma_{2}}}z^2-(2{\sigma_{1}}{\overline{\sigma_{1}}}-2{\sigma_{3}}{\overline{\sigma_{3}}}-4)z{\overline{z}}-2{\sigma_{2}}{\overline{z}}^2+4{\overline{\sigma_{1}}}z+4{\sigma_{1}}{\overline{z}}-8=0, \end{aligned}$$ where $ \sigma_{k} $ are the elementary symmetric polynomials on three variables $ a_1,a_2,a_3 $ of degree $ k \ (k=1,2,3) $, i.e. $${\sigma_{1}}=a_1+a_2+a_3,\ {\sigma_{2}}=a_1a_2+a_1a_3+a_2a_3 \quad \mbox{and}\quad {\sigma_{3}}=a_1a_2a_3.$$ Even if we use symbolic computation systems, it is hard to calculate the defining equation of the interior curve for $ d=5 $. However, we can obtain the exterior curve as follows. For a canonical Blaschke product of degree 5, the defining equation of the exterior curve is written as $$\begin{aligned} \notag & {\overline{\sigma_{4}}}z^4+({\sigma_{1}}{\overline{\sigma_{3}}}-{\overline{\sigma_{2}}}-{\sigma_{2}}{\overline{\sigma_{4}}})z^3{\overline{z}}-({\sigma_{1}}{\overline{\sigma_{1}}}-{\sigma_{2}}{\overline{\sigma_{2}}}+{\sigma_{3}}{\overline{\sigma_{3}}}-{\sigma_{4}}{\overline{\sigma_{4}}}-1)z^2{\overline{z}}^2 \\ \notag & +({\sigma_{3}}{\overline{\sigma_{1}}}-{\sigma_{4}}{\overline{\sigma_{2}}}-{\sigma_{2}})z{\overline{z}}^3+{\sigma_{4}}{\overline{z}}^4 -2{\overline{\sigma_{3}}}z^3+2(2{\overline{\sigma_{1}}}-{\sigma_{1}}{\overline{\sigma_{2}}}+{\sigma_{3}}{\overline{\sigma_{4}}})z^2{\overline{z}}\\ \notag & -2({\sigma_{2}}{\overline{\sigma_{1}}}-2{\sigma_{1}}-{\sigma_{4}}{\overline{\sigma_{3}}})z{\overline{z}}^2 -2{\sigma_{3}}{\overline{z}}^3 +4{\overline{\sigma_{2}}}z^2+4({\sigma_{1}}{\overline{\sigma_{1}}}-{\sigma_{4}}{\overline{\sigma_{4}}}-3)z{\overline{z}}\\ & +4{\sigma_{2}}{\overline{z}}^2 -8{\overline{\sigma_{1}}}z-8{\sigma_{1}}{\overline{z}}+16=0,\end{aligned}$$ where $ \sigma_{k} $ are the elementary symmetric polynomials on four variables $ a_1,\cdots,a_4 $ of degree $ k \ (k=1,\cdots,4) $. \[rem:degree\] For $ d=4 $, the degree of the exterior curve is not greater than $ 2 $ if and only if the Blaschke product has a double zero point at the origin and the sum of the other zero points equals 0. In contrast, the degree of the defining equation of the exterior curve $ E_B $ is always $ 4 $ for every $ B $ of degree 5. Even though we can obtain the defining equation of the exterior curve concretely for $ d\geq 6$, we refrain from writing it down here, because the size of the equation is relatively large. 
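Equation (\[env3\]) can be checked numerically: sampling $ \lambda $ on the unit circle, computing the three preimages, and forming the tangent-line intersections $ 2z_1z_2/(z_1+z_2) $ (cf. (\[eq:interd\])), every such point should satisfy (\[env3\]) up to rounding error. The zeros used in the sketch below are an arbitrary assumption.

```python
import numpy as np

# Numerical check of the degree-3 equation (env3); a1, a2 are assumptions.
a1, a2 = 0.35 - 0.1j, 0.2 + 0.55j
s1, s2 = a1 + a2, a1 * a2

def env3(z):
    zb = np.conj(z)
    return (np.conj(s2) * z**2 - (abs(s2)**2 - abs(s1)**2 + 1) * z * zb
            + s2 * zb**2 - 2 * np.conj(s1) * z - 2 * s1 * zb + 4)

worst = 0.0
for lam in np.exp(1j * np.linspace(0, 2 * np.pi, 100, endpoint=False)):
    z1, z2, z3 = np.roots([1.0, -s1 - lam * np.conj(s2),
                           s2 + lam * np.conj(s1), -lam])   # preimages of lam
    for p, q in ((z1, z2), (z1, z3), (z2, z3)):
        if abs(p + q) < 1e-6:                # intersection at infinity
            continue
        worst = max(worst, abs(env3(2 * p * q / (p + q))))
print("max |env3| over sampled exterior-curve points:", worst)
```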
Proof of Theorem \[thm:dual\] {#sec:3} ============================= The affine part of the projective space $ \mathbb{P}_2(\mathbb{R}) $ can be identified with the complex plane $ \mathbb{C} $. Recall that the dual curve $ C^* $ of $ C\subset \mathbb{P}_2(\mathbb{R}) $ is defined by $$C^*=\{L\in\mathbb{P}_2^*(\mathbb{R})\,;\, L \mbox{ is a line tangent to } C \mbox{ at some } p\in C\}.$$ Let $ z' $ and $ z'' $ be two preimages of some $ \lambda\in\partial\mathbb{D} $, and let $ \ell $ be the line joining $ z' $ and $ z''$. Let $ \zeta $ be the intersection point of the two lines tangent to the unit circle at the points $ z' $ and $ z'' $ (cf. Figure \[pic:pp\]). Then the point $ \zeta $ is the pole and the line $ \ell $ is its polar with respect to the unit circle. ![The point $ \zeta $ is the pole and the line $ \ell $ is its polar with respect to the unit circle.[]{data-label="pic:pp"}](Fig2.pdf){width="0.3\linewidth"} Then, the equation of $ \ell $ is written as $$\label{eq:ell} z+z'z'' {\overline{z}}=z'+z''.$$ The intersection point $ \zeta $ satisfies $$\label{eq:pole} z'+z''=\frac2{\overline{\zeta}} \mbox{\qquad and\qquad } z'z''=\frac{\zeta}{\overline{\zeta}}.$$ Substituting (\[eq:pole\]) into (\[eq:ell\]), the line $ \ell $ is written in terms of $ \zeta $ as follows, $$\overline{\zeta}z+\zeta\overline{z}=2.$$ Substituting $ \zeta=\alpha+\beta i $ and $ z=x+yi $ into the above equality again, the line $ \ell $ is expressed as the line on the real $ xy $-plane, $ \alpha x+\beta y-1=0. $ Therefore the line $ \ell\subset\mathbb{C}\subset \mathbb{P}_2(\mathbb{R}) $ corresponds to the point $ (-\alpha:-\beta:1) \in\mathbb{P}_2^*(\mathbb{R}) $, and this point corresponds to the point $ -\zeta\in\mathbb{C} $. Hence, the assertion is obtained from the fact that the family of all tangent lines of the interior curve $ I_B $ coincides with the family of lines $ \mathcal{L}_B=\{ \ell_{\lambda}\}_{\lambda} $. Equivalently, the converse also holds. Let $ B $ be a canonical Blaschke product of degree $ d $, and $ I^*_B $ be the dual curve of the homogenized interior curve $ I_B $. Then, the exterior curve is given by $$E_B:\ v^*_B(-z)=0,$$ where $ v^*_B(z)=0 $ is a defining equation of the affine part of $ I^*_B $. As we mentioned in section $ 1 $, for $ d=3 $, the ellipse corresponds to the inner ellipse of Poncelet’s theorem. For $ d\geq 4 $, Theorems \[thm:algd\] and \[thm:dual\] provide the defining equation of the envelope of the family of lines $ \{\ell_{\lambda}\}_{\lambda\in\partial\mathbb{D}} $, where $ \ell_\lambda $ is the set of all segments joining each pair of distinct preimages in $ B^{-1}(\lambda) $. Here, we remark that $ \ell_\lambda $ includes diagonals of the $ d $-sided polygon with vertices at $ B^{-1}(\lambda) $. In general, the defining equation of this envelope is not always reducible, but the “outermost part” of the curve gives the so-called Poncelet curve associated with the Blaschke product. For instance, see [@mirman] and [@daepp2015 Definition 5.1 and Theorem 5.2] for details about definitions and related topics of the Poncelet curve. Examples {#sec:exp} ======== For a Blaschke product $ B $ of degree $ 4 $, the interior curve $ I_B $ is an ellipse if and only if $ B $ is a composition of two Blaschke products of degree $ 2 $. See [@fuji-circum], and also see [@gorkin2] for the relationship between this result and the numerical range of shift operators. 
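Before turning to the degree-5 examples, the interior-curve side of this duality can be illustrated numerically in the degree-3 case: the residues $ m_j $ of (\[eq:partial\]) sum to $ 1 $, and the claimed tangency point $ \zeta_3 $ of Theorem \[thm:DGM\] lies on the ellipse (\[eq:DGM\]). The zeros used below are an arbitrary assumption.

```python
import numpy as np

# Numerical illustration of the degree-3 interior ellipse (Theorem thm:DGM),
# the dual picture of the exterior conic.  a1, a2 are assumptions.
a1, a2 = 0.5 + 0.1j, -0.2 + 0.4j
s1, s2 = a1 + a2, a1 * a2
N = np.array([1.0, -s1, s2, 0.0])               # numerator  z^3 - s1 z^2 + s2 z
D = np.array([np.conj(s2), -np.conj(s1), 1.0])  # denominator of B

def Bprime(z):                                  # derivative of B = N/D
    return (np.polyval(np.polyder(N), z) * np.polyval(D, z)
            - np.polyval(N, z) * np.polyval(np.polyder(D), z)) / np.polyval(D, z)**2

level = abs(1 - np.conj(a1) * a2)               # right-hand side of (eq:DGM)
for lam in np.exp(1j * np.linspace(0.1, 2 * np.pi, 6, endpoint=False)):
    z1, z2, z3 = np.roots(N - lam * np.r_[0.0, D])     # preimages of lam
    m = np.array([lam / (z * Bprime(z)) for z in (z1, z2, z3)])   # residues of F
    zeta3 = (m[0] * z2 + m[1] * z1) / (m[0] + m[1])    # claimed tangency point
    print(f"sum m_j = {m.sum().real:.6f}, "
          f"|zeta3-a1|+|zeta3-a2| = {abs(zeta3 - a1) + abs(zeta3 - a2):.6f}, "
          f"|1-conj(a1)a2| = {level:.6f}")
```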
Here, as an application of Theorem \[thm:dual\], we construct a Blaschke product of degree 5 whose interior curve is a union of two ellipses. Let $$B_{a,b}(z)=z\frac{z^2-a}{1-a z^2}\frac{z^2-b}{1-b z^2} \qquad (0<a,b<1),$$ where $ a,\,b $ satisfy the equality $ a^3b^3-2a^2b^2-(b^2+a^2)+3ab=0 $. In this case, $ E_{B_{a,b}} $ is given as follows, $$\begin{aligned} \notag E_{B_{a,b}} : & \Big(a(b+1)^2x^2+a(b-1)^2y^2-4b\Big) \\ \label{eq:ex1} & \Big((a^2b^3-ab^2+2b^2+3b-a)x^2+(a^2b^3-ab^2-2b^2+3b-a)y^2 -4b\Big)=0, \end{aligned}$$ where $ z=x+iy $. Therefore, the exterior curve is a union of two ellipses for every $ B_{a,b} $. Then, the interior curve is also a union of two ellipses because the dual curve of an irreducible conic is also an irreducible conic and the interior curve is a compact curve in $ \mathbb{D} $. See Figure \[pic:elip-elip1\]. In fact, $ I_{B_{a,b}} $ is given by $$I_{B_{a,b}} : \Big(\frac{4b}{a(b+1)^2}x^2+\frac{4b}{a(b-1)^2}y^2-1\Big) \Big(\frac{4a}{b(a+1)^2}x^2+\frac{4a}{b(a-1)^2}y^2-1\Big)=0.$$ The two foci are $ \pm\sqrt{a} $ (for the first factor) and $ \pm\sqrt{b} $ (for the second factor). Let $$B_c(z)=z\Big(\frac{z-\frac14}{1-\frac14 z}\Big)^2 \Big(\frac{z-c}{1-c z}\Big)^2 \qquad (0<c<1),$$ where $ c $ is a solution of $ c^3-72c^2+48c-4=0 $. There are two possibilities for $ c $: $$c\approx 0.0976036,\quad \mbox{or} \quad c\approx 0.5745591.$$ In this case, $ E_{B_c} $ is given as follows, $$\begin{aligned} \notag E_{B_c} : & \Big(4z^2+(-225{\overline{z}}c+8{\overline{z}}-64)z+4{\overline{z}}^2 -64{\overline{z}}+256\Big) \\ \label{eq:elipelip} & \Big(16c^2z^2+(-257{\overline{z}}c^2+(272{\overline{z}}-64)c-64{\overline{z}})z +16{\overline{z}}^2c^2-64{\overline{z}}c+64\Big)=0. \end{aligned}$$ In the case of $ c\approx 0.0976036 $, (\[eq:elipelip\]) is a union of two ellipses. See Figure \[pic:elip-elip\]. In the other case, (\[eq:elipelip\]) is a union of an ellipse and a hyperbola. See Figure \[pic:elip-hyp\]. In any case, the interior curve should be a union of two ellipses. In fact, the interior curve is the union of the following two circles, $$\Big|z-\frac14\Big|=\frac{15}{16}\sqrt{c}, \quad \mbox{and}\quad \big|z-c\big|=\frac18(17c-8).$$ [^1]: This work was partially supported by JSPS KAKENHI Grant Number JP15K04943.
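The factorization (\[eq:ex1\]) lends itself to a direct numerical check: choose $ a $, solve the constraint for $ b $, sample $ \lambda $ on the unit circle, form the pairwise tangent-line intersections $ 2z_iz_j/(z_i+z_j) $ of the five preimages, and verify that each point annihilates one of the two factors. In the sketch below the value $ a=0.5 $ is an arbitrary assumption.

```python
import numpy as np

# Numerical check of the factorization (eq:ex1); a = 0.5 is an assumption,
# b is computed from a^3 b^3 - 2 a^2 b^2 - (a^2 + b^2) + 3 a b = 0.
a = 0.5
b = next(r.real for r in np.roots([a**3, -(2*a**2 + 1), 3*a, -a**2])
         if abs(r.imag) < 1e-9 and 0 < r.real < 1)

def factors(zeta):
    x, y = zeta.real, zeta.imag
    F1 = a*(b + 1)**2 * x**2 + a*(b - 1)**2 * y**2 - 4*b
    F2 = ((a**2*b**3 - a*b**2 + 2*b**2 + 3*b - a) * x**2
          + (a**2*b**3 - a*b**2 - 2*b**2 + 3*b - a) * y**2 - 4*b)
    return abs(F1), abs(F2)

worst = 0.0
for lam in np.exp(1j * np.linspace(0, 2*np.pi, 40, endpoint=False)):
    # preimages: roots of z(z^2 - a)(z^2 - b) - lam (1 - a z^2)(1 - b z^2) = 0
    z = np.roots([1.0, -lam*a*b, -(a + b), lam*(a + b), a*b, -lam])
    for i in range(5):
        for j in range(i + 1, 5):
            if abs(z[i] + z[j]) < 1e-6:      # intersection at infinity
                continue
            worst = max(worst, min(factors(2*z[i]*z[j]/(z[i] + z[j]))))
print(f"b = {b:.6f}; max over E_B of min(|F1|, |F2|) = {worst:.2e}")
```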
--- author: - | Fritz Colonius\ Institut für Mathematik, Universität Augsburg, Augsburg, Germany - | João A. N. Cossich and Alexandre J. Santana\ Departamento de Matemática, Universidade Estadual de Maringá\ Maringá, Brazil title: 'Controllability properties and invariance pressure for linear discrete-time systems' --- **Abstract**[^1]**.** For linear control systems in discrete time, controllability properties are characterized. In particular, a unique control set with nonvoid interior exists and it is bounded in the hyperbolic case. Then a formula for the invariance pressure of this control set is proved. **Keywords.** controllability, control sets, invariance pressure, invariance entropy, discrete-time control systems **MSC 2010.** 93B05, 37B40, 94A17 Introduction ============ Invariance pressure for subsets of the state space generalizes invariance entropy of deterministic control systems by adding potentials on the control range. We consider control systems in discrete time of the form$$x_{k+1}=F(x_{k},u_{k}),k\in\mathbb{N}_{0}=\{0,1,\ldots\},$$ where $F:M\times U\rightarrow M$ is smooth for a smooth manifold $M$ and a compact control range $U\subset\mathbb{R}^{m}$. The invariance entropy $h_{inv}(K,Q)$ determines the average data rate needed to keep the system in $Q$ (forward in time) when it starts in $K\subset Q$. Basic references for invariance entropy are Nair, Evans, Mareels, and Moran [@NEMM04] and the monograph Kawan [@Kawa13], where the relation to minimal data rates is also explained. With some analogy to classical constructions for dynamical systems, invariance pressure adds continuous functions $f:U\rightarrow \mathbb{R}$ called potentials giving a weight to the control values. For continuous-time systems, invariance entropy of hyperbolic control sets has been analyzed in Kawan [@Kawa11b] and Kawan and Da Silva [@KawaDS16]. Kawan and Da Silva [@KawaDS18] and [@KawaDS19] analyze invariance entropy of partially hyperbolic controlled invariant sets and chain control sets. Huang and Zhong [@HuanZ18] show dimension-like characterizations of invariance entropy. Measure-theoretic versions of invariance entropy have been considered in Colonius [@Colo18] and Wang, Huang, and Sun [@WangHS19]. Invariance pressure has been analyzed in Colonius, Cossich, and Santana [@Cocosa1; @Cocosa2; @Cocosa3]. In Zhong and Huang [@ZHuag19] it is shown that several generalized notions of invariance pressure fit into the dimension-theoretic framework due to Pesin. The main results of the present paper are given for linear control systems $x_{k+1}=Ax_{k}+Bu_{k}$ with an invertible matrix $A$ and control values $u_{k}$ in a compact neighborhood $U$ of the origin in $\mathbb{R}^{m}$. It is shown that a unique control set $D$ with nonvoid interior exists if and only if the system without control constraints is controllable (i.e., the pair $(A,B)$ is controllable), and $D$ is bounded if and only if $A$ is hyperbolic. In this case a formula for the invariance pressure of compact subsets $K$ in $D$ is presented. The contents of this paper are as follows: Section \[section2\] collects general properties of control sets for nonlinear discrete-time systems. Section \[section3\] characterizes controllability properties of linear discrete-time systems with control constraints, and Section \[section4\] shows that here a unique control set with nonvoid interior exists and that it is bounded if and only if the uncontrolled system is hyperbolic. 
Section \[section5\] introduces invariance entropy and as a generalization total invariance pressure where potentials on the product of the state space and the control range are allowed. For linear systems, Section \[section6\] first derives an upper bound for the total invariance pressure and a lower bound for the invariance pressure. Combined they yield a formula for the invariance pressure in the hyperbolic case. Control sets for nonlinear systems\[section2\] ============================================== In this section we introduce some notation and prove several properties of control sets with nonvoid interior for nonlinear discrete-time systems. They are analogous to properties of systems in continuous time, however, the statements are a bit more involved, since one has to consider in addition to the interior of control sets their transitivity sets. A discussion of various slightly differing versions in the literature is contained in Colonius [@Colo18 Section 5]. We consider control systems of the form$$x_{k+1}=F(x_{k},u_{k}),k\in\mathbb{N}_{0}, \label{nonlinear}$$ on a $C^{\infty}$-manifold $M$ of dimension $d$ endowed with a corresponding metric. For an initial value $x_{0}\in M$ at time $k=0$ and control $u=(u_{k})_{k\geq0}\in\mathcal{U}:=U^{\mathbb{N}_{0}}$ we denote the solutions by $\varphi(k,x_{0},u),k\in\mathbb{N}_{0}$. Assume that the set of control values $U\subset\mathbb{R}^{m}$ satisfies $U\subset\overline{\mathrm{int}U}$. Let $\tilde{U}$ be an open set containing $\overline{U}$ and suppose that the map $F:M\times\tilde{U}\rightarrow M$ is a $C^{\infty}$-map. For $x\in M$ and $k\in\mathbb{N}$ the reachable set $\mathbf{R}_{k}(x)$ and the controllable set $\mathbf{C}_{k}(x)$ are$$\begin{aligned} \mathbf{R}_{k}(x) & :=\{y\in M\left\vert \exists u\in\mathcal{U}:y=\varphi(k,x,u)\right. \},\\ \mathbf{C}_{k}(x) & :=\{y\in M\left\vert \exists u\in\mathcal{U}:\varphi(k,y,u)=x\right. \},\end{aligned}$$ resp., and $\mathbf{R}(x)$ and $\mathbf{C}(x)$ are the respective unions over all $k\in\mathbb{N}$. The system is called accessible in $x$ if$$\mathrm{int}\mathbf{R}(x)\not =\varnothing\text{ and }\mathrm{int}\mathbf{C}(x)\not =\varnothing. \label{access0}$$ Accessibility in $x$ certainly holds if$$\mathrm{int}F(x,U)\not =\varnothing\text{ and }\mathrm{int}\{y\in M\left\vert x\in F(y,U)\right. \}\not =\varnothing.$$ Next we specify maximal subsets of complete approximate controllability. \[Definition3.1\]For system of the form (\[nonlinear\]) a nonvoid subset $D\subset M$ is called a control set if it is maximal with (i) $D\subset \overline{\mathbf{R}(x)}$ for all $x\in D$, (ii) for every $x\in D$ there is $u\in\mathcal{U}$ with $\varphi(k,x,u)\in D$ for all $k\in\mathbb{N}$. The transitivity set $D_{0}$ of $D$ is $D_{0}:=\{z\in D\left\vert z\in \mathrm{int}\mathbf{C}(z)\right. \}$. We define for $k\geq1$ a $C^{\infty}$-map$$G_{k}:M\times U^{k}\rightarrow M,G_{k}(x,u):=\varphi(k,x,u).$$ Following Wirth [@Wirth98] we say that a pair $(x,u)\in M\times \mathrm{int}U^{k}$ is regular if $\mathrm{rank}\frac{\partial G_{k}}{\partial u}(x,u)=d$ (clearly, this implies $mk\geq d$). For $x\in M$ and $k\in \mathbb{N}$ the regular reachable set and the regular controllable set at time $k$ are $$\begin{aligned} \mathbf{\hat{R}}_{k}(x) & :=\left\{ \varphi(k,x,u)\left\vert (x,u)\text{ is regular}\right. \right\} ,\\ \mathbf{\hat{C}}_{k}(x) & :=\left\{ y\in M\left\vert x=\varphi(k,y,u)\text{ with }(y,u)\text{ regular}\right. 
\right\} ,\end{aligned}$$ resp., and the regular reachable set $\mathbf{\hat{R}}(x)$ and controllable set $\mathbf{\hat{C}}(x)$ are given by the respective union over all $k\in\mathbb{N}$. It is clear that $\mathbf{\hat{R}}(x)$ and $\mathbf{\hat{C}}(x)$ are open for every $x$. Accessibility condition (\[access0\]) implies that there is $k_{0}\in\mathbb{N}$ such that for all $k\geq k_{0}$ one has $\mathrm{int}\mathbf{R}_{k}(x)\not =\varnothing$ and $$\mathbf{R}_{k}(x)\subset\overline{\{\varphi(k,x,u)\in\mathrm{int}\mathbf{R}_{k}(x)\left\vert u\in\mathrm{int}U^{k}\right. \}}.$$ By Sard’s Theorem the set of points $\varphi(k,x,u)\in\mathbf{R}_{k}(x)$ such that $(x,u)$ is not regular has Lebesgue measure zero. \[proposition\_transitivity\]Assume that accessibility condition (\[access0\]) holds for all $x\in M$. Then for every control set $D$ with nonvoid interior the transitivity set $D_{0}$ is nonvoid and dense in $\mathrm{int}D$. For $x\in\mathrm{int}D$ there is $k_{0}\in\mathbb{N}$ such that the reachable set $\mathbf{R}_{k}(x)$ at time $k$ has nonvoid interior for all $k\geq k_{0}$. There is $k\geq k_{0}$ with $\mathbf{R}_{k}(x)\cap\mathrm{int}D\not =\varnothing$, hence we may assume that there is $y:=\varphi (k,x,u)\in\mathrm{int}\mathbf{R}_{k}(x)\cap\mathrm{int}D$. Then, by Sard’s Theorem, it follows that there is a point $y=\varphi(k,x,u)\in\mathrm{int}D$ with some regular $(x,u)$, i.e., $y\in\mathrm{int}D\cap\mathbf{\hat{R}}_{k}(x)$. Then $x\in\mathrm{int}\mathbf{C}(y)$. Let $V\subset\mathrm{int}\mathbf{C}(y)$ be a neighborhood of $x$. Since $x\in\mathrm{int}D$ and $D\subset\overline{\mathbf{R}(y)}$, there is $z\in V\cap\mathbf{R}(y)\subset D$ and thus $y\in\mathbf{C}(z)$. By construction, the point $z\in D$ satisfies $z\in\mathrm{int}\mathbf{C}(y)\subset\mathrm{int}\mathbf{C}(z)$, hence it is in the transitivity set of $D$ and $D_{0}$ is dense in $\mathrm{int}D$. In the general context of semigroups of continuous maps (and with slightly different notation), Patrão and San Martin [@PatSM07 Propositions 4.8 and 4.10] show that the transitivity set $D_{0}$ is dense in a control set $D$ with nonvoid interior provided that $D_{0}\not =\varnothing$. We note the following further results for control sets. \[lemma1\] Assume that $D$ is a control set for a control system which is accessible for all $x\in M$. Then its transitivity set $D_{0}$ satisfies $D_{0}\subset\mathbf{R}(x)$ for all $x\in D$. Let $x\in D$ and $x_{0}\in D_{0}$. By approximate controllability of $D$ and $x_{0}\in\mathrm{int}\mathbf{C}(x_{0})$, there are $k\geq1$ and $u\in \mathcal{U}$ with $\varphi(k,x,u)\in\mathrm{int}\mathbf{C}(x_{0})$. Hence there are $l\geq1$ and $v\in\mathcal{U}$ such that $\varphi(l,\varphi (k,x,u),v)=x_{0}$. Therefore $x\in\mathbf{C}(x_{0})$, that is, $x_{0}\in\mathbf{R}(x)$. \[lem1.1.2\] Assume that $D$ is a control set for a control system, which is accessible for all $x\in M$. Let the control range $U\subset\mathbb{R}^{m}$ be a compact neighborhood of the origin. If the transitivity set $D_{0}$ of $D$ is nonvoid, then for all $x_{0}\in D_{0}$ $$D=\overline{\mathbf{R}(x_{0})}\cap\mathbf{C}(x_{0}),$$ and, in particular, the set $D$ is measurable. Let $x_{0}\in D_{0}$. Note that $D\subset\overline{\mathbf{R}(x_{0})}$ by definition of control set. Moreover, given $x\in D$, by Proposition \[lemma1\] $x_{0}\in\mathbf{R}(x)$, that is, $x\in\mathbf{C}(x_{0})$, which shows that $D\subset\mathbf{C}(x_{0})$. Hence $D\subset D^{\prime}:=\overline{\mathbf{R}(x_{0})}\cap\mathbf{C}(x_{0})$. 
On the other hand, it is not difficult to see that $D^{\prime}$ is a set of approximate controllability. By the maximality of $D$ we have $D^{\prime}\subset D$, which concludes the proof. The following proposition shows that a trajectory starting in the interior of a control set $D$ and remaining in it up to a positive time must actually remain in the interior of $D$. \[proposition\_in\]Assume that the maps $F(\cdot,u)$ are local diffeomorphisms on $M$ for all $u\in U$. Let $x$ be in the interior of a control set $D$ and suppose that for some $\tau\in\mathbb{N}$ and $u\in\mathcal{U}$ one has $\varphi(k,x,u)\in D,k\in\{1,\dotsc,\tau\}$. Then $\varphi(k,x,u)\in\mathrm{int}D,k\in\{1,\dotsc,\tau\}$. Suppose that $y:=\varphi(k,x,u)\in D\cap\partial D$ for some $k\in \{1,\dotsc,\tau\}$. By the assumption on the maps $F(\cdot,u)$ and $x\in\mathrm{int}D$, there is a neighborhood $N_{0}(y)$ of $y$ with $N_{0}(y)=\varphi(k,N(x),u)$ for a neighborhood $N(x)\subset D$ of $x$. Since $y\in D$, there are a control $v\in\mathcal{U}$ and $k_{0}\in\mathbb{N}$ with $\varphi(k_{0},y,v)\in\mathrm{int}D$. Then there is a neighborhood $N_{1}(y)$ with $\varphi(k_{0},N_{1}(y),u)\subset\mathrm{int}D$. By the maximality property of control sets it follows that the neighborhood $N_{0}(y)\cap N_{1}(y)$ of $y$ is contained in $D$, contradicting $y\in\partial D$. Controllability properties of linear systems\[section3\] ======================================================== Next we consider linear control systems in $\mathbb{K}^{d}$, $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$, of the form $$x_{k+1}=Ax_{k}+Bu_{k},\ \ u_{k}\in U\subset\mathbb{R}^{m}, \label{linsys}$$ where $A\in Gl(d,\mathbb{K})$ and $B\in\mathbb{K}^{d\times m}$ and the control range $U$ is a compact convex neighborhood of $0\in\mathbb{K}^{m}$ with $U=\overline{\mathrm{int}U}$. For initial value $x\in\mathbb{K}^{d}$ and control $u\in\mathcal{U}=U^{\mathbb{N}_{0}}$ the solutions of (\[linsys\]) are given by $$\varphi(k,x,u)=A^{k}x+\sum_{i=0}^{k-1}A^{k-1-i}Bu_{i},i\in\mathbb{N}_{0}.$$ Where convenient, we also use the notation $\varphi_{k,u}:=\varphi (k,\cdot,u):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$. Note the following observation. For $x\in\mathbb{K}^{d}$ the reachable set $\mathbf{R}_{k}(x)$ at time $k$, $$\mathbf{R}_{k}(x)=\{y\in\mathbb{K}^{d}\left\vert \ \exists u\in\mathcal{U}\ \mbox{with}\ \varphi(k,x,u)=y\right. \}$$ is compact and convex. Convexity follows from the convexity of $U$. Since $U\subset\mathbb{R}^{m}$ is compact, there is $M>0$ such that $\Vert u\Vert\leq M$, for all $u\in U$. Then, if $y=\varphi(k,x,u)\in\mathbf{R}_{k}(x)$, $u=(u_{i})\in U^{k}$, we get $$\Vert y\Vert\leq\Vert A^{k}x\Vert+\sum_{i=0}^{k-1}\Vert A^{k-1-i}Bu_{i}\Vert\leq\Vert A\Vert^{k}\Vert x\Vert+M\sum_{i=0}^{k-1}\Vert A\Vert ^{k-1-i}\Vert B\Vert<\infty,$$ hence $\mathbf{R}_{k}(x)$ is bounded. In order to show that $\mathbf{R}_{k}(x)$ is closed, consider a sequence $y_{n}=\varphi(k,x,u^{n})$ in $\mathbf{R}_{k}(\bar{x})$ such that $y_{n}\rightarrow y\in\mathbb{R}^{d}$ and $u^{n}\in U^{k}$. By compactness of $U$, we have that $U^{k}$ is compact, hence there is a subsequence converging to some $u\in U^{k}$. Therefore $y=\varphi(k,x,u)\in\mathbf{R}_{k}(x)$ by continuity. \[proposition\_two\]For all $k,l\in\mathbb{N}$ we have $$\mathbf{R}_{k}(0)+A^{k}\mathbf{R}_{l}(0)=\mathbf{R}_{l+k}(0)\text{ and }\mathrm{int}\mathbf{R}_{k}\mathbf{(0)+}A^{k}\mathbf{R}_{l}(0)\subset \mathrm{int}\mathbf{R}_{k+l}(0).$$ Let $x_{1}\in\mathbf{R}_{k}(0)$ and $x_{2}\in\mathbf{R}_{l}(0)$. 
Then there are $u,v\in\mathcal{U}$ such that $$x_{1}=\sum_{i=0}^{k-1}A^{k-1-i}Bu_{i}\ \mbox{and}\ x_{2}=\sum_{i=0}^{l-1}A^{l-1-i}Bv_{i}.$$ Define $$w_{i}=\left\{ \begin{array} [c]{rcl}v_{i}, & \mbox{if} & 0\leq i\leq l-1\\ u_{i-l}, & \mbox{if} & l\leq i\leq k+l-1 \end{array} \right. .$$ Then $$\begin{aligned} \varphi(k+l,0,w) & =\sum_{i=0}^{k+l-1}A^{k+l-1-i}Bw_{i}=\sum_{i=0}^{l-1}A^{k+l-1-i}Bw_{i}+\sum_{i=l}^{k+l-1}A^{k+l-1-i}Bw_{i}\\ & =A^{k}\sum_{i=0}^{l-1}A^{l-1-i}Bv_{i}+\sum_{i=0}^{k-1}A^{k-1-i}Bu_{i}=A^{k}x_{2}+x_{1}.\end{aligned}$$ Hence $x_{1}+A^{k}x_{2}=\varphi(k+l,0,w)\in\mathbf{R}_{l+k}(0)$. The converse inclusion follows by reversing these steps. The second assertion follows since the set on left hand side is open. Define the time reversed counterpart of system (\[linsys\]) by $$x_{k+1}=A^{-1}x_{k}-A^{-1}Bu_{k},\ \ u_{k}\in U\subset\mathbb{R}^{m}. \label{linsys_rev}$$ The reachable and controllable sets from the origin at time $k$ for this system are denoted by $\mathbf{R}_{k}^{-}(0)$ and $\mathbf{C}_{k}^{-}(0)$, respectively. \[prop1.1.8\]The reachable and controllable sets for system (\[linsys\]) and the time reversed system (\[linsys\_rev\]) satisfy for all $k\in \mathbb{N}$$$\mathbf{R}_{k}(0)=\mathbf{C}_{k}^{-}(0)\text{ and }\mathbf{C}_{k}(0)=\mathbf{R}_{k}^{-}(0)\text{.}$$ Note that $x\in\mathbf{C}_{k}(0)$ if and only if there is $u\in\mathcal{U}$ with $$\ A^{k}x+\sum_{i=0}^{k-1}A^{k-1-i}Bu_{i}=0\text{, i.e., }\ x=-\sum_{i=0}^{k-1}A^{-1-i}Bu_{i}.$$ For any $u\in U^{k}$, we define $v_{j}=u_{k-1-j}$, $0\leq j\leq k-1$. Then $$\begin{aligned} x & =-\sum_{i=0}^{k-1}A^{-1-i}Bu_{i}=-\sum_{j=0}^{k-1}A^{-1-(k-1-j)}Bu_{k-1-j}=-\sum_{j=0}^{k-1}(A^{-1})^{k-j}Bv_{j}\\ & =-\sum_{j=0}^{k-1}(A^{-1})^{k-1-j}A^{-1}Bv_{j}=\sum_{j=0}^{k-1}(A^{-1})^{k-1-j}(-A^{-1}B)v_{j}.\end{aligned}$$ Hence we conclude that $x\in\mathbf{C}_{k}(0)$ if and only if there exists a control $v\in U^{k}$ such that $x=\varphi^{-}(k,0,v)$, where $\varphi^{-}$ is the solution of (\[linsys\_rev\]). This proves that $\mathbf{C}_{k}(0)=\mathbf{R}_{k}^{-}(0)$. The other equality follows analogously. \[lemma\_long\]If $(A,B)$ is controllable, there is $\delta>0$ such that the ball $B_{\delta}(0)$ satisfies $B_{\delta}(0)\subset\mathrm{int}\mathbf{R}_{d-1}(0)$. Furthermore, $\mathbf{R}_{n}(0)\subset\mathbf{R}_{m}(0)$ for $m\geq n$. Since the control range is a neighborhood of $0$, controllability implies that there is $\delta>0$ with $B_{\delta}(0)\subset\mathrm{int}\mathbf{R}_{d-1}(0)$. The second assertion follows since $0$ is an equilibrium for $u=0$. \[prop1.1.11\]If $(A,B)$ is controllable, the reachable set of system (\[linsys\]) satisfies $\overline{\mathbf{R}(0)}=\overline{\mathrm{int}\mathbf{R}(0)}$. The inclusion $\overline{\mathrm{int}\mathbf{R}(0)}\subset\overline {\mathbf{R}(0)}$ holds trivially. For the converse we first show that $\mathbf{R}(y)\subset\mathrm{int}\mathbf{R}(0)$ for $y\in\mathrm{int}\mathbf{R}(0)$. In fact, let there exists a neighborhood $V_{y}$ of $y\ $such that $V_{y}\subset\mathbf{R}(0)$. Given $z\in\mathbf{R}(y)$, there are $k\in\mathbb{N}$ and $u\in\mathcal{U}$ such that $z=\varphi(k,y,u)$. Since $A\in Gl(d,\mathbb{R})$, the map $\varphi_{k,u}$ is a diffeomorphism and we have that $\varphi_{k,u}(V_{y})$ is a neighborhood of $z$ and clearly $\varphi_{k,u}(\mathbf{R}(0))\subset\mathbf{R}(0)$. So $z\in\varphi _{k,u}(V_{y})\subset\mathbf{R}(0)$, which shows that $z\in\mathrm{int}\mathbf{R}(0)$. Now, let $x\in\overline{\mathbf{R}(0)}$ and $V$ a neighborhood of $x$. 
There is $y\in\mathbf{R}(0)$ such that $y\in V$, so there are $k\in\mathbb{N}$ and $u\in\mathcal{U}$ such that $y=\varphi(k,0,u)$. Since $0\in\mathrm{int}\mathbf{R}(0)$ there exists a neighborhood $W$ of $0$ such that $W\subset \mathrm{int}\mathbf{R}(0)$ and $\varphi_{k,u}(W)\subset V$ by continuity of $\varphi_{k,u}$. For $z\in W$ the arguments above show that $\mathbf{R}(z)\subset\mathrm{int}\mathbf{R}(0)$ and it follows that $$\varphi(k,z,u)\in V\cap\mathbf{R}(z)\subset V\cap\mathrm{int}\mathbf{R}(0)$$ and hence $x\in\overline{\mathrm{int}\mathbf{R}(0)}$. We will need the following lemmas. \[lemma2\]For every $\lambda\in\mathbb{C}$ there are $n_{k}\rightarrow \infty$ such that $\frac{\lambda^{n_{k}}}{\left\vert \lambda\right\vert ^{n_{k}}}\rightarrow1$, and, in particular, $$\frac{\operatorname{Im}(\lambda^{n_{k}})}{\operatorname{Re}(\lambda^{n_{k}})}\rightarrow0\text{ for }k\rightarrow\infty.$$ There is $\theta\in\lbrack0,2\pi)$ with $\lambda=\left\vert \lambda\right\vert (\cos\theta+\imath\sin\theta)$, hence$$\lambda^{n}=\left\vert \lambda\right\vert ^{n}(\cos(n\theta)+\imath \sin(n\theta)).$$ If $\theta\in2\pi\mathbb{Q}$, there are $n,N\in\mathbb{N}$ with $n\theta =N2\pi$, hence $\lambda^{n}=\left\vert \lambda\right\vert ^{n}\cos (N2\pi)=\left\vert \lambda\right\vert ^{n}$. Else, there are $n_{k}\rightarrow\infty$ such that modulo $2\pi$ one has $n_{k}\theta\rightarrow0$. This implies $\cos(n_{k}\theta)\rightarrow1$ and $\sin(n_{k}\theta )\rightarrow0$, hence$$\frac{\lambda^{n_{k}}}{\left\vert \lambda\right\vert ^{n_{k}}}=\cos (n_{k}\theta)+\imath\sin(n_{k}\theta)\rightarrow1.$$ This implies$$\frac{\operatorname{Im}(\lambda^{n_{k}})}{\operatorname{Re}(\lambda^{n_{k}})}=\frac{\operatorname{Im}\left( \frac{\lambda^{n_{k}}}{\left\vert \lambda\right\vert ^{n_{k}}}\right) }{\operatorname{Re}\left( \frac {\lambda^{n_{k}}}{\left\vert \lambda\right\vert ^{n_{k}}}\right) }=\frac {\sin(n_{k}\theta)}{\cos(n_{k}\theta)}\rightarrow0.$$ The next lemma states a property of convex sets. \[Lemma\_cone\]If $C$ is an open convex subset of $\mathbb{K}^{n}$ and $Y\subset C$ a subspace, then $C=C+Y$. The following theorem describes the general structure of reachable and controllable sets. It is analogous to a well known property of linear systems in continuous time, cf. Sontag [@Son98 Section 3.6] and Hinrichsen and Pritchard [@HiP20 Theorem 6.2.15]; the proof for discrete-time systems, however, is more involved. Recall that the state space $\mathbb{K}^{d}$ can be decomposed with respect to $A$ into the direct sum of the stable subspace $E^{s}$, the center space $E^{c}$ and the unstable subspace $E^{u}$ which are the direct sums of all generalized (real) eigenspaces for the eigenvalues $\lambda$ of $A$ with $\left\vert \lambda\right\vert <1$, $\left\vert \lambda\right\vert =1$ and $\left\vert \lambda\right\vert >1$, respectively. Furthermore, we let $E^{uc}:=E^{u}\oplus E^{c}$ and $E^{sc}:=E^{s}\oplus E^{c}$. \[prop1.1.12\] Consider the control system given by (\[linsys\]) and suppose that the system without control restriction is controllable. \(i) There exists a compact and convex set $K\subset E^{s}\subset\mathbb{K}^{d}$ with nonvoid interior with respect to $E^{s}$ such that $\overline {\mathbf{R}(0)}=K+E^{uc}$. Moreover $0\in K$ and $E^{uc}\subset\mathrm{int}\mathbf{R}(0)$. \(ii) There exists a compact and convex set $F\subset E^{u}\subset \mathbb{K}^{d}$ with nonvoid interior with respect to $E^{u}$ such that $\overline{\mathbf{C}(0)}=F+E^{sc}$. 
Moreover $0\in F$ and $E^{sc}\subset\mathrm{int}\mathbf{C}(0)$. We will first prove the result for $\mathbb{K}=\mathbb{C}$. \(i) In the first step, we will show that $E^{uc}\subset\mathrm{int}\mathbf{R}(0)$. As $\mathbf{R}(0)$ is convex, its interior is convex too. Therefore it suffices to prove that the generalized eigenspaces for eigenvalues with absolute value greater than or equal to $1$ are contained in $\mathrm{int}\mathbf{R}(0)$. Fix an eigenvalue $\lambda$ of $A$ with $\left\vert \lambda\right\vert \geq1$ and let $E_{q}(\lambda)=\mathrm{ker}(A-\lambda I)^{q}$, $q\in\mathbb{N}_{0}$. It suffices to show that $E_{q}(\lambda)\subset\mathrm{int}\mathbf{R}(0)$ for all $q$. We prove the statement by induction on $q$, the case $q=0$ being trivial since $E_{q}(\lambda)=\{0\}\subset\mathrm{int}\mathbf{R}(0)$. So assume that $E_{q-1}(\lambda))\subset\mathrm{int}\mathbf{R}(0)$ and take any $w\in E_{q}(\lambda)$. We must show that $w\in\mathrm{int}\mathbf{R}(0)$. By Lemma \[lemma\_long\] there is $\delta>0$ such that $aw\in\mathrm{int}\mathbf{R}_{d-1}(0)$ for all $a\in\mathbb{C}$ with $\left\vert a\right\vert <\delta$. Note that for all $\left\vert a\right\vert <\delta$ and all $n\geq1$$$\begin{aligned} A^{n}aw & =(A-\lambda I+\lambda I)^{n}aw=\sum_{j=0}^{n}{\binom{n}{j}}(A-\lambda I)^{n-j}\lambda^{j}aw\\ & =\lambda^{n}aw+\sum_{j=0}^{n-1}{\binom{n}{j}}(A-\lambda I)^{n-j}\lambda ^{j}aw.\end{aligned}$$ Since $aw\in E_{q}(\lambda)$, it follows that $(A-\lambda I)^{i}aw\in E_{q-1}(\lambda)$ for all $i\geq1$, hence $z(n):=\sum_{j=0}^{n-1}{\binom{n}{j}}(A-\lambda I)^{n-j}\lambda^{j}aw\in E_{q-1}(\lambda),n\geq1$. Using $aw\in\mathrm{int}\mathbf{R}_{d-1}(0)$ Lemma \[lemma\_long\] and Lemma \[Lemma\_cone\] imply for$\ n\geq1$$$\lambda^{n}aw=A^{n}aw-z(n)\in A^{n}aw+E_{q-1}(\lambda)\subset\mathrm{int}\mathbf{R}_{n+d-1}(0)+E_{q-1}(\lambda)\subset\mathrm{int}\mathbf{R}(0). \label{3.3}$$ We write$$a=\alpha+\imath\beta\text{ and }\lambda^{n}=x_{n}+\imath y_{n}$$ with $\alpha,\beta\in\mathbb{R}$ and $x_{n},y_{n}\in\mathbb{R}^{d}$. **Claim:** There are a sequence $(n_{k})_{k\in\mathbb{N}}$ with $n_{k}\rightarrow\infty$ and $a_{n_{k}}\in\mathbb{C}$ with $\left\vert a_{n_{k}}\right\vert <\delta$ such that $\lambda^{n_{k}}a_{n_{k}}\in \mathbb{R}$. In fact, we have$$\lambda^{n}a=(x_{n}+\imath y_{n})(\alpha+\imath\beta)=x_{n}\alpha-y_{n}\beta+\imath(x_{n}\beta+y_{n}\alpha)\in\mathbb{R},$$ if and only if $x_{n}\beta+y_{n}\alpha=0$. Case (a): If $x_{n}=0$, one may choose $\alpha_{n}:=0$ and gets $\lambda ^{n}a_{n}=-y_{n}\beta_{n}\in\mathbb{R}$ for $\beta_{n}=\frac{\delta}{2}$ with $\left\vert a_{n}\right\vert =\left\vert \beta_{n}\right\vert =\frac{\delta }{2}$. 
Case (b): Otherwise $\lambda^{n}a\in\mathbb{R}$ if and only if$$\beta=-\alpha\frac{y_{n}}{x_{n}}=-\alpha\frac{\operatorname{Im}(\lambda^{n})}{\operatorname{Re}(\lambda^{n})}.$$ According to Lemma \[lemma2\] there are $n_{k}\in\mathbb{N}$, arbitrarily large, such that with $\alpha_{n_{k}}:=\frac{\delta}{2}$ and $\beta_{n_{k}}:=-\alpha_{n_{k}}\frac{y_{n_{k}}}{x_{n_{k}}}$ $$\left\vert \beta_{n_{k}}\right\vert =\frac{\delta}{2}\left\vert \frac {\operatorname{Im}(\lambda^{n_{k}})}{\operatorname{Re}(\lambda^{n_{k}})}\right\vert <\frac{\delta}{2}.$$ It follows for $a_{n_{k}}:=\alpha_{n_{k}}+\beta_{n_{k}}$ that$$\left\vert a_{n_{k}}\right\vert ^{2}=\alpha_{n_{k}}^{2}+\beta_{n_{k}}^{2}<\frac{1}{4}\delta^{2}+\frac{1}{4}\delta^{2}\text{, and hence }\left\vert a_{n_{k}}\right\vert <\delta.$$ We have shown that with this choice of $a_{n_{k}}$ we have $\lambda^{n_{k}}a_{n_{k}}\in\mathbb{R}$ and the **Claim** is proved. Furthermore in case (a), by $\left\vert \lambda\right\vert \geq1$,$$\left\vert \lambda^{n}a_{n}\right\vert =\left\vert \lambda\right\vert ^{n}\left\vert a_{n}\right\vert \geq\left\vert a_{n}\right\vert =\frac{\delta }{2},$$ and in case (b)$$\left\vert \lambda^{n_{k}}a_{n_{k}}\right\vert =\left\vert \lambda\right\vert ^{n_{k}}\left\vert a_{n_{k}}\right\vert \geq\left\vert a_{n_{k}}\right\vert \geq\left\vert \alpha_{n_{k}}\right\vert =\frac{\delta}{2}.$$ Now choose $\ell\in\mathbb{N}$ with $\ell\geq2/\delta$. Recall that all points $a_{n_{k}}w\in\mathrm{int}\mathbf{R}_{d-1}(0)$. We may assume that $n_{2}\geq n_{1}+d-1$, hence$$A^{n_{1}}a_{n_{1}}w\in\mathrm{int}\mathbf{R}_{n_{1}+d-1}(0)\subset \mathrm{int}\mathbf{R}_{n_{2}}(0).$$ We may also assume that $n_{3}-n_{2}\geq n_{2}+d-1$, hence$$A^{n_{2}}a_{n_{2}}w\in\mathrm{int}\mathbf{R}_{n_{2}+d-1}(0)\subset \mathrm{int}\mathbf{R}_{n_{3}-n_{2}}(0).$$ Thus Proposition \[proposition\_two\] implies$$A^{n_{1}}a_{n_{1}}w+A^{n_{2}}a_{n_{2}}w\in\mathrm{int}\mathbf{R}_{n_{2}}(0)+A^{n_{2}}\mathbf{R}_{n_{3}-n_{2}}(0)\subset\mathrm{int}\mathbf{R}_{n_{3}-n_{2}+n_{2}}(0)=\mathrm{int}\mathbf{R}_{n_{3}}(0).$$ Proceeding in this way, we finally arrive at$$\sum_{k=1}^{\ell}A^{n_{k}}a_{n_{k}}w\in\mathrm{int}\mathbf{R}_{n_{\ell}}(0).$$ Thus we find with (\[3.3\]),$$\sum_{k=1}^{\ell}\lambda^{n_{k}}a_{n_{k}}w=\sum_{k=1}^{\ell}\left[ A^{n_{k}}a_{n_{k}}w-z(n_{k})\right] \in\mathrm{int}\mathbf{R}_{n_{\ell}}(0)+E_{q-1}(\lambda)\subset\mathrm{int}\mathbf{R}(0).$$ If $\lambda^{n_{k}}a_{n_{k}}>0$ for all $k\in\{1,\dotsc,\ell\}$, then (the real number) $$\sum_{k=1}^{\ell}\lambda^{n_{k}}a_{n_{k}}>\ell\cdot\delta/2\geq1.$$ For the $k$ with $\lambda^{n_{k}}a_{n_{k}}<0$, replace $a_{n_{k}}$ by $-a_{n_{k}}$, to get the same conclusion. This shows that $w$ is a convex combination of the points $0$ and $\sum_{k=1}^{\ell}\lambda^{n_{k}}a_{n_{k}}w$ in $\mathrm{int}\mathbf{R}(0)$, thus convexity of this set implies $w\in\mathrm{int}\mathbf{R}(0)$ completing the induction step $E_{q}(\lambda)\subset\mathrm{int}\mathbf{R}(0)$. Hence we have shown that $E^{uc}\subset\mathrm{int}\mathbf{R}(0)$. It remains to construct a set $K$ as in the assertion. Define $K_{0}:=\mathrm{int}\mathbf{R}(0)\cap E^{s}$. 
Then it follows that $$K_{0}+E^{uc}=(\text{$\mathrm{int}$}\mathbf{R}(0)\cap E^{s})+E^{uc}\subset\text{$\mathrm{int}$}\mathbf{R}(0)+E^{uc}\subset\text{$\mathrm{int}$}\mathbf{R}(0).$$ For the converse inclusion, let $v\in\mathrm{int}\mathbf{R}(0)$, then $v=x+y$ where $x\in E^{s}$ and $y\in E^{uc}$, hence by Lemma \[Lemma\_cone\], $$x=v-y\in\text{$\mathrm{int}$}\mathbf{R}(0)+E^{uc}=\text{$\mathrm{int}$}\mathbf{R}(0),$$ which shows that $x\in K_{0}$ and therefore $v\in K_{0}+E^{uc}$. This shows that$$K_{0}+E^{uc}=\text{$\mathrm{int}$}\mathbf{R}(0). \label{eq1}$$ In order to show that $K_{0}$ is bounded, consider the projection $\pi:\mathbb{C}^{d}=E^{s}\oplus E^{uc}\rightarrow E^{s}$ along $E^{uc}$. Since $E^{s}$ and $E^{uc}$ are $A$-invariant, $\pi$ commutes with $A$ and we have $\pi A^{n}=A^{n}\pi$, for all $n\in\mathbb{N}_{0}$. For each $x\in K_{0}=\mathrm{int}\mathbf{R}(0)\cap E^{s}$, there are $k\in\mathbb{N}$ and $u=(u_{i})\in\mathcal{U}$ such that $$x=\sum_{i=0}^{k-1}A^{k-1-i}Bu_{i}.$$ Since the restriction $A|_{E^{s}}$ has spectral radius less than $1$, there exist constants $a\in(0,1)$ and $c\geq1$ such that $\Vert A^{n}x\Vert\leq ca^{n}\Vert x\Vert$ for all $n\in\mathbb{N}$ and $x\in E^{s}$. Since $U$ is compact, there is $M>0$ such that $\Vert\pi Bu\Vert\leq M$, for all $u\in U$, so $$x=\pi(x)=\pi\left( \sum_{i=0}^{k-1}A^{k-1-i}Bu_{i}\right) =\sum_{i=0}^{k-1}\pi A^{k-1-i}Bu_{i}=\sum_{i=0}^{k-1}A^{k-1-i}\pi Bu_{i},$$ hence, using that $\pi Bu_{i}\in E^{s}$, $$\Vert x\Vert\leq\sum_{i=0}^{k-1}\left\Vert A^{k-1-i}\pi Bu_{i}\right\Vert \leq\sum_{i=0}^{k-1}ca^{k-1-i}\left\Vert \pi Bu_{i}\right\Vert \leq cM\sum_{i=0}^{k-1}a^{k-1-i}=cM\dfrac{1-a^{k}}{1-a}$$ showing that $K_{0}$ is bounded. As a consequence, $K:=\overline{K_{0}}=\overline{\mathrm{int}\mathbf{R}(0)\cap E^{s}}$ is a compact convex set which has nonvoid interior relative to $E^{s}$. Moreover, $K+E^{uc}$ is closed, because $K$ is compact. Therefore it follows from Proposition \[prop1.1.11\] and (\[eq1\]) that $$\overline{\mathbf{R}(0)}=\overline{\text{$\mathrm{int}$}\mathbf{R}(0)}=\overline{K_{0}+E^{uc}}=K+E^{uc}.$$

\(ii) Consider the time reversed system (\[linsys\_rev\]). Note that $\mathbb{C}^{d}=E_{-}^{s}\oplus E_{-}^{c}\oplus E_{-}^{u}$, where $E_{-}^{s}$, $E_{-}^{c}$ and $E_{-}^{u}$ are the sums of the generalized eigenspaces for the eigenvalues $\mu$ of $A^{-1}$ with $\left\vert \mu\right\vert <1$, $\left\vert \mu\right\vert =1$ and $\left\vert \mu\right\vert >1$, respectively. Now $\lambda$ is an eigenvalue of $A$ (note that $\lambda\neq0$ since $A\in Gl(d,\mathbb{C})$), if and only if $\mu=\lambda^{-1}$ is an eigenvalue of $A^{-1}$. Hence we have $E_{-}^{s}=E^{u}$, $E_{-}^{c}=E^{c}$ and $E_{-}^{u}=E^{s}$. By (i) there exists a compact and convex set $F\subset E_{-}^{s}=E^{u}$ which has nonvoid interior with respect to $E_{-}^{s}$ such that $\overline{\mathbf{R}^{-}(0)}=F+E_{-}^{uc}$, $0\in F$ and $E_{-}^{uc}\subset\mathrm{int}\mathbf{R}^{-}(0)$. By Proposition \[prop1.1.8\], $$E^{sc}=E_{-}^{uc}\subset\text{$\mathrm{int}$}\mathbf{R}^{-}(0)=\text{$\mathrm{int}$}\mathbf{C}(0)$$ and $$\overline{\mathbf{C}(0)}=F+E_{-}^{uc}=F+E^{sc}.$$ This completes the proof of the theorem for the case $\mathbb{K}=\mathbb{C}$.

It remains to prove the theorem for the case $\mathbb{K}=\mathbb{R}$. Note that if $A\in Gl(d,\mathbb{R})$, then $u-\imath v\in E^{s}$, $u,v\in\mathbb{R}^{d}$, implies $u+\imath v,v+\imath u\in E^{s}$, and a similar implication holds for $E^{uc}$.
Hence$$\begin{aligned} \operatorname{Re}E^{s} & =E^{s}\cap\mathbb{R}^{d},\operatorname{Re}E^{uc}=E^{uc}\cap\mathbb{R}^{d},\label{HP16}\\ E^{s} & =\operatorname{Re}E^{s}\oplus\imath\operatorname{Re}E^{s},E^{uc}=\operatorname{Re}E^{uc}\oplus\imath\operatorname{Re}E^{uc}.\nonumber\end{aligned}$$ Let $U_{\mathbb{C}}:=U+\imath U$ and apply the result above for $\mathbb{K}=\mathbb{C}$. Clearly $(A,B)$ is controllable, when considered as a system with state space $\mathbb{C}^{d}$, and $U_{\mathbb{C}}$ is a convex compact neighborhood of $0\in\mathbb{C}^{m}$ with $U_{\mathbb{C}}=\overline{\mathrm{int}U_{\mathbb{C}}}$. Denote the reachable sets from $0$ of the real and the complex system by $\mathbf{R}_{\mathbb{R}}$ and $\mathbf{R}_{\mathbb{C}}$, respectively. It follows from the complex version of the theorem that the compact convex set $K_{\mathbb{C}}:=\overline{\mathrm{int}(\mathbf{R}_{\mathbb{C}})\cap E^{s}}$ has non-empty interior relative to $E^{s}$ and satisfies $\overline{\mathbf{R}_{\mathbb{C}}}=K_{\mathbb{C}}+E^{uc}$. Since every $u\in\mathcal{U}_{\mathbb{C}}$ is of the form $u=v+\imath w$, where $v,w\in\mathcal{U}$, and $\varphi(k,0,u)=\varphi(k,0,v)+\imath\varphi(k,0,w),k\in\mathbb{N}$, we have$$\mathcal{U}_{\mathbb{C}}=\mathcal{U}_{\mathbb{R}}+\imath\mathcal{U}_{\mathbb{R}}\text{ and }\mathbf{R}_{\mathbb{C}}=\mathbf{R}_{\mathbb{R}}+\imath\mathbf{R}_{\mathbb{R}}. \label{HP20}$$ It follows that$$\mathbf{R}_{\mathbb{R}}=\operatorname{Re}\mathbf{R}_{\mathbb{C}},\mathrm{int}\mathbf{R}_{\mathbb{R}}=\operatorname{Re}\mathrm{int}\mathbf{R}_{\mathbb{C}},$$ where the interior of $\mathbf{R}_{\mathbb{R}}$ is relative to $\mathbb{R}^{d}$ and the interior of $\mathbf{R}_{\mathbb{C}}$ is relative to $\mathbb{C}^{d}$. Now, if $W,Z\subset\mathbb{C}^{d}$ are subsets of the form$$W=W_{1}+\imath W_{2},Z=Z_{1}+\imath Z_{2},$$ where $W_{1},W_{2},Z_{1},Z_{2}\subset\mathbb{R}^{d}$ and $W\cap Z\not =\varnothing$, then $W\cap Z=\left( W_{1}\cap Z_{1}\right) +\imath\left( W_{2}\cap Z_{2}\right) $ and so $\operatorname{Re}(W\cap Z)=\operatorname{Re}W\cap\operatorname{Re}Z$. Applying this equality to $W=\mathrm{int}\mathbf{R}_{\mathbb{C}}$ and $Z=E^{s}$ we obtain from (\[HP20\]) and (\[HP16\]) that$$K=\overline{\operatorname{Re}(\mathrm{int}\mathbf{R}_{\mathbb{C}})\cap\operatorname{Re}E^{s}}=\overline{\operatorname{Re}(\mathrm{int}\mathbf{R}_{\mathbb{C}}\cap E^{s})}=\operatorname{Re}K_{\mathbb{C}}.$$ Hence $K$ is a compact convex subset of $\mathbb{R}^{d}$, which has a non-empty interior relative to $\operatorname{Re}E^{s}$. Using (\[HP20\]) for the second equality we get$$\overline{\mathbf{R}_{\mathbb{R}}}=\overline{\operatorname{Re}\mathbf{R}_{\mathbb{C}}}=\operatorname{Re}\overline{\mathbf{R}_{\mathbb{C}}}=\operatorname{Re}(K_{\mathbb{C}}+E^{uc})=K+\operatorname{Re}E^{uc}.$$ This concludes the proof.

Next we present a necessary and sufficient condition for controllability in $\mathbb{R}^{d}$. This consequence of Theorem \[prop1.1.12\] illustrates that controllability only holds under very strong assumptions on the spectrum of the matrix $A$. In the next section, we will instead consider subsets of the state space where complete controllability holds, i.e., control sets. Recall that the system without control restriction is controllable in $\mathbb{R}^{d}$ if and only if $(A,B)$ is controllable.

\[corollary16\]Consider the discrete-time linear system given in (\[linsys\]).

\(i) The reachable set $\mathbf{R}(0)=\mathbb{K}^{d}$ if and only if $(A,B)$ is controllable and $A$ has no eigenvalues with absolute value less than $1$.
\(ii) The controllable set $\mathbf{C}(0)=\mathbb{K}^{d}$ if and only if $(A,B)$ is controllable and $A$ has no eigenvalues with absolute value greater than $1$.

\(iii) The system is controllable in $\mathbb{R}^{d}$ if and only if $(A,B)$ is controllable and all eigenvalues of $A$ have absolute value equal to $1$.

\(i) If $\mathbf{R}(0)=\mathbb{K}^{d}$, then the pair $(A,B)$ is controllable, since $\mathbf{R}(0)$ is contained in the image of Kalman’s matrix $[B\ \ AB\ \ \ldots\ \ A^{d-1}B]$. Moreover, if there is an eigenvalue $\lambda$ of $A$ with $|\lambda|<1$, then $E^{s}\neq\{0\}$ and $E^{uc}$ is a proper subspace of $\mathbb{K}^{d}$. By Theorem \[prop1.1.12\] (i), there is a compact set $K\subset E^{s}$ such that $K+E^{uc}=\overline{\mathbf{R}(0)}=\mathbb{K}^{d}$. Since $K$ is bounded and $E^{s}\neq\{0\}$, the set $K+E^{uc}$ is a proper subset of $\mathbb{K}^{d}$, a contradiction. Conversely, if $(A,B)$ is controllable and all eigenvalues $\lambda$ of $A$ satisfy $\left\vert \lambda\right\vert \geq1$, then by Theorem \[prop1.1.12\] (i) we have $\mathbb{K}^{d}=E^{uc}\subset\mathrm{int}\mathbf{R}(0)\subset\mathbf{R}(0)$.

\(ii) This follows analogously.

\(iii) This is a consequence of assertions (i) and (ii) observing that $\mathbf{R}(0)=\mathbf{C}(0)=\mathbb{K}^{d}$ holds if and only if for all $x,y\in\mathbb{K}^{d}$ there are a control $u\in\mathcal{U}$ and a time $k\in\mathbb{N}$ with $\varphi(k,x,u)=y$.

In the continuous-time case, a result analogous to Corollary \[corollary16\] is given e.g. in Sontag [@Son98 Section 3.6]. For the discrete-time case, we are not aware of a result in the literature covering Corollary \[corollary16\]. In the special case of two inputs (i.e., $m=2$) the characterization of null-controllability in Corollary \[corollary16\] (ii) is given in Wing and Desoer [@WinD63 Section V, Theorem 2].

Control sets for linear systems\[section4\]
===========================================

Next we analyze linear control systems in $\mathbb{R}^{d}$ of the form$$x_{k+1}=Ax_{k}+Bu_{k},u_{k}\in U\subset\mathbb{R}^{m} \label{lin}$$ with $A\in Gl(d,\mathbb{R})$ and $B\in\mathbb{R}^{d\times m}$ and suppose that $U$ is a convex compact neighborhood of $0\in\mathbb{R}^{m}$ with $U=\overline{\mathrm{int}U}$. Recall that the system without control restrictions is controllable in $\mathbb{R}^{d}$ if and only if $\mathrm{rank}[B~AB\dotsc A^{d-1}B]=d$, i.e., the pair $(A,B)$ is controllable.

\[theorem\_existence\]There exists a unique control set $D$ with nonvoid interior of system (\[lin\]) if and only if the system without control restriction is controllable in $\mathbb{R}^{d}$. In this case $0\in D_{0}\cap\mathrm{int}D$.

The controllability condition for $(A,B)$ is necessary for the existence of $D$, since it guarantees that the accessibility condition (\[access0\]) holds for all $x\in\mathbb{R}^{d}$ and that, for the system without control constraints, the reachable and the null-controllable subspaces coincide with $\mathbb{R}^{d}$. Since $0\in\mathrm{int}U$, one verifies that for $k\geq d-1$ $$0\in\mathrm{int}(\mathbf{C}_{k}(0))\cap\mathrm{int}(\mathbf{R}_{k}(0))=:D^{\prime}.$$ Then every point $x\in D^{\prime}$ can be steered to any other point $z\in D^{\prime}$ (first steer $x$ to the origin in time $k$ and then the origin to $z$ in time $k$) and $0\in\mathrm{int}(\mathbf{C}(0))$. Hence $D^{\prime}$ is contained in a control set $D$. Thus we have established the existence of a control set $D$ with nonvoid interior, and $0\in D_{0}\cap\mathrm{int}D$.

It remains to show uniqueness. Let $\tilde{D}\subset\mathbb{R}^{d}$ be an arbitrary control set with nonvoid interior.
By Proposition \[proposition\_transitivity\] its transitivity set $\tilde{D}_{0}$ is nonvoid and hence by Proposition \[lem1.1.2\] there is $x_{0}\in\tilde{D}$ with$$\tilde{D}=\overline{\mathbf{R}(x_{0})}\cap\mathbf{C}(x_{0}).$$ By linearity, $\varphi(k,x_{1},u)=x_{2}$ for $k\in\mathbb{N}$ and $x_{1},x_{2}\in\mathbb{R}^{d}$ implies $\varphi(k,\alpha x_{1},\alpha u)=\alpha x_{2}$ for any $\alpha\in(0,1]$. Here the control $\alpha u$ has values in $U$, since $U$ is convex and $0\in U$. This implies that $\alpha\tilde{D}$ is contained in some control set $D^{\alpha}$ and $\mathrm{int}(\alpha\tilde{D})$ is contained in the interior of $D^{\alpha}$. Now choose any $x\in\mathrm{int}\tilde{D}$ and suppose, by way of contradiction, that $$\alpha_{0}:=\inf\{\alpha\in(0,1]\left\vert \forall\beta\in\lbrack \alpha,1]:\beta x\in\tilde{D}\right. \}>0.$$ Then $\alpha_{0}x\in\partial\tilde{D}$ and $\alpha_{0}x\in\mathrm{int}D^{\alpha_{0}}$. Therefore $\tilde{D}\cap\mathrm{int}D^{\alpha_{0}}\not =\varnothing$, and it follows that $\tilde{D}=D^{\alpha_{0}}$ and $\alpha_{0}x\in\mathrm{int}\tilde{D}$. This is a contradiction and so $\alpha_{0}=0$. Choosing $\alpha>0$ small enough such that $\alpha x\in D$, we obtain $\alpha x\in\tilde{D}\cap D\not =\varnothing$. Now it follows that $\tilde{D}=D$.

We know that in the hyperbolic case$$D=K_{0}+F^{\prime} \label{D_sum}$$ with $K_{0}\subset E^{s},F^{\prime}\subset F\subset E^{u}$, where $K_{0}$ and $F$ are compact sets with $0\in K_{0}\cap F$. In particular, it follows that $K_{0},F\subset D$. The following theorem gives a spectral characterization of boundedness of the control set. Recall that $A$ is called hyperbolic if all eigenvalues $\lambda$ of $A$ satisfy $\left\vert \lambda\right\vert \neq1$.

\[theorem\_bounded\]Assume that $(A,B)$ is controllable. Then the control set $D$ with nonvoid interior of system (\[lin\]) is bounded if and only if $A$ is hyperbolic.

By Theorem \[prop1.1.12\] there are compact sets $K\subset E^{s}$, $F\subset E^{u}$ such that $$\overline{\mathbf{R}(0)}=K+E^{c}+E^{u}\ \mbox{ and }\ \overline{\mathbf{C}(0)}=F+E^{c}+E^{s}.$$ By Proposition \[lem1.1.2\], $D=\overline{\mathbf{R}(0)}\cap\mathbf{C}(0)$, because $0\in D_{0}\subset\mathrm{int}D$, and hence every element $x\in D$ can be represented in the following two ways: $$x=k+x_{1}+x_{+}=f+x_{1}+x_{-},$$ where $k\in K\subset E^{s}$, $f\in F\subset E^{u}$, $x_{1}\in E^{c}$, $x_{-}\in E^{s}$ and $x_{+}\in E^{u}$. Since $\mathbb{R}^{d}=E^{s}\oplus E^{c}\oplus E^{u}$ we get $k=x_{-}$, $f=x_{+}$. As $E^{c}=E^{sc}\cap E^{uc}\subset\mathbf{R}(0)\cap\mathbf{C}(0)\subset D$, we conclude that $E^{c}\subset D\subset K+E^{c}+F$, and so the control set $D$ is bounded if and only if $E^{c}=\{0\}$.

Next we present a simple example illustrating control sets. Consider for $d=2$ and $m=1$$$\left[ \begin{array} [c]{c}x_{k+1}\\ y_{k+1}\end{array} \right] =\left[ \begin{array} [c]{cc}2 & 0\\ 0 & \frac{1}{2}\end{array} \right] \left[ \begin{array} [c]{c}x_{k}\\ y_{k}\end{array} \right] +\left[ \begin{array} [c]{c}1\\ 1 \end{array} \right] u_{k},~u_{k}\in U=[-1,1].$$ We claim that for this hyperbolic matrix $A$ the unique control set with nonvoid interior is $D=(-1,1)\times\lbrack-2,2]$. The stable subspace associated with the eigenvalue $\frac{1}{2}$ of $A$ is the $y$-axis; the unstable subspace associated with the eigenvalue $2$ is the $x$-axis. For a constant control $u\in\lbrack-1,1]$, one computes the equilibrium as $(x(u),y(u))^{\top}=(-u,2u)^{\top}$. In particular,
for $u=1$ and $u=-1$ one obtains the equilibria$$\left[ \begin{array} [c]{c}x(1)\\ y(1) \end{array} \right] =\left[ \begin{array} [c]{c}-1\\ 2 \end{array} \right] \text{ and }\left[ \begin{array} [c]{c}x(-1)\\ y(-1) \end{array} \right] =\left[ \begin{array} [c]{c}1\\ -2 \end{array} \right] ,$$ resp. It is clear that for all $u\in(-1,1)$ the equilibrium $(-u,2u)^{\top}$ is in the interior of the control set $D$. Furthermore, observe that for $x_{0}>1$ one has in the next step $2x_{0}+u>x_{0}$ and for $x_{0}<-1$ one has $2x_{0}+u<x_{0}$. If $y_{0}>2$, then $\frac{1}{2}y_{0}+u<\frac{1}{2}y_{0}+1\leq y_{0}$ and if $y_{0}<-2$, then $\frac{1}{2}y_{0}+u\geq\frac{1}{2}y_{0}-1>y_{0}$. Hence solutions starting left of the vertical line $x=-1$ and right of $x=1$ have to go to the left and to the right, respectively. Solutions which start above the horizontal line$\ y=2$ and below $y=-2$, have to go down and up, respectively. This shows that the control set must be contained in $(-1,1)\times\lbrack-2,2]$. The controllability property within $D$ can be seen by the following analysis. If we start in an equilibrium $(x(\alpha),y(\alpha))^{\top}=(-\alpha,2\alpha)^{\top},\alpha\in\left( -1,1\right) $, we get e.g.$$\left[ \begin{array} [c]{c}x_{1}\\ y_{1}\end{array} \right] =\left[ \begin{array} [c]{c}-2\alpha\\ \alpha \end{array} \right] +\left[ \begin{array} [c]{c}1\\ 1 \end{array} \right] u_{0},~\left[ \begin{array} [c]{c}x_{2}\\ y_{2}\end{array} \right] =\left[ \begin{array} [c]{c}-4\alpha\\ \frac{1}{2}\alpha \end{array} \right] +\left[ \begin{array} [c]{c}2\\ \frac{1}{2}\end{array} \right] u_{0}+\left[ \begin{array} [c]{c}1\\ 1 \end{array} \right] u_{1}.$$ For the reachable set, we see that after one step the line segment $S=\{(u,u)^{\top},\allowbreak u\in\lbrack-1,1]\}$ is shifted to $(-2\alpha ,\alpha)^{\top}$. After two time steps the line segment $S$ is shifted to $(-4\alpha,\frac{1}{2}a)^{\top}$ and at every point the line segment $\{(2u,\frac{1}{2}u)^{\top}\left\vert u\in\lbrack-1,1]\right. \}$ is added. One can show that the equilibrium $(0,0)^{\top}$ can be reached. If we start in $(0,0)^{\top}$, we compute $$\begin{aligned} \left[ \begin{array} [c]{c}x_{1}\\ y_{1}\end{array} \right] & =\left[ \begin{array} [c]{c}1\\ 1 \end{array} \right] u_{0},\left[ \begin{array} [c]{c}x_{2}\\ y_{2}\end{array} \right] =\left[ \begin{array} [c]{c}2\\ \frac{1}{2}\end{array} \right] u_{0}+\left[ \begin{array} [c]{c}1\\ 1 \end{array} \right] u_{1},\\ \left[ \begin{array} [c]{c}x_{3}\\ y_{3}\end{array} \right] & =\left[ \begin{array} [c]{c}4\\ \frac{1}{4}\end{array} \right] u_{0}+\left[ \begin{array} [c]{c}2\\ \frac{1}{2}\end{array} \right] u_{1}+\left[ \begin{array} [c]{c}1\\ 1 \end{array} \right] u_{2}.\end{aligned}$$ Proceeding in this way one finds that one can get approximately to all points in $D$ and, in particular, to the equilibria $(-1,2)^{\top}$ and $(1,-2)^{\top}$. Connecting appropriately the controls, one finally shows that $D=(-1,1)\times\lbrack-2,2]$ is a control set. Invariance pressure\[section5\] =============================== In this section we recall the concept of invariance pressure considered in [@Cocosa1], [@Cocosa2], [@ZHuag19] where potentials are defined on the control range. Furthermore, we introduce the generalized version of total invariance pressure, where the potentials are defined on the product of the state space and the control range. Again we consider the general system (\[nonlinear\]). 
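Before turning to the definitions, a brief numerical aside on the example of the previous section. The following sketch (an illustration only, assuming NumPy is available; it plays no role in the formal development) recomputes the equilibria $(-u,2u)^{\top}$ for constant controls, checks that states with first component outside $[-1,1]$ drift away under every admissible control, and checks that the strip $|y|\leq2$ is forward invariant, so that states with $|y|>2$ cannot be revisited.

```python
import numpy as np

# x_{k+1} = A x_k + B u_k with control range U = [-1, 1] (example of the previous section)
A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([1.0, 1.0])

# Equilibria for constant controls: solve (I - A) x = B u, giving x(u) = (-u, 2u)
for u in (-1.0, 0.0, 1.0):
    print(u, np.linalg.solve(np.eye(2) - A, B * u))

# First component: for x > 1 even the most favorable control u = -1 moves it further right
xs = np.linspace(1.01, 10.0, 200)
print(np.all(2.0 * xs - 1.0 > xs))

# Second component: the strip |y| <= 2 is forward invariant (worst case u = +/-1)
ys = np.linspace(-2.0, 2.0, 200)
print(np.all(np.abs(0.5 * ys) + 1.0 <= 2.0))
```

Together these checks reproduce the inclusions used above to show that the control set is contained in $(-1,1)\times[-2,2]$.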
A pair $(K,Q)$ of nonvoid subsets of $M$ is called admissible if $K\subset Q$ is compact and for each $x\in K$ there exists $u\in\mathcal{U}$ such that $\varphi(\mathbb{N},x,u)\subset Q$. For an admissible pair $(K,Q)$ and $\tau>0$, a $(\tau,K,Q)$-spanning set $\mathcal{S}$ of controls is a subset of $\mathcal{U}$ such that for all $x\in K$ there is $u\in\mathcal{S}$ with $\varphi(k,x,u)\in Q$ for all $k\in\left\{ 1,\dotsc,\tau\right\} $. Denote by $C(U,\mathbb{R})$ the set of continuous function $f:U\rightarrow\mathbb{R}$ which we call potentials. For a potential $f\in C(U,\mathbb{R})$ denote $(S_{\tau}f)(u):=\sum _{i=0}^{\tau-1}f(u_{i}),u\in\mathcal{U}$, and $$a_{\tau}(f,K,Q)=\inf\left\{ \sum_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}\left\vert \mathcal{S}\text{ }(\tau,K,Q)\text{-spanning}\right. \right\} .$$ The invariance pressure $P_{inv}(f,K,Q)$ of control system (\[nonlinear\]) is defined by$$P_{inv}(f,K,Q)=\overline{\underset{\tau\rightarrow\infty}{\lim}}\frac{1}{\tau }\log a_{\tau}(f,K,Q).$$ For the potential $f=\mathbf{0}$, this reduces to the notion of invariance entropy, $P_{inv}(\mathbf{0},K,Q)=h_{inv}(K,Q)$. In order to define the total invariance pressure associate to every control $u$ in a $(\tau,K,Q)$-spanning set $\mathcal{S}$ of controls an initial value $x_{u}\in K$ with $\varphi(k,x_{u},u)\in Q$ for all $k\in\left\{ 1,\dotsc,\tau\right\} $. Then a set of state-control pairs of the form$$\mathcal{S}_{tot}=\{(x_{u},u)\in K\times\mathcal{S}\left\vert \varphi (k,x_{u},u)\in Q\text{ for all }k\in\left\{ 1,\dotsc,\tau\right\} \right. \}$$ is called totally $(\tau,K,Q)$-spanning. Denote by $C(Q\times U,\mathbb{R})$ the set of continuous function $f:Q\times U\rightarrow\mathbb{R}$ which we again call potentials. For a potential $f\in C(Q\times U,\mathbb{R})$ and $(x,u)\in M\times\mathcal{U}$ denote $(S_{\tau}f)(x,u):=\sum_{i=0}^{\tau -1}f(\varphi(i,x,u),u_{i})$ and $$a_{\tau}(f,K,Q):=\inf\left\{ \sum_{(x,u)\in\mathcal{S}}e^{(S_{\tau}f)(x,u)}\left\vert \mathcal{S}_{tot}\text{ totally }(\tau,K,Q)\text{-spanning}\right. \right\} .$$ The total invariance pressure $P_{tot}(f,K,Q;\Sigma)$ of control system (\[nonlinear\]) is defined by$$P_{tot}(f,K,Q)=\underset{\tau\rightarrow\infty}{\overline{\lim}}\frac{1}{\tau }\log a_{\tau}(f,K,Q). \label{tip}$$ Note that by continuity and monotonicity of the logarithm,$$P_{tot}(f,K,Q)=\underset{\tau\rightarrow\infty}{\overline{\lim}}\inf\left\{ \frac{1}{\tau}\log\sum_{(x,u)\in\mathcal{S}}e^{(S_{\tau}f)(x,u)}\left\vert \mathcal{S}\text{ totally }(\tau,K,Q)\text{-spanning}\right. \right\} . \label{tip_alt}$$ Furthermore $-\infty<P_{tot}(f,K,Q)\leq\infty$ for every admissible pair $(K,Q)$ and all potentials $f$ if every countable totally spanning set contains a finite totally spanning subset. If $f(x,u)$ is independent of $x$, i.e., it is a continuous function on $U$, the total invariance pressure coincides with the invariance pressure. The definition of totally $(\tau,K,Q)$-spanning sets is inspired by the definition of spanning sets for $(K,Q)$ in Wang, Huang, and Sun [@WangHS19 p. 313], where a similar notion is introduced in the context of invariant partitions which provide an alternative definition of invariance entropy.. The next elementary proposition presents some properties of the function $P_{tot}(\cdot,K,Q):C(Q\times U,\mathbb{R})\rightarrow\mathbb{R}\cup \{\pm\infty\}$. 
\[propert\]The following assertions hold for an admissible pair $(K,Q)$, functions $f,g\in C(Q\times U,\mathbb{R})$ and $c\in\mathbb{R}$: \(i) For $f\leq g$ one has $P_{tot}(f,K,Q)\leq P_{tot}(g,K,Q)$. \(ii) $P_{tot}(f+c,K,Q)=P_{tot}(f,K,Q)+c$. This follows easily from the definition, cf. also [@Cocosa1 Proposition 13]. The following proposition shows that, in the definition of total invariance pressure, we can take the limit superior over times which are integer multiples of some fixed time step $\tau\in\mathbb{N}$. The proof is analogous to the proof given in [@Cocosa2 Theorem 20] for invariance pressure of continuous-time systems. \[discretization\]For all $f\in C(Q\times U,\mathbb{R})$ with $\inf_{(x,u)\in Q\times U}f(x,u)>-\infty$ the total invariance pressure satisfies for $\tau\in\mathbb{N}$$$P_{tot}(f,K,Q)=\underset{n\rightarrow\infty}{\overline{\lim}}\frac{1}{n\tau }\log a_{n\tau}(f,K,Q).$$ For every $f\in C(Q\times U,\mathbb{R})$, the inequality$$P_{tot}(f,K,Q)\geq\underset{n\rightarrow\infty}{\overline{\lim}}\frac{1}{n\tau}\log a_{n\tau}(f,K,Q) \label{4.1c}$$ is obvious. For the converse note that the function $g(x,u):=f(x,u)-\inf f$ is nonnegative (if $f\geq0$, we may consider $f$ instead of $g$). Let $\tau _{k}\in(0,\infty)$ with $\tau_{k}\rightarrow\infty$ for $k\rightarrow\infty$. Then for every $k\geq1$ there exists $n_{k}\in\mathbb{N}_{0}$ such that $n_{k}\tau\leq\tau_{k}<(n_{k}+1)\tau$ and $n_{k}\rightarrow\infty$ for $k\rightarrow\infty$. Since $g\geq0$ it follows that$$a_{\tau_{k}}(g,K,Q)\leq a_{(n_{k}+1)\tau}(g,K,Q)$$ and consequently $$\frac{1}{\tau_{k}}\log a_{\tau_{k}}(g,K,Q)\leq\frac{1}{n_{k}\tau}\log a_{(n_{k}+1)\tau}(g,K,Q).$$ This yields $$\underset{k\rightarrow\infty}{\overline{\lim}}\frac{1}{\tau_{k}}\log a_{\tau_{k}}(g,K,Q)\leq\underset{k\rightarrow\infty}{\overline{\lim}}\frac {1}{n_{k}\tau}\log a_{(n_{k}+1)\tau}(g,K,Q).$$ Since $\frac{1}{n_{k}\tau}=\frac{n_{k}+1}{n_{k}}\frac{1}{(n_{k}+1)\tau}$ and $\frac{n_{k}+1}{n_{k}}\rightarrow1$ for $k\rightarrow\infty$, we obtain $$\begin{aligned} \underset{k\rightarrow\infty}{\overline{\lim}}\frac{1}{\tau_{k}}\log a_{\tau_{k}}(g,K,Q) & \leq\underset{k\rightarrow\infty}{\overline{\lim}}\frac{1}{(n_{k}+1)\tau}\log a_{(n_{k}+1)\tau}(g,K,Q)\\ & \leq\underset{n\rightarrow\infty}{\overline{\lim}}\frac{1}{n\tau}\log a_{n\tau}(g,K,Q).\end{aligned}$$ Together with Proposition \[propert\] (ii) and (\[4.1c\]) applied to $f-\inf f$, this shows that$$\begin{aligned} P_{tot}(f,K,Q) & =P_{tot}(f-\inf f,K,Q)+\inf f\\ & =\underset{n\rightarrow\infty}{\overline{\lim}}\frac{1}{n\tau}\log a_{n\tau}(f-\inf f,K,Q)+\inf f\\ & =\underset{n\rightarrow\infty}{\overline{\lim}}\frac{1}{n\tau}\log a_{n\tau}(f,K,Q).\end{aligned}$$ The following result is given in [@Cocosa2 Corollary 15] for continuous-time systems. The discrete-time case is proved analogously.. \[compact\]Let $K_{1},K_{2}$ be two compact sets with nonvoid interior contained in a control set $D\subset M$ and assume that every point in $D$ is accessible. Then $(K_{1},D)$ and $(K_{2},D)$ are admissible pairs and for all $f\in C(U,\mathbb{R})$ we have $$P_{inv}(f,K_{1},D)=P_{inv}(f,K_{2},D).$$ Invariance pressure for linear systems\[section6\] ================================================== The main result of this section presents a formula for the invariance pressure of the unique control set with nonvoid interior for hyperbolic linear control systems of the form (\[lin\]). 
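To make the quantity $a_{\tau}(f,K,Q)$ from the previous section concrete before the formula is derived, the following brute-force sketch estimates the spanning count for the potential $f=\mathbf{0}$ (i.e., the invariance entropy count) for the scalar system $x_{k+1}=2x_{k}+u_{k}$ with $U=[-1,1]$, $Q=[-1,1]$ and $K=[-\tfrac12,\tfrac12]$. The control grid, the state grid and the greedy cover are ad-hoc choices made purely for illustration; they only yield an upper bound on the minimal number of $(\tau,K,Q)$-spanning controls with respect to the chosen grids.

```python
import itertools
import numpy as np

a, tau = 2.0, 5
U_grid = np.linspace(-1.0, 1.0, 5)        # ad-hoc discretization of U = [-1, 1]
K_grid = np.linspace(-0.5, 0.5, 201)      # initial states that must be kept in Q = [-1, 1]

def keeps_in_Q(u_seq):
    """Boolean mask of grid states x_0 with phi(k, x_0, u) in Q for k = 1, ..., tau."""
    x = K_grid.copy()
    ok = np.ones(K_grid.shape, dtype=bool)
    for u in u_seq:
        x = a * x + u
        ok &= np.abs(x) <= 1.0
    return ok

covers = [m for m in (keeps_in_Q(s) for s in itertools.product(U_grid, repeat=tau)) if m.any()]

# Greedy set cover: gives a valid spanning set, hence an upper bound on the minimal cardinality
uncovered, count = np.ones(K_grid.shape, dtype=bool), 0
while uncovered.any():
    best = max(covers, key=lambda m: int(np.count_nonzero(m & uncovered)))
    uncovered &= ~best
    count += 1

print(count, np.log(count) / tau, np.log(2.0))   # growth rate vs. log|det A^+| = log 2
```

For growing $\tau$ the exponential growth rate $\frac{1}{\tau}\log(\text{count})$ of this count approaches $\log2$, consistent with the formula $h_{inv}(K,D)=\log\left\vert \det A^{+}\right\vert$ obtained below for $f=\mathbf{0}$.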
We start with a proposition providing an upper bound for the total invariance pressure of the unique control set with nonvoid interior, cf. Theorems \[theorem\_existence\] and \[theorem\_bounded\]. The proof uses arguments from [@Cocosa3] which in turn are based on a construction by Kawan [@Kawa11b Theorem 4.3], [@Kawa13 Theorem 5.1] (for the discrete-time case cf. also [@Kawa13 Remark 5.4] and Nair, Evans, Mareels, Moran [@NEMM04 Theorem 3]). Let $A^{+}$ be the restriction of $A$ to the unstable subspace $E^{u}$. The unstable determinant of $A$ is$$\det A^{+}=\prod\limits_{\lambda\in\sigma(A)}\lambda^{n_{\lambda}}\text{ and }\log\left\vert \det A^{+}\right\vert =\sum_{\lambda\in\sigma(A)}n_{\lambda }\max\{0,\log\left\vert \lambda\right\vert \},$$ where $n_{\lambda}$ denotes the algebraic multiplicity of an eigenvalue $\lambda$ of $A$. \[prop\_upper\_tot\]Consider a linear control system of the form (\[lin\]) and assume that the pair $(A,B)$ is controllable with a hyperbolic matrix $A$. Let $D$ be the unique control set with nonvoid interior and let $f\in C(\overline{D}\times U,\mathbb{R})$. Then there exists a compact set $K\subset D$ with nonvoid interior such that the total invariance pressure satisfies$$P_{tot}(f,K,D)\leq\log\left\vert \det A^{+}\right\vert +\inf_{(\tau,x,u)}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(\varphi(i,x,u),u_{i}),$$ where the infimum is taken over all $\tau\in\mathbb{N}$ with $\tau\geq d$ and all $\tau$-periodic controls $u$ with a $\tau$-periodic trajectory $\varphi(\cdot,x,u)$ in $\mathrm{int}D$ such that $u_{i}\in\mathrm{int}U$ for $i\in\{0,\dotsc,\tau-1\}$. We will construct a compact subset $K\subset D$ with nonvoid interior such that the inequality above holds. We may suppose that $A$ has real Jordan form $R=T^{-1}AT$. In fact, writing $x=Tx^{\prime}$ one obtains$$x_{k+1}^{\prime}=T^{-1}ATx_{k}^{\prime}+T^{-1}Bu_{k}=Rx_{k}^{\prime}+B^{\prime}u_{k} \label{transform}$$ with $B^{\prime}:=T^{-1}B$. Then with $f^{\prime}(x^{\prime},u)=f(Tx^{\prime },u)=:f(x,u),K^{\prime}:=T^{-1}K$, and $D^{\prime}:=T^{-1}D$ the total invariance pressure $P_{inv}(f,K,Q)$ coincides with the total invariance pressure $P_{inv}(f^{\prime},K^{\prime},D^{\prime})$ of (\[transform\]). Consider a $\tau^{0}$-periodic control $u^{0}(\cdot)$ with $\tau^{0}$-periodic trajectory $\varphi(\cdot,x^{0},u^{0})$ as in the statement of the theorem, hence$$x^{0}=R^{\tau^{0}}x^{0}+\sum_{i=0}^{\tau^{0}-1}R^{\tau^{0}-i}B^{\prime}u_{i}. \label{periodic}$$ **Step 1:** Choose a basis $\mathcal{B}$ of $\mathbb{R}^{d}$ adapted to the real Jordan structure of $R$ and let $L_{1}(R),\dotsc,L_{r}(R)$ be the Lyapunov spaces of $R$, that is, the sums of the generalized eigenspaces corresponding to eigenvalues $\lambda$ with the absolute value $\left\vert \lambda\right\vert =\rho_{j}$. This yields the decomposition$$\mathbb{R}^{d}=L_{1}(R)\oplus\cdots\oplus L_{r}(R).$$ Let $d_{j}=\dim L_{j}(R)$ and denote the restriction of $R$ to $L_{j}(R)$ by $R_{j}$. Now take an inner product on $\mathbb{R}^{d}$ such that the basis $\mathcal{B}$ is orthonormal with respect to this inner product and let $\left\Vert \cdot\right\Vert $ denote the induced norm. **Step 2:** We fix some constants: Let $S_{0}$ be a real number which satisfies$$S_{0}>\sum\limits_{j=1}^{r}\max\{1,d_{j}\rho_{j}\}=\log\left\vert \det A^{+}\right\vert ,$$ and choose $\xi=\xi(S_{0})>0$ such that$$0<d\xi<S_{0}-\sum\limits_{j=1}^{r}\max\{1,d_{j}\rho_{j}\}$$ and such that $\rho_{j}<1$ implies $\rho_{j}+\xi<1$ for all $j$. Let $\delta\in(0,\xi)$. 
It follows that there exists a constant $c=c(\delta)\geq1$ such that for all $j\ $and for all $k\in\mathbb{N}$$$\left\Vert R_{j}^{k}\right\Vert \leq c(\rho_{j}+\delta)^{k}.$$ For every $m\in\mathbb{N}$ we define positive integers by$$M_{j}(m):=\left\{ \begin{array} [c]{ccc}\left\lfloor (\rho_{j}+\xi)^{m}\right\rfloor +1 & \text{if} & \rho_{j}\geq1\\ 1 & \text{if} & \rho_{j}<1 \end{array} \right.$$ and a function $\beta:\mathbb{N}\rightarrow(0,\infty)$ by$$\beta(m):=\max_{1\leq j\leq r}\left\{ (\rho_{j}+\delta)^{m}\frac{\sqrt{d_{j}}}{M_{j}(m)}\right\} ,m\in\mathbb{N}.$$ If $\rho_{j}<1$, then $\rho_{j}+\delta<1$ and $M_{j}(m)\equiv1$, and hence $(\rho_{j}+\delta)^{m}/M_{j}(m)$ converges to zero for $m\rightarrow\infty$. If $\rho_{j}\geq1$, we have $M_{j}(m)\geq(\rho_{j}+\xi)^{m}$ and hence $$(\rho_{j}+\delta)^{m}\frac{\sqrt{d_{j}}}{M_{j}(m)}\leq(\rho_{j}+\delta )^{m}\frac{\sqrt{d_{j}}}{(\rho_{j}+\xi)^{m}}=\left( \frac{\rho_{j}+\delta }{\rho_{j}+\xi}\right) ^{m}\sqrt{d_{j}}. \label{beta2}$$ Since $\delta\in(0,\xi)$, we have $\frac{\rho_{j}+\delta}{\rho_{j}+\xi}<1$ showing that also in this case $\beta(m)\rightarrow0$ for $m\rightarrow\infty$. Since we assume controllability of $(A,B)$ and $\tau^{0}\geq d$ there exists $C_{0}>0$ such that for every $x\in\mathbb{R}^{d}$ there is a control $u\in\mathcal{U}$ with$$\varphi(\tau^{0},x,u)=R^{\tau^{0}}x+\sum_{i=0}^{\tau^{0}-1}R^{\tau^{0}-i}B^{\prime}u_{i}=0\text{ and }\left\Vert u\right\Vert _{\infty}\leq C_{0}\left\Vert x\right\Vert . \label{Kawan5.9MODIFIED0}$$ The inequality follows by the inverse mapping theorem. For the corresponding trajectory we find a constant $C_{1}>0$ such that for $k\in\{1,\ldots,\tau ^{0}\}$$$\left\Vert \varphi(k,x,u)\right\Vert \leq\left\Vert R\right\Vert ^{k}\left\Vert x\right\Vert +\sum_{i=0}^{k-1}\left\Vert R\right\Vert ^{k-i}\left\Vert B^{\prime}\right\Vert C_{0}\left\Vert x\right\Vert \leq C_{1}\left\Vert x\right\Vert . \label{Kawan5.9MODIFIED}$$ For $b_{0}>0$ let $\mathcal{C}$ be the $d$-dimensional compact cube $\mathcal{C}$ in $\mathbb{R}^{d}$ centered at the origin with sides of length $2b_{0}$ parallel to the vectors of the basis $B$. Choose $b_{0}$ small enough such that $$K:=x^{0}+\mathcal{C}\subset D$$ and $\overline{B(u^{0}(k),Cb_{0})}\subset U$ for all $k\in\{0,\dotsc,\tau ^{0}\}$. This is possible, since $x^{0}\in\mathrm{int}D$ and all values $u^{0}(k)$ are in the interior of $U$. **Step 3.** Let $\varepsilon>0$ and $\tau=m\tau^{0}$ with $m\in \mathbb{N}$.  By Theorem \[theorem\_bounded\], the closure $\overline{D}$ is compact, hence for the continuous function $f$ on the compact set $\overline{D}\times U$ there is $\varepsilon_{1}>0$ such that for all $(x,u),(x^{\prime},u^{\prime})\in\overline{D}\times U$$$\max\left\{ \left\Vert x-x^{\prime}\right\Vert ,\left\Vert u-u^{\prime }\right\Vert \right\} <\varepsilon_{1}\text{ implies }\left\vert f(x,u)-f(x^{\prime},u^{\prime})\right\vert <\varepsilon. \label{one}$$ We may take $m\in\mathbb{N}$ large enough such that$$\frac{d}{\tau}\log2=\frac{d}{m\tau^{0}}\log2<\varepsilon. \label{two}$$ Furthermore, we may choose $b_{0}$ small enough such that$$C_{0}b_{0}<\varepsilon_{1}\text{ and }C_{1}b_{0}<\varepsilon_{1}. \label{three}$$ Partition $\mathcal{C}$ by dividing each coordinate axis corresponding to a component of the $j$th Lyapunov space $L_{j}(R)$ into $M_{j}(\tau)$ intervals of equal length. The total number of subcuboids in this partition of $\mathcal{C}$ is $\prod_{j=1}^{r}M_{j}(\tau)^{d_{j}}$. 
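As an illustration of the steering controls invoked in (\[Kawan5.9MODIFIED0\]), the following sketch computes a minimum-norm control driving a given state to the origin by solving the terminal constraint with a pseudo-inverse (NumPy assumed). The matrices are those of the example in Section 4, and the index convention is the standard one for $x_{k+1}=Ax_{k}+Bu_{k}$, which may differ by an index shift from the $R^{\tau-i}$ notation used above; the sketch is not part of the proof.

```python
import numpy as np

def steer_to_zero(A, B, x0, k):
    """Least-squares control (u_0, ..., u_{k-1}) driving x_{j+1} = A x_j + B u_j
    from x_0 = x0 to x_k = 0; minimum-norm, so it depends linearly on x0."""
    d, m = B.shape
    # x_k = A^k x0 + [A^{k-1} B, ..., A B, B] (u_0, ..., u_{k-1})
    Ck = np.hstack([np.linalg.matrix_power(A, k - 1 - i) @ B for i in range(k)])
    return (np.linalg.pinv(Ck) @ (-np.linalg.matrix_power(A, k) @ x0)).reshape(k, m)

A = np.array([[2.0, 0.0], [0.0, 0.5]])   # hyperbolic example from Section 4
B = np.array([[1.0], [1.0]])
x0 = np.array([0.05, -0.02])
u = steer_to_zero(A, B, x0, k=2)

x = x0.copy()
for ui in u:                              # simulate the recursion to verify x_k = 0
    x = A @ x + B @ ui
print(np.round(x, 12), np.abs(u).max() / np.linalg.norm(x0))
```

Since the control is a fixed linear function of the initial state, the ratio $\Vert u\Vert_{\infty}/\Vert x_{0}\Vert$ is bounded uniformly in $x_{0}$, which is the content of the bound $\Vert u\Vert_{\infty}\leq C_{0}\Vert x\Vert$.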
Next we will show that it suffices to take $\prod_{j=1}^{r}M_{j}(\tau)^{d_{j}}$ control functions to steer the system from all states in $x^{0}+\mathcal{C}$ back to $x^{0}+\mathcal{C}$ in time $\tau$ such that the controls are within distance $\varepsilon_{1}$ to $u^{0}$ and the corresponding trajectories remain within distance $\varepsilon_{1}$ from the trajectory $\varphi(\cdot,x^{0},u^{0})$. Let $y$ be the center of a subcuboid. By (\[Kawan5.9MODIFIED0\]) there exists $u=(u_{0},\ldots,u_{\tau^{0}-1})$ such that$$\varphi(\tau^{0},y,u)=0\text{ and }\left\Vert u\right\Vert _{\infty}\leq C_{0}\left\Vert y\right\Vert \leq C_{0}b_{0}<\varepsilon_{1}. \label{four}$$ For $k\geq t_{0}$ let $u_{k}=0$. Hence $\varphi(\tau,y,u)=0$ and $u(t)\in U$ for all $k\in\{0,\dotsc,\tau\}$. Using (\[periodic\]) and linearity, we find that $x^{0}+y$ is steered by $u^{0}+u$ in time $\tau=m\tau^{0}$ to $x^{0}$,$$\varphi(\tau,x^{0}+y,u^{0}+u)=\varphi(\tau,x^{0},u^{0})+\varphi(\tau ,y,u)=x^{0}. \label{periodic2}$$ Now consider an arbitrary point $x\in\mathcal{C}$. Then it lies in one of the subcuboids and we denote the corresponding center of this subcuboid by $y$ with associated control $u=u(y)$. We will show in **Step 4** that $u^{0}+u$ also steers $x^{0}+x$ back to $x^{0}+\mathcal{C}$ and in **Step 5** that the corresponding trajectory $\varphi(k,x^{0}+x,u^{0}+u)$ remains within distance $\varepsilon_{1}$ of $\varphi(k,x^{0},u^{0}),k\in\{0,\ldots,\tau\}$. **Step 4.** Observe that$$\left\Vert x-y\right\Vert \leq\frac{b_{0}}{M_{j}(\tau)}\sqrt{d_{j}}.$$ By (\[beta2\]) this implies that$$\left\Vert R^{\tau}x-R^{\tau}y\right\Vert \leq\left\Vert R_{j}^{m\tau^{0}}\right\Vert \left\Vert x-y\right\Vert \leq c(\rho_{j}+\delta)^{m\tau^{0}}\frac{b_{0}}{M_{j}(m\tau^{0})}\sqrt{d_{j}}\rightarrow0\text{ for }m\rightarrow\infty,$$ and hence for $m$ large enough $\left\Vert R^{\tau}x-R^{\tau}y\right\Vert \leq b_{0}$. This implies that the solution $\varphi(k,x^{0}+x,u^{0}+u),k\in \mathbb{N}$, satisfies for $m$ large enough by (\[periodic2\]) and linearity,$$\begin{aligned} & \left\Vert \varphi(\tau,x^{0}+x,u^{0}+u)-x^{0}\right\Vert \\ & =\left\Vert R^{\tau}(x^{0}+x)+\sum_{i=0}^{\tau-1}R^{\tau-i}B^{\prime}(u_{i}^{0}+u_{i})-x^{0}\right\Vert \\ & \leq\left\Vert R^{\tau}(x^{0}+x)-R^{\tau}(x^{0}+y)\right\Vert +\left\Vert R^{\tau}(x^{0}+y)+\sum_{i=0}^{\tau-1}R^{\tau-i}B^{\prime}(u_{i}^{0}+u_{i})-x^{0}\right\Vert \\ & \leq\left\Vert R^{\tau}x-R^{\tau}y\right\Vert +\left\Vert \varphi (\tau,x^{0}+y,u^{0}+u)-x^{0}\right\Vert \\ & \leq b_{0}+0.\end{aligned}$$ This shows that $\varphi(\tau,x^{0}+x,u^{0}+u)\in x^{0}+\mathcal{C}$ and it also follows that $\varphi(\tau,x^{0}+x,u^{0}+u)\in D$ for all $k\in \{0,1,\ldots,\tau\}$. **Step 5.** By linearity and formulas (\[Kawan5.9MODIFIED0\]), (\[Kawan5.9MODIFIED\]), and (\[three\]) we can estimate for $k\in \{0,1,\ldots,\tau^{0}\}$$$\begin{aligned} & \left\Vert \varphi(k,x^{0}+x,u^{0}+u)-\varphi(k,x^{0},u^{0})\right\Vert \\ & =\left\Vert R^{k}(x^{0}+x)+\varphi(k,0,u^{0}+u)-R^{k}x^{0}-\varphi (k,0,u^{0})\right\Vert \\ & =\left\Vert R^{k}x+\varphi(k,0,u)\right\Vert =\left\Vert \varphi (k,x,u)\right\Vert \leq C_{1}\left\Vert x\right\Vert \leq C_{1}b_{0}<\varepsilon_{1}.\end{aligned}$$ Together with (\[four\]) and (\[one\]) this shows that for $k\in \{0,1,\ldots,\tau\}$$$\left\vert f\left( \varphi(k,x^{0}+x,u^{0}+u),u_{k}^{0}+u_{k})-f(\varphi (k,x^{0},u^{0}),u_{k}^{0})\right) \right\vert <\varepsilon. 
\label{five}$$ **Step 6.** We have constructed $\prod_{j=1}^{r}M_{j}(\tau)^{d_{j}}$ control functions that allow us to steer the system from all states in $K=x^{0}+\mathcal{C}$ back to $x^{0}+\mathcal{C}$ in time $\tau$ and satisfy (\[five\]). By iterated concatenation of these control functions we obtain a totally $(n\tau,K,D)$-spanning set $\mathcal{S}$ for each $n\in\mathbb{N}$ with cardinality$$\#\mathcal{S=}\left( \prod_{j=1}^{r}M_{j}(\tau)^{d_{j}}\right) ^{n}=\left( \prod_{j:\rho_{j}\geq0}\left( \left\lfloor (\rho_{j}+\xi)^{\tau}\right\rfloor +1\right) ^{d_{j}}\right) ^{n}.$$ By (\[five\]) it follows that$$\begin{aligned} \log a_{n\tau}(f,K,Q) & \leq\log\left( \sum\nolimits_{(x,u)\in\mathcal{S}}e^{(S_{n\tau}f)(x,u)}\right) \\ & =\log\left( \sum\nolimits_{(x,u)\in\mathcal{S}}e^{(S_{n\tau}f)(x^{0},u^{0})}\cdot e^{(S_{n\tau}f)(x,u)-(S_{n\tau}f)(x^{0},u^{0})}\right) \\ & \leq\log\sum\nolimits_{(x,u)\in\mathcal{S}}e^{(S_{n\tau}f)(x^{0},u^{0})}+\log e^{\sum_{i=0}^{n\tau-1}\varepsilon}\\ & \leq\log\left( \#\mathcal{S\cdot}e^{(S_{n\tau}f)(x^{0},u^{0})}\right) +n\tau\varepsilon.\end{aligned}$$ This implies, using also (\[two\]),$$\begin{aligned} \frac{1}{n\tau}\log a_{n\tau}(f,K,Q) & \leq\frac{1}{\tau}\sum_{j:\rho _{j}\geq0}d_{j}\log(\left\lfloor e^{(\rho_{j}+\xi)\tau}\right\rfloor +1)+\frac{1}{n\tau}\sum_{i=0}^{n\tau-1}f(\varphi(i,x^{0},u^{0}),u_{i}^{0})+\varepsilon\\ & \leq\frac{1}{\tau}\sum_{j:\rho_{j}\geq0}d_{j}\log(2e^{(\rho_{j}+\xi)\tau })+\frac{1}{\tau^{0}}\sum_{i=0}^{\tau^{0}-1}f(\varphi(i,x^{0},u^{0}),u_{i}^{0})+\varepsilon\\ & \leq\frac{d}{\tau}\log2+\frac{1}{\tau}\sum_{j:\rho_{j}\geq0}d_{j}(\rho _{j}+\xi)\tau+\frac{1}{\tau^{0}}\sum_{i=0}^{\tau^{0}-1}f(\varphi(i,x^{0},u^{0}),u_{i}^{0})+\varepsilon\\ & \leq\varepsilon+d\xi+\sum_{j:\rho_{j}\geq0}d_{j}\rho_{j}+\frac{1}{\tau^{0}}\sum_{i=0}^{\tau^{0}-1}f(\varphi(i,x^{0},u^{0}),u_{i}^{0})+\varepsilon\\ & <S_{0}+\frac{1}{\tau^{0}}\sum_{i=0}^{\tau^{0}-1}f(\varphi(i,x^{0},u^{0}),u_{i}^{0})+2\varepsilon.\end{aligned}$$ Since $\varepsilon$ can be chosen arbitrarily small and $S_{0}$ arbitrarily close to $\log\left\vert \det A^{+}\right\vert $, the assertion of the proposition follows. For the invariance pressure, we obtain the following consequence. \[cor\_upper\]Consider a linear control system of the form (\[lin\]) and assume that the pair $(A,B)$ is controllable with a hyperbolic matrix $A$. Let $D$ be the unique control set with nonvoid interior and let $f\in C(U,\mathbb{R})$. Then for every compact set $K\subset D$ with nonvoid interior the invariance pressure satisfies$$P_{inv}(f,K,D)\leq\log\left\vert \det A^{+}\right\vert +\inf_{(\tau,x,u)}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i}),$$ where the infimum is taken over all $\tau\in\mathbb{N}$ with $\tau\geq d$ and all $\tau$-periodic controls $u$ with a $\tau$-periodic trajectory $\varphi(\cdot,x,u)$ in $\mathrm{int}D$ such that $u_{i}\in\mathrm{int}U$ for $i\in\{0,\dotsc,\tau-1\}$. 
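Before the proof, note that the quantity $\log\left\vert \det A^{+}\right\vert$ appearing in this bound is immediate to evaluate from the spectrum of $A$. A minimal sketch (NumPy assumed; the matrix is the example from Section 4):

```python
import numpy as np

def log_unstable_determinant(A):
    """log|det A^+| = sum of max(0, log|lambda|) over the eigenvalues of A
    (counted with algebraic multiplicity); A is assumed invertible."""
    return float(np.sum(np.maximum(0.0, np.log(np.abs(np.linalg.eigvals(A))))))

A = np.array([[2.0, 0.0], [0.0, 0.5]])      # example from Section 4
print(log_unstable_determinant(A))          # log 2 ~ 0.6931
```

For the two-dimensional example of Section 4 this gives $\log2$, so for a potential $f$ the upper bound reads $\log2+\inf_{(\tau,x,u)}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i})$.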
The assertion follows from Proposition \[prop\_upper\_tot\], since every compact subset of $D$ is contained in a compact subset $K$ of $D$ with nonvoid interior and the invariance pressure is independent of the choice of such a set $K$ by Proposition \[compact\]$.$ Kawan [@Kawa11b Theorem 3.1] derives for the outer invariance entropy $h_{inv,out}(K,Q)$, which is a lower bound for the invariance entropy, the formula$$h_{inv,out}(K,Q)=\log\left\vert \det A^{+}\right\vert .$$ Then, for the potential $f=0$, Corollary \[cor\_upper\] shows that the invariance entropy satisfies$$h_{inv}(K,Q)\leq\log\left\vert \det A^{+}\right\vert =h_{inv,out}(K,Q)\leq h_{inv}(K,Q)$$ implying that $$h_{inv}(K,Q)=\log\left\vert \det A^{+}\right\vert . \label{h_inv}$$ We proceed to prove a lower bound for the invariance pressure. Recall that with respect to $A$ the state space $\mathbb{R}^{d}$ can be decomposed into the direct sum of the center-stable subspace $E^{sc}$ and the unstable subspace $E^{u}$ which are the direct sums of all generalized real eigenspaces for the eigenvalues $\lambda$ with $\left\vert \lambda\right\vert \leq1$ and $\left\vert \lambda\right\vert >1$, resp. Let $\pi:\mathbb{R}^{d}\rightarrow E^{u}$ be the projection along $E^{sc}$. \[prop\_lower\]Let $K\subset D$ be compact and assume that both $K$ and $D$ have positive and finite Lebesgue measure. Then for every $f\in C(U,\mathbb{R})$ $$P_{inv}(f,K,Q)\geq\log\left\vert \det A^{+}\right\vert +\inf_{(\tau,x,u)}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i}),$$ where the infimum is taken over all $(\tau,x,u)\in\mathbb{N}\times D\times\mathcal{U}$ with $\tau\geq d$ and $\pi\varphi(i,x,u)\in\pi D$ for $i\in\{0,1,\dotsc,\tau-1\}$. Every $(\tau,K,Q)$-spanning set $\mathcal{S}$ satisfies$$\log\sum_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}\geq\log\inf_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}+\log\#\mathcal{S}. \label{24b}$$ First suppose that the unstable subspace of $A$ is trivial, $E^{u}=0$. Formula (\[h\_inv\]) implies that $$\underset{\tau\rightarrow\infty}{\overline{\lim}}\frac{1}{\tau}\inf\left\{ \log\#\mathcal{S}\left\vert \mathcal{S}\text{ }(\tau,K,Q)\text{-spanning}\right. \right\} =h_{inv}(K,D)=\log\left\vert \det A^{+}\right\vert =0.$$ Now (\[tip\_alt\]) and (\[24b\]) implies$$\begin{aligned} & P_{inv}(f,K,Q)=\underset{\tau\rightarrow\infty}{\overline{\lim}}\frac {1}{\tau}\inf\left\{ \log\sum_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}\left\vert \mathcal{S}\text{ }(\tau,K,Q)\text{-spanning}\right. \right\} \\ & \geq\underset{\tau\rightarrow\infty}{\overline{\lim}}\frac{1}{\tau}\inf\left\{ \log\inf_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}+\log\#\mathcal{S}\left\vert \mathcal{S}\text{ }(\tau,K,Q)\text{-spanning}\right. \right\} \\ & \geq\underset{\tau\rightarrow\infty}{\overline{\lim}}\frac{1}{\tau}\inf\left\{ \inf_{u\in\mathcal{S}}\sum_{i=0}^{\tau-1}f(u_{i})\left\vert \mathcal{S}\text{ }(\tau,K,Q)\text{-spanning}\right. \right\} +0\\ & \geq\underset{\tau\rightarrow\infty}{\overline{\lim}}\inf_{u\in\mathcal{S}}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i})\geq\inf_{u\in\mathcal{S}}\frac {1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i}).\end{aligned}$$ Since for $u\in\mathcal{S}$ there is $x\in K$ with $\pi\varphi(i,x,u)=0\in\pi D$ for $i\in\{0,1,\dotsc,\tau-1\}$, the assertion for trivial unstable subspace $E^{-}$ follows. Now suppose that $E^{u}$ is nontrivial. We may assume that $P_{inv}(f,K,Q)<\infty$ and hence and all considered spanning sets are countable. 
Note that by invariance of $E^{sc}$ and $E^{u}$ the induced system on $E^{u}$ is well defined with trajectories $\pi\varphi(k,x,u),k\in\mathbb{N}$. For each $u$ in a $(\tau,K,D)$-spanning set $\mathcal{S}$ define $$\pi K_{u}:=\{x\in\pi K\left\vert \pi\varphi(i,x,u)\in\pi D,i=1,\dotsc ,\tau-1\right. \}.$$ Thus $\pi K={\textstyle\bigcup\nolimits_{u\in\mathcal{S}}} \pi K_{u}$. Since $D$ is measurable, each set $\pi K_{u}$ is measurable as the countable intersection of measurable sets, $$\pi K_{u}=\pi K\cap\bigcap_{t=0}^{\tau-1}\left( \pi\varphi_{t,u}\right) ^{-1}(D).$$ We denote the Lebesgue measure in $\mathbb{R}^{d}$ by $\mu^{d}$ and the induced measure on $E^{u}$ by $\mu$. The linear part of the affine-linear map $\pi\varphi_{\tau,u}(x)$ is given by $(A^{+})^{\tau}$, hence it follows that $$\mu(\pi D)\geq\mu(\pi\varphi_{\tau,u}(\pi K_{u}))=\int\limits_{\pi \varphi_{\tau,u}(\pi K_{u})}\mathrm{d}\mu=\int\limits_{\pi K_{u}}\left\vert \det(A^{+})^{\tau}\right\vert \mathrm{d}\mu=\mu(\pi K_{u})\left\vert \det A^{+}\right\vert ^{\tau}.$$ Abbreviate $~\beta(\tau)=\inf_{(x,u)}(S_{\tau}f)(u)$, where the infimum is taken over all $(\pi x,u)\in\pi K\times\mathcal{U}$ with $\pi\varphi (i,x,u)\in\pi D$ for $i=0,\dotsc,\tau-1$. Then we find $$\begin{aligned} e^{\beta(\tau)}\mu(\pi K) & \leq\sum_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}\mu(\pi K_{u})\leq\sup_{u\in\mathcal{S}}\mu(\pi K_{u})\sum_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}\\ & \leq\frac{\mu(\pi D)}{\left\vert \det A^{+}\right\vert ^{\tau}}\sum _{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}.\end{aligned}$$ Since this holds for every $(\tau,K,D)$-spanning set $\mathcal{S}$ and $\mu^{d}(D)>0$ implies $\mu(\pi D)>0$, we find $$a_{\tau}(f,K,D)=\inf\{\sum_{u\in\mathcal{S}}e^{(S_{\tau}f)(u)}\left\vert \mathcal{S}\text{ }(\tau,K,D)\text{-spanning}\right. \}\geq\frac{\mu(\pi K)}{\mu(\pi D)}e^{\beta(\tau)}\left\vert \det A^{+}\right\vert ^{\tau},$$ implying$$\begin{aligned} & P_{inv}(f,K,D)=\underset{\tau\rightarrow\infty}{\overline{\lim}}\frac {1}{\tau}\log a_{\tau}(f,K,D)\geq\inf_{\tau}\frac{1}{\tau}\beta(\tau )+\log\left\vert \det A^{+}\right\vert \\ & =\inf_{(\tau,x,u)}\frac{1}{\tau}(S_{\tau}f)(u)+\log\left\vert \det A^{+}\right\vert ,\end{aligned}$$ where the infimum is taken over all $(\tau,x,u)\in\pi K\times\mathcal{U}$ with $\pi\varphi(i,x,u)\in\pi D$ for $i=0,\dotsc,\tau-1$. The next theorem is the main result of this paper. For linear discrete-time control systems it provides a formula for the invariance pressure of control sets. \[main\]Consider a linear control system of the form (\[lin\]) and assume that the system without control restriction is controllable in $\mathbb{R}^{d}$, the matrix $A$ is hyperbolic, and the control range $U$ is a compact convex neighborhood of the origin with $U=\overline{\mathrm{int}U}$. Let $D$ be the unique control set with nonvoid interior. Then $D$ is bounded and for every compact set $K\subset D$ with nonvoid interior and every potential $f\in C(U,\mathbb{R})$, the invariance pressure is given by $$P_{inv}(f,K,D)=\log\left\vert \det A^{+}\right\vert +\min_{u\in U}f(u)=h_{inv}(K,D)+\min_{u\in U}f(u).$$ Theorems \[theorem\_existence\] and \[theorem\_bounded\] imply existence, uniqueness, and boundedness of the control set $D$. Formula (\[h\_inv\]) implies that $h_{inv}(K,D)=\log\det A^{+}$ showing the second equality above. 
Proposition \[prop\_lower\] and Corollary \[cor\_upper\] yield the bounds,$$\inf_{(\tau^{\prime},x^{\prime},u^{\prime})}\frac{1}{\tau}\sum_{i=0}^{\tau^{\prime}-1}f(u_{i})\leq P_{inv}(f,K,Q)-\log\left\vert \det A^{+}\right\vert \leq\inf_{(\tau,x,u)}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i}), \label{M0}$$ where the first infimum is taken over all $(\tau^{\prime},x^{\prime},u^{\prime})\in\mathbb{N}\times D\times\mathcal{U}$ with $\tau^{\prime}\geq d$ and $\pi\varphi(i,x^{\prime},u^{\prime})\in\pi D$ for $i\in\{0,\dotsc ,\tau^{\prime}-1\}$ and the second infimum is taken over all $\tau \in\mathbb{N}$ with $\tau\geq d$ and all $\tau$-periodic controls $u$ with a $\tau$-periodic trajectory $\varphi(\cdot,x,u)$ in $\mathrm{int}D$ such that $u_{i}\in\mathrm{int}U$ for $i\in\{0,\dotsc,\tau-1\}$. Note that there is a control value $u^{0}\in U$ with $f(u^{0})=\min_{u\in U}f(u)$. Consider $$f(u^{0})=\frac{1}{d}\sum_{i=0}^{d-1}f(u^{0})\leq\inf_{(\tau^{\prime},x^{\prime},u^{\prime})}\frac{1}{\tau^{\prime}}\sum_{i=0}^{\tau^{\prime}-1}f(u_{i}^{\prime}), \label{M2}$$ where the infimum is taken over all triples $(\tau^{\prime},x^{\prime },u^{\prime})\in\mathbb{N}\times K\times\mathcal{U}$ with $\tau^{\prime}\geq d$ and $\pi\varphi(i,x^{\prime},u^{\prime})\in\pi D$ for $i\in\{0,\dotsc ,\tau^{\prime}-1\}$. Let $\varepsilon>0$. Then there is a control function $u^{1}$ with values in a compact subset of $\mathrm{int}U$ such that$$\frac{1}{d}\sum_{i=0}^{d-1}f(u_{i}^{1})\leq\frac{1}{d}\sum_{i=0}^{d-1}f(u^{0})+\varepsilon. \label{M3}$$ By hyperbolicity of $A$ the matrix $I-A^{d}$ is invertible, and hence there exists a unique solution $x^{1}$ of $$\left( I-A^{d}\right) x^{1}=\varphi(d,0,u^{1}).$$ Now by linearity $$x^{1}=A^{d}x^{1}+\varphi(d,0,u^{1})=\varphi(d,x^{1},u^{1}).$$ Since the values of $u^{1}$ are in $\mathrm{int}U$ and $(A,B)$ is controllable, it follows that a neighborhood of $x^{1}$ can be reached in time $d$ from $x^{1}$. Analogously, $x^{1}$ can be reached from every point in a neighborhood of $x^{1}$ in time $d$. Hence in the intersection of these two neighborhoods every point can be steered in time $2d$ into every other point. This shows that $x^{1}$ is in the interior of the control set $D$, and the corresponding trajectory $\varphi(i,x^{1},u^{1}),i\in\{0,\dotsc,d-1\}$, remains by Proposition \[proposition\_in\] in the interior of $D$. Extending $u^{1}$ to a $d$-periodic control again denoted by $u^{1}$ we find that the control-trajectory pair $(u^{1}(\cdot),\varphi(\cdot,x^{1},u^{1}))$ is $d$-periodic, the trajectory is contained in $\mathrm{int}D$ and all values $u_{i}^{1}$ are in a compact subset of $\mathrm{int}U$. 
It follows that$$\begin{aligned} & \inf_{(\tau^{\prime},x^{\prime},u^{\prime})}\frac{1}{\tau^{\prime}}\sum_{i=0}^{\tau^{\prime}-1}f(u_{i}^{\prime})\overset{(\ref{M2})}{\geq}f(u^{0})=\frac{1}{d}\sum_{i=0}^{d-1}f(u^{0})\overset{(\ref{M3})}{\geq}\frac {1}{d}\sum_{i=0}^{d-1}f(u_{i}^{1})-\varepsilon\\ & \geq\inf_{(\tau,x,u)}\frac{1}{\tau}\sum_{i=0}^{\tau-1}f(u_{i})-\varepsilon,\end{aligned}$$ where the first infimum is taken over all triples $(\tau^{\prime},x^{\prime },u^{\prime})\in\mathbb{N}\times K\times\mathcal{U}$ with $\tau^{\prime}\geq d$ and $\pi\varphi(i,x^{\prime},u^{\prime})\in\pi D$ for $i\in\{0,\dotsc ,\tau^{\prime}-1\}$ and the second infimum is taken over all $(\tau ,x,u)\in\mathbb{N}\times D\times\mathcal{U}$ such that the control-trajectory pair $(u,\varphi(\cdot,x,u))$ is $\tau$-periodic with $\tau\geq d$, the trajectory is contained in $\mathrm{int}D$, and the control values $u_{i}$ are in a compact subset of $\mathrm{int}U$. Using this in (\[M0\]) we get $$\begin{aligned} \inf_{(\tau^{\prime},x^{\prime},u^{\prime})}\frac{1}{\tau^{\prime}}\sum _{i=0}^{\tau^{\prime}-1}f(u_{i}^{\prime}) & \leq P_{inv}(f,K,Q)-\log \left\vert \det A^{+}\right\vert \leq f(u^{0})+\varepsilon\\ & \leq\inf_{(\tau^{\prime},x^{\prime},u^{\prime})}\frac{1}{\tau^{\prime}}\sum_{i=0}^{\tau^{\prime}-1}f(u_{i}^{\prime})+\varepsilon.\end{aligned}$$ Since $\varepsilon>0$ is arbitrary, the assertion of the theorem follows. For partially hyperbolic control systems, Da Silva and Kawan prove in [@KawaDS18] relations between invariance entropy and topological pressure for the unstable determinant. In contrast to our framework, they consider the topological pressure (with respect to the fibers) of associated random dynamical systems obtained by endowing the space of controls with shift invariant probability measures. [99]{} F. Colonius, J.A.N. Cossich and A. Santana, Invariance pressure for control systems, J. Dyn. Diff. Equations 31(1) (2019), 1–23. F. Colonius, A. Santana and J.A.N. Cossich, Invariance pressure of control sets, SIAM J. Control Optim. 56(6) (2018), 4130-4147. F. Colonius, J.A.N. Cossich and A. Santana, Bounds for invariance pressure (2019), submitted. F. Colonius, Invariance entropy, quasi-stationary measures and control sets, Discrete and Continuous Dynamical Systems (DCDS-A) 38(4) (2018), 2093-2123. D. Hinrichsen and A.J. Pritchard, Mathematical Systems Theory, Vol. 2, Springer, 2020, in preparation. A. Da Silva and C. Kawan, Invariance entropy of hyperbolic control sets, Discrete and Continuous Dynamical Systems (DCDS-A) 36(1) (2016), 97-136. Y. Huang and X. Zhong, Carathéodory–Pesin structures associated with control systems, Systems and Control Letters 112 (2018), pp. 36-41. C. Kawan, Invariance entropy of control sets, SIAM J. Control Optim. 49 (2011), 732-751. C. Kawan, Invariance Entropy for Deterministic Control Systems. An Introduction. LNM Vol. 2089, Springer, Berlin, 2013. C. Kawan and A. Da Silva, Invariance entropy for a class of partially hyperbolic sets, Mathematics of Control, Signals and Systems (2018), https://doi.org/10.1007/s00498-018-0224-2. C. Kawan and A. Da Silva, Lyapunov exponents and partial hyperbolicity of chain control sets on flag manifolds, Israel Journal of Mathematics (2019), DOI: 10.1007/s11856-019-1893-3. G. Nair, R. J. Evans, I. Mareels, and W. Moran, Topological feedback entropy and nonlinear stabilization, IEEE Trans. Aut. Control 49 (2004), 1585–1597. M. Patrão and L. San Martin, Semiflows on topological spaces: Chain transitivity and semigroups, J. Dyn. 
Diff. Equations, 19 (2007), 155–180. E. Sontag, Mathematical Control Theory. Deterministic Finite Dimensional Systems, 2nd ed., Springer-Verlag, New York 1998. Tao Wang, Yu Huang, and Hai-Wei Sun, Measure-theoretic invariance entropy for control systems, SIAM J. Control Optim. 57(1) (2019), 310-333. J. Wing and C. A. Desoer, The multiple-input minimal-time regulator problem (general theory), IEEE Trans. Automatic Control AC-8(2) (1963), 125-136. F. Wirth, Dynamics and controllability of nonlinear discrete-time control systems, IFAC Proceedings Volumes 31 (1998), 267-272. X. Zhong, Y. Huang, Invariance pressure dimensions for control systems, J. Dyn. Diff. Equations (2018), https://doi.org/10.1007/s10884-018-9701-z. [^1]: We have announced some results of the present paper in Invariance pressure for linear discrete-time systems, Proceedings of the 2019 IEEE Information Theory Workshop (IEEE ITW 2019), Visby, Sweden, 24-26 Aug. 2019.
--- abstract: 'Caching at the wireless edge can be used to keep up with the increasing demand for high-definition wireless video streaming. By prefetching popular content into memory at wireless access points or end-user devices, requests can be served locally, relieving strain on expensive backhaul. In addition, using network coding allows the simultaneous serving of distinct cache misses via common coded multicast transmissions, resulting in significantly larger load reductions compared to those achieved with traditional delivery schemes. Most prior works simply treat video content as fixed-size files that users would like to fully download. This work is motivated by the fact that video can be coded in a scalable fashion and that the decoded video quality depends on the number of layers a user receives in sequence. Using a Gaussian source model, caching and coded delivery methods are designed to minimize the squared error distortion at end-user devices in a rate-limited caching network. The framework is very general and accounts for heterogeneous cache sizes, video popularities and user-file play-back qualities. As part of the solution, a new decentralized scheme for lossy cache-aided delivery subject to preset user distortion targets is proposed, which further generalizes prior literature to a setting with file heterogeneity.' author: - bibliography: - 'References2.bib' title: '[Rate-Distortion-Memory Trade-offs in Heterogeneous Caching Networks]{}' --- Caching networks, coded multicast, scalable coding, successive refinement, lossy source coding Introduction ============  \[sec: Introduction\] With the recent explosive growth in cellular video traffic, wireless operators are heavily investing in making infrastructural improvements such as increasing base station density and offloading traffic to Wi-Fi. Caching is a technique to reduce traffic load by exploiting the high degree of asynchronous content reuse and the fact that storage is cheap and ubiquitous in today’s wireless devices [@molisch14caching]. During off-peak periods when network resources are abundant, popular content can be stored at the wireless edge, so that peak hour demands can be met with reduced access latencies and bandwidth requirements. The simplest form of caching is to store the most popular video files at every edge cache [@wang14cache]. Requests for popular cached files can then be served locally, while cache misses need to be served by the base station, achieving what is referred to as a local caching gain. However, replicating the same content on many devices can result in an inefficient use of the aggregate cache capacity [@golrezaei12femtocaching]. In fact, recent studies [@maddah14fundamental; @maddah14decentralized; @ji15order; @yu2017characterizing] have shown that making users store different portions of the video files creates coded multicast opportunities that enable a global caching gain. In [@yu2017characterizing], the memory-rate trade-off for the worst-case and average demand is characterized within a factor of two of an information theoretic lower bound for uniformly popular files. Caching networks have been extended to various settings including setting with random demands [@niesen2017coded; @ji15order], online caching [@pedarsani2016online], noisy channels [@bidokhti2018noisy], and correlated content [@ITjournal; @JSAC2018]. A comprehensive review of existing work on caching networks can be found in [@paschos2018role]. 
While existing work on wireless caching is motivated by video applications, the majority do not exploit specific properties of video in the caching and delivery phases. The cache-aided delivery schemes available in literature are based on fixed-to-variable source encoding, designed to minimize the aggregate rate on the shared link so that the requested files are recovered in a lossless manner [@maddah14fundamental; @ji15order; @yu2017characterizing; @maddah14decentralized]. However, all video coders allow for lossy recovery [@wang2002video]. In particular, in scalable video coding (SVC) [@SVC], video files are encoded into layers such that the base layer contains the lowest quality level and additional enhancement layers allow successive improvement of the video streaming quality. SVC strategies are especially suitable for heterogeneous wired and wireless networks, since they encode video into a scalable bitstream such that video reconstructions of different spatial and temporal resolutions, and hence different qualities, can be generated by simply truncating the scalable bitstream. This scalability accommodates network requirements such as bandwidth limitations, user device capability, and quality-of-service restrictions in video streaming applications [@sun2007overview]. In this work, we consider a lossy cache-aided network where the caches are used to enhance video reconstruction quality at user devices. We consider a scenario in which users store compressed files at different encoding rates (e.g., video layers in SVC). Upon delivery of requests, depending on the available network resources, users receive additional layers that successively refine the reconstruction quality. By exploiting scalable compression, we investigate the fundamental limits in caching networks with throughput limitations. We allow users to have different preferences in reconstruction quality for each library file, and assume that files have possibly different distortion-rate functions. These assumptions further account for the diversity of multimedia applications being consumed in wireless networks (e.g., YouTube videos vs 3D videos or augmented reality applications), and with respect to requesting users’ device capabilities (e.g., 4K vs 1080p resolution). Our goal is to design caching schemes that, for a given broadcast rate, minimize the average distortion experienced at user devices. Related Work ------------ As discussed above, most literature on caching considers lossless recovery of files with the goal of minimizing the total rate transmitted over the shared link, in order to recover all requested fixed-size video files in whole [@maddah14fundamental; @maddah14decentralized; @ji14average; @ji15order; @yu2017characterizing]. There are only a few works that study the lossy cache-aided broadcast network [@timo2016rate; @yang2018coded; @ibrahim2018coded]. In [@timo2016rate] the authors study the delivery rate, cache capacity and reconstruction distortion trade-offs in a network with arbitrarily correlated sources for the single-user network and some special cases of a two-user problem. Similarly to this paper, [@yang2018coded] and [@ibrahim2018coded] assume successively refinable sources in a setting where receivers have heterogeneous distortion requirements. In [@yang2018coded], the authors study the problem of minimizing the worst-case delivery rate for Gaussian sources and heterogeneous distortion requirement at the users. 
They characterize the optimal delivery rate for the two-file two-user case, and propose efficient centralized and decentralized caching schemes based on successive refinement coding for the general case. The work in [@ibrahim2018coded] extends [@yang2018coded] to a setting where the server not only designs the users’ cache contents, but also optimizes their cache sizes subject to a total memory budget. Contributions ------------- Our work differs from [@timo2016rate; @yang2018coded; @ibrahim2018coded] in a number of ways. Compared to [@timo2016rate] which considers a single-cache network, we have a large network with arbitrary number of receivers, each equipped with a cache memory of different capacity. The works in [@yang2018coded; @ibrahim2018coded] minimize the worst-case rate transmitted over the broadcast link for a set of predetermined reconstruction distortion requirements at each user, while we minimize the expected distortion across the network subject to a given broadcast rate for a more general setting as elaborated below. Our main contributions are summarized as follows: 1. We formulate the problem of efficient lossy delivery of sources over a heterogeneous rate-limited broadcast caching network via information-theoretic tools, and study the trade-off between user cache sizes, broadcast rate and the expected reconstruction distortion across users and demands. We allow for sources to have different distortion-rate functions, and for users to have different cache sizes and different demand distributions. 2. We propose a class of cache-aided delivery schemes, in which, to limit the computational complexity and reduce the communication overhead, the sender only takes into account users’ local cached content during the delivery phase, and generates the transmit message independently for each receiver without exploring multicast coding opportunities. We refer to this scheme, presented in Sec. \[sec: Unicast\], as the *Local Cache-aided Unicast (LC-U)* scheme. We show that the optimal caching policy in LC-U admits a reverse water-filling type solution, which can be implemented locally and independently across users, without the need of global coordination. 3. We propose another class of schemes in Sec. \[sec: Multicast\] referred to as the *Cooperative Cache-aided Coded Multicast (CC-CM)* scheme. In CC-CM, the sender designs the caching and delivery phases jointly across all receivers based on global network knowledge (user cache contents and demand distributions, and file rate-distortion functions), and compresses the files accordingly. In this scheme global network knowledge is used to fill user caches and to construct codes that fully exploit the multicast nature of a wireless system. 4. In Sec. \[sec:RAP\], we present a coded delivery scheme that can be adopted by CC-CM to implement the caching phase and to deliver a portion of the multicast message. We refer to this scheme, which is a generalization of the scheme proposed in [@ji15order] to a setting with heterogeneous cache sizes, demand distributions, and where users are interested in receiving possibly [degraded versions (different-length portions) of a given file in the library]{}, as the [*Random Fractional caching with Greedy Constrained Coloring (RF-GCC)*]{}. We provide upper bounds on the per-demand and average delivery rates achieved with RF-GCC. 
We note that RF-GCC allows the generalization of the problem studied in [@yang2018coded] where (i) files have different distortion-rate functions, and (ii) users have different reconstruction distortion targets for each library file. When specialized to the setting in [@yang2018coded], our results show that RF-GCC achieves equal or better worst-case delivery rate compared to the decentralized scheme proposed in [@yang2018coded]. 5. In Sec. \[sec:RAP-GCCOptimization\], we describe how RF-GCC presented in Sec. \[sec:RAP\] can be used to deliver part of the transmitted message in CC-CM, introduced in Sec. \[sec: Multicast\]. The remaining part is delivered via unicast, and based on these two components we characterize the rate-distortion-memory trade-off achieved with CC-CM. In Sec. \[sec:Simulations\], we numerically show that CC-CM offers notable performance improvements over LC-U in terms of average file reconstruction distortion. System Model and Problem Statement ==================================  \[sec: ProblemSetting\] Source Model {#subsec:source} ------------ Consider a library composed of $N$ independent files indexed by $\{1,\dots,N \} \triangleq[N]$ and generated by an $N$-component memoryless source (N-MS) over finite alphabets $\mathcal W_1,\dots,\mathcal W_N$ with a pmf $p(w_1,\dots,w_N)= p(w_1)\cdots p(w_N)$. For a block length $F$, file $n\in[N]$ is represented by a sequence $W_n^F = (W_{n1},\dots, W_{nF})$, where $W_n^F \in {\mathcal W}_n^F $. For a given reconstruction alphabet $\widehat{\mathcal W}_n$, an estimate of file $W_n^F$, $n\in[N]$, is represented by ${\widehat W}_n^F\in\widehat{\mathcal W}_n^F$, and the distortion between the file and its reconstruction is measured by a single letter distortion function $D_n: \mathcal W_n \times \widehat{\mathcal W}_n \rightarrow {\mathbb R}^{+}$, as $D_n( {W}_{n}^F,\widehat{W}_{n}^F) = \frac{1}{F} \sum\limits_{i=1}^F D_n( {W}_{n,i},\widehat{W}_{n,i})$. We consider successively refinable sources, as defined in [@SuccessiveRefin], where each source can be compressed in multiple stages such that the optimal distortion is achieved at each stage without incurring rate loss relative to its single-description representation. Specifically, in the case of two stages, consider a first description of the file $W_n^F$ compressed at rate $R^{(1)}$ bits/source-sample incurring distortion $D^{(1)}$, and an additional description that is compressed at rate $R^{(2)}-R^{(1)}$ bits/source-sample, such that the reconstruction resulting from the two-stage description has distortion $D^{(2)} \leq D^{(1)}$. Then, the underlying N-MS is successively refinable if it is possible to construct codes such that $D^{(1)} = D(R^{(1)})$ and $D^{(2)} = D(R^{(2)})$, where $D(R)$ denotes the source distortion-rate function. This suggests that the descriptions at each stage are optimal and the distortion-rate limit at both stages can be simultaneously achieved. Without loss of generality, and based on the fact that Gaussian sources with squared error distortion are successively refinable, we assume that the source distribution is Gaussian with variance $\sigma_{n}^{2}$ and distortion-rate function $D_{n}(r) = \sigma_{n}^{2} 2^{-2r}$ [@Cover]. Note that the compression setting considered in this paper is applicable to a video streaming application, in which each file represents a video segment compressed using SVC [@SVC].
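As a concrete numerical instance of the two-stage refinement above (the numbers are purely illustrative), take a unit-variance Gaussian source and $R^{(1)}=1$, $R^{(2)}=2$ bits/source-sample:
$$D^{(1)} = 2^{-2\cdot 1} = \tfrac{1}{4}, \qquad D^{(2)} = 2^{-2\cdot 2} = \tfrac{1}{16} = D(R^{(2)}),$$
that is, the additional $R^{(2)}-R^{(1)}=1$ bit/source-sample refines the first description down to the same distortion that a single 2 bits/source-sample description would achieve, so no rate is lost by encoding in stages.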
In SVC, the single-stream video is encoded into multiple components, referred to as [*layers*]{}, such that the scalable video content is a combination of one [*base layer*]{} and multiple additional [*enhancement layers*]{}. The base layer contains the lowest spatial, temporal and quality representation of the video, while enhancement layers can improve the quality of the video file reconstructed at the receiver. Note that an enhancement layer is useless unless the receiver has access to the base layer and all preceding enhancement layers. The reconstructed video quality (distortion) in SVC depends on the total number of layers received in sequence. Cache-Aided Content Distribution Model -------------------------------------- Consider a cache-aided broadcast system, where one sender (e.g., base station) is connected through an error-free rate-limited shared link to $K$ receivers (e.g., access points or user devices). The sender has access to a content library generated by an $N$-MS source as described in Sec. \[subsec:source\]. Receiver $k\in[K]$ has a cache of size $M_{k}$ bits/sample, or equivalently, $M_{k}F$ bits, as shown in Fig. \[fig: System\]. Receiver $k\in[K]$ requests files from the library independently according to demand distribution $\qbf_k = (q_{k,1},\dots,q_{k,N})$, assumed to be known at the sender, where $q_{k,n} \in [0,1]$ for all $n\in[N]$, $\sum_{n=1}^N q_{k,n} = 1$, and $q_{k,n}$ denotes the probability that receiver $k$ requests file $n$. The cache-aided content distribution system operates in two phases: - [[*Caching Phase:*]{}]{} This phase occurs during a period of low network traffic. In this phase, all receivers have access to the entire library for filling their caches. Designing the cache content can be done locally by the receivers based on their local information, or globally in a cooperative manner either directly by the sender, or by the receiver itself based on information from the overall network. As in [@maddah14fundamental; @maddah14decentralized; @ji14average; @ji15order], we assume that library files and their popularity change at a much slower time-scale compared to the file delivery time-scale. - [[*Delivery Phase:*]{}]{} After the caching phase, only the sender has access to the library and the network is repeatedly used in a time slotted fashion. At the beginning of each time slot, the sender is informed of the demand realization vector, denoted by $\dbf = (d_1,\dots,d_K) \in \mathfrak D\equiv [N]^K$, where $d_k \in[N]$ denotes the index of the file requested by receiver $k\in [K]$. ![Caching is used for reducing the distortion of requested content in a broadcast network.[]{data-label="fig: System"}](networkdist.pdf){width="0.5\linewidth"} The goal of this paper is to design caching and delivery strategies, referred to as [*caching schemes*]{}, that result in the lowest expected distortion across the network, taken over the source distribution and demand distributions, under the condition that the rate (measured in bits/sample as defined in ) required to satisfy the demand is within a given rate budget $R$, for given receiver cache capacities $M_1,\dots,M_K$. As a result, when a file from the library is requested, we allow for different versions of the file, encoded at different rates and with different reconstruction distortions, to be delivered to the receivers. 
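As a small illustration of the demand model (a sketch with made-up parameters; `numpy` is used only for sampling), each receiver draws its request independently from its own demand distribution $\qbf_k$ at the start of a time slot:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 3                                   # illustrative library size and number of receivers

# Per-receiver demand distributions q_k; each row sums to 1 (values are made up).
q = rng.dirichlet(np.ones(N), size=K)

# One delivery slot: receiver k requests file d_k drawn independently from q_k,
# giving a demand realization vector d in [N]^K that is revealed to the sender.
d = 1 + np.array([rng.choice(N, p=q[k]) for k in range(K)])
print("demand realization d =", d)
```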
More formally, the caching scheme is composed of the following components: - [**Cache Encoder:**]{} The cache encoder at the sender computes the content to be cached at receiver $k\in[K]$, denoted by $Z_k$, using a function $f^{\mathfrak C}_{k}: \prod\limits_{n=1}^N{\mathcal W}_n^{F} \rightarrow [1: { 2^{M_kF}} )$ as $Z_{k} = f^{\mathfrak C}_{k}\Big( \{W_n^F\}_{n=1}^N\Big) $. - [**Multicast Encoder:**]{} During the delivery phase, the sender is informed of the demand realization $\mathbf{d} =(d_1, \ldots, d_K)\in\mathfrak D$. The sender uses the function $f^{\mathfrak M}: {\mathfrak D} \times \prod\limits_{n=1}^N{\mathcal W}_n^{F}\times \prod\limits_{k=1}^K [1: 2^{M_kF}) \rightarrow \mathcal Y^\star$ to compute and transmit a multicast codeword $Y_{\dbf} = f^{\mathfrak M}\Big( \dbf ,\, \{W_n^F\}_{n=1}^N,\, \{Z_{k}\}_{k=1}^K \Big) $, where we use $\star$ to denote variable length. - [**Multicast Decoders:**]{} Receiver $k\in[K]$ uses a mapping $g^{\mathfrak M}_{k} : \mathfrak D \times \mathcal Y^\star \times [1: 2^{M_kF}) \rightarrow \widehat{\mathcal W}_{d_k}^F$ to reconstruct its requested file using its cached content $Z_k$ and the received multicast codeword $Y_{\dbf}$, as $\widehat{W}_{d_{k}}^F = g^{\mathfrak M}_{k}(\dbf, Y_{\dbf} ,Z_{k})$. For a given demand $\dbf\in\mathfrak D$, the rate transmitted over the shared link, $R_{\dbf}^{(F)}$, is defined as $$\begin{aligned} R_{\dbf}^{(F)} = \frac{ \mathbb{E} [ L(Y_{\dbf})] }{F} , \label{eq: demand rate} \end{aligned}$$ where $L(Y )$ denotes the length (in bits) of the multicast codeword $Y$, and the expectation is over the source distribution. The expected distortion, over all demands, receivers and the source distribution, is defined as $D^{ (F)} = \mathbb{E} \bigg[ \frac{1}{K}\sum_{k=1}^{K} D_{d_k}( {W}_{d_k}^F,\widehat{W}_{d_k}^F)\bigg]$, which is a function of the cached content $\{Z_k\}$ and the multicast codeword $Y_{\dbf}$. For Gaussian sources we have $$\begin{aligned} D^{(F)} = \mathbb{E} \bigg[\frac{1}{K} \sum_{k=1}^{K}\sigma_{{d}_{k}}^{2} 2^{-2 \,{ \Xi} (M_{k,d_k},\, R_{k, \dbf})}\bigg], \label{averDist}\end{aligned}$$ where $M_{k,d_k}$ is the size (in bits/sample) of receiver $k$’s cache assigned to storing file $d_k$, $R_{k, \dbf}$ is the total rate (in bits/sample) delivered to receiver $k$ for demand $\dbf$, which we refer to as the [*per-receiver*]{} rate, and function $\Xi(.)$ determines the effective rate available to the receiver useful for reconstructing its requested file $d_k$, which we refer to as the [*effective rate function*]{}. As shown in [@maddah14fundamental], due to the broadcast nature of the wireless transmitter in cache-aided networks, by capitalizing on the spatial reuse of the cached information several different demands can be satisfied with a single coded multicast transmission, resulting in global caching gains. Therefore, for a general caching scheme, the overall rate received by receiver $k\in[K]$ in demand $\dbf$, i.e,. the per-receiver rate $R_{k,\dbf}$, can be different from the total rate multicasted over the shared link by the sender, $R^{(F)}_{\dbf}$. Furthermore, due to the successive refinability of the files, not all messages received and decoded by the receivers are useful for the reconstruction of requested files. Only the cached and received bits that are in sequence determine the reconstruction distortion, translating to the effective rate of $\Xi(M_{k,d_k}, R_{k, \dbf})$ bits/sample. 
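To make the distortion metric concrete, the following sketch evaluates the Gaussian expression for $D^{(F)}$ under two simplifying assumptions that we state explicitly: the per-receiver rates depend only on the receiver's own request (so the expectation factorizes across receivers), and all cached and delivered bits are in sequence, so that $\Xi(M,R)=M+R$. All parameter values are illustrative.

```python
import numpy as np

def expected_distortion(sigma2, q, M, R, xi=lambda m, r: m + r):
    """E[(1/K) * sum_k sigma_{d_k}^2 * 2^(-2*Xi(M_{k,d_k}, R_{k,d_k}))].

    sigma2 : (N,)  file variances
    q      : (K,N) per-receiver demand distributions (rows sum to 1)
    M      : (K,N) cached bits/sample of file n at receiver k
    R      : (K,N) bits/sample delivered to receiver k when it requests file n
    xi     : effective rate function; m + r assumes bits arrive in sequence
    """
    per_receiver = (q * sigma2 * 2.0 ** (-2.0 * xi(M, R))).sum(axis=1)
    return per_receiver.mean()

# Illustrative numbers: K = 2 receivers, N = 3 files.
sigma2 = np.array([1.0, 0.5, 0.25])
q = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])
M = np.full((2, 3), 0.5)       # half a bit/sample cached from every file
R = np.full((2, 3), 1.0)       # one extra bit/sample delivered on request
print(expected_distortion(sigma2, q, M, R))   # roughly 0.073 for these numbers
```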
\[def:1\] For a given demand $\dbf$, a distortion-rate-memory tuple $(D,R,M_1,\dots,M_K)$ is [*achievable*]{} if there exists a sequence of caching schemes for cache capacities $M_1,\dots,M_K$, and increasing file size $F$ such that $\limsup_{F\rightarrow \infty} { D}^{(F)} \leq D$ and $\limsup_{F\rightarrow \infty} R_{\dbf}^{(F)} \leq R$. The distortion-rate-memory region $ \mathfrak R^*$ is the closure of the set of achievable distortion-rate-memory tuples $(D,R,M_1,\dots,M_K)$, and the optimal distortion-rate-memory function is given by $$\begin{aligned} D^*(R,\{M_k\}_{k=1}^K) = \inf\Big\{ D: ( D,R,M_1,\dots,M_K) \in \mathfrak R^*\Big\}.\end{aligned}$$ [Later in Sec. \[sec:RAP-GCCOptimization\], in order to make the optimization problem in  tractable, we use the expected multicast rate rather than the per-demand multicast rate defined in , i.e., ${\bar R}^{(F)} = \mathbb{E} [R_{\dbf}^{(F)}] $, where the expectation is over the demand distribution. ]{} Local Cache-aided Unicast (LC-U) Scheme {#sec: Unicast} ======================================= In this section, we present LC-U, an achievable scheme that despite its simplicity, serves as a benchmark for caching schemes that exploit coding opportunities during multicast transmissions, which are studied in Sec. \[sec: Multicast\]. Furthermore, LC-U is useful in refining the multicast content distribution scheme of Sec. \[sec: Multicast\]. LC-U determines the content to be placed in each receiver cache and the multicast codeword that is transmitted over the shared link with rate budget $R$ (bits/sample) for each demand, independently across the receivers. The multicast encoder is equivalent to $K$ independent fixed-to-variable source encoders each depending only on the local cache of the corresponding receiver, resulting in $K$ unicast transmissions. Let ${\bf M}_k=(M_{k,1},\dots,M_{k,N})$ denote the [*cache allocation*]{} at receiver $k\in[K]$, i.e., the portion of memory designated to storing information from each file. LC-U operates as follows: 1. Caching Phase: Receiver $k\in[K]$ computes the optimal cache allocation that minimizes the expected distortion across the network, assuming that it will not receive further transmissions from the sender, i.e., $R_{k, \dbf} = 0$ for any $\dbf\in\mathfrak D$. Since receivers are not expecting to receive additional refinements during the delivery phase, each receiver caches content independently based on its own demand distribution. Receiver $k \in [K]$ solves the following convex optimization problem $$\begin{aligned} &\min & & \mathbb E\Big[D_{d_k} (W_{d_k}^F , {\widehat W}_{d_k}^F)\Big] = \sum_{n=1}^{N} q_{k,n} \sigma_{n}^{2} 2^{-2M_{k,n}} \\ & \text{s.t.} & & \sum_{n=1}^{N} M_{k,n} \leq M_{k} , \; \;\;M_{k,n} \geq 0, \;\; \forall n \in [N] \end{aligned}\label{eq: LCU-Cache}$$ resulting in a cache allocation given as $$M_{k,n}^{*}= \left( \log_{2}\sqrt{\frac{2\ln(2)\, q_{k,n}\sigma_{n}^{2}}{\lambda_{k}^{*}}}\right)^{+} \label{eq: LCU-CacheSol},$$ with $\lambda_{k}^{*}$ such that $\sum_{n=1}^NM_{k,n}^{*}=M_{k}$, and $(x)^+$ is used to denote $\max\{x,0 \}$. The solution admits the well-known reverse water-filling form [@Cover], in which receiver $k$ only stores portions of those files that satisfy $q_{k,n}\sigma_{n}^{2}\geq \frac{\lambda_{k}^{*}}{2\ln{2}}$; hence, $q_{k,n}\sigma_{n}^{2}\,2^{-2M_{k,n}^{*}} = \min\{\frac{\lambda_{k}^{*}}{2\ln{2}}, \; q_{k,n}\sigma_{n}^{2}\}$, as illustrated in Fig. \[fig: WaterFilling\]. ![Cache allocation at receiver $k$ with the LC-U scheme in a network with $N=6$ Gaussian sources.
[]{data-label="fig: WaterFilling"}](waterfilling.pdf){width="2.1in"} 2. Delivery Phase: For a given demand realization $\dbf\in \mathfrak D$, the sender computes the optimal per-receiver delivery rates $\{R_{k,\dbf}\}_{k=1}^K$ jointly across all the receivers in the network by solving the following problem $$\begin{aligned} &\text{min} & & \frac{1}{K} \sum_{k=1}^{K} D_{d_k} (W_{d_k}^F , {\widehat W}_{d_k}^F) = \frac{1}{K} \sum_{k=1}^{K}\sigma_{d_{k}}^{2}2^{-2 (M_{k,d_{k}}^{*} +R_{k,\dbf})} \\ & \text{s.t.} & & \sum_{k=1}^{K} R_{k,\dbf} \leq R, \;\;\; R_{k,\dbf} \geq 0, \;\;\forall k \in [K] \end{aligned}\label{eq: LCU-Rate}$$ which results in $$R_{k,\dbf}^{*}=\left( \log_{2}\sqrt{\frac{2\ln{2}(\sigma_{d_{k}}^{2})}{\gamma_{\dbf}^{*}}}-M_{k,d_{k}}^{*}\right)^{+},$$ with $\gamma_{\dbf}^{*}$ chosen such that $\sum_{k=1}^KR_{k,\dbf}^{*}=R$. The caching and delivery strategies described above are such that the receivers cache and receive sequential bits of successively refinable files. Hence, all bits transmitted to the receivers are useful for file reconstruction, i.e., the effective rate delivered to receiver $k\in[K]$ is $\Xi(M_{k,d_{k}}^{*},R_{k,\dbf}^*) = M_{k,d_{k}}^{*}+R_{k,\dbf}^*$. LC-U is a scalable coding scheme described by two layers, one base layer and one enhancement layer. During the caching phase receiver $k\in[K]$ stores the base layer of file $n\in[N]$ with rate $M_{k,n}$ bits/sample. In the delivery phase, the sender unicasts the enhancement layers of the requested files to the corresponding receivers with rates $\{R_{k,\dbf}\}_{k=1}^K$, using $K$ disjoint multicast encoders. In LC-U, the caching process is decentralized and it does not require any coordination from the sender, since receivers fill their caches based on their own preferences, $\{q_{k,n}\}$, and file characteristics, $\{\sigma_{n}^{2}\}$. On the other hand, the requested files are delivered in a centralized manner. Using the cache placements at all receivers, the sender jointly optimizes the per-receiver rates. Cooperative Cache-aided Coded Multicast (CC-CM) Scheme {#sec: Multicast} ====================================================== In this section, we present an achievable caching scheme, referred to as CC-CM, that determines the cache allocations $\{ {\bf M}_k\}_{k=1}^K$, and the per-receiver delivery rates $ \{R_{k,\dbf}\}_{k=1}^K$, jointly across all receivers and demand realizations $\dbf\in\mathfrak D$, based on the global network information. In addition, CC-CM exploits the broadcast nature of the wireless transmitter. This allows for more efficient use of the shared link compared with LC-U. Similar to LC-U, our goal is to minimize the expected distortion across the network for a given rate budget $R$. To this end, we solve the following problem for cache allocations $\{M_{k,n} \}$ and per-receiver delivery rates $\{R_{k,\dbf}\}$ $$\begin{aligned} &\min & & {\mathbb E} \Big[ \frac{1}{K} \sum_{k=1}^{K}\sigma_{d_{k}}^{2}2^{-2\,\Xi(M_{k,d_{k}} , R_{k,\dbf})} \Big]\\ & \text{s.t.} & & R_{\ach}\Big( \dbf, \{M_{k,n}\} , \{R_{k,\dbf}\} \Big ) \leq R,\quad\forall \dbf\in\mathfrak D\\ &&& \sum\limits_{n=1}^N M_{k,n}\leq M_k , \;\; M_{k,n},\, R_{k,\dbf} \geq 0, \qquad\; \forall (k,n,\dbf) \in [K]\times[N]\times\mathfrak D \end{aligned}\label{eq: Main Opt}$$ where $R_{\ach}\Big( \dbf, \{M_{k,n}\} , \{R_{k,\dbf}\} \Big ) $ denotes the aggregate multicast rate achieved by the CC-CM scheme for demand realization $\dbf\in\mathfrak D$.
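Before turning to how $R_{\ach}$ is evaluated, we note that the LC-U allocations of Sec. \[sec: Unicast\] reduce to a one-dimensional search for the water level. The sketch below is a non-authoritative illustration with made-up parameters: it solves the caching-phase problem by bisection on $\lambda_k$, and the delivery-phase rates follow from the same routine applied to the weights $\sigma_{d_k}^2$, subtracting the cached $M^*_{k,d_k}$ and clipping at zero.

```python
import numpy as np

def reverse_waterfill(weights, budget, iters=100):
    """Solve min sum_n w_n 2^(-2 x_n) s.t. sum_n x_n <= budget, x_n >= 0.
    Closed form: x_n = (0.5*log2(2*ln(2)*w_n / lam))^+ with lam set so the budget is met."""
    def alloc(lam):
        return np.maximum(0.0, 0.5 * np.log2(2.0 * np.log(2.0) * weights / lam))
    lo, hi = 1e-12, 2.0 * np.log(2.0) * weights.max()   # alloc(hi) = 0, alloc(lo) is huge
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > budget:
            lo = lam          # allocated too much; increase the multiplier
        else:
            hi = lam
    return alloc(0.5 * (lo + hi))

# Caching phase at receiver k (illustrative numbers): weights are q_{k,n} * sigma_n^2.
q_k    = np.array([0.4, 0.3, 0.2, 0.1])
sigma2 = np.array([1.0, 1.0, 0.5, 0.25])
M_star = reverse_waterfill(q_k * sigma2, budget=1.5)    # cache size M_k = 1.5 bits/sample
print(M_star, M_star.sum())                             # allocations sum to (about) 1.5
```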
The rate $R_{\ach}\Big( \dbf, \{M_{k,n}\} , \{R_{k,\dbf}\} \Big ) $ depends on the architecture of the cache encoder, multicast encoder and multicast decoders described in Sec. \[sec: ProblemSetting\] used by CC-CM, and can be difficult to compute in general. In order to make its computation analytically tractable, we focus on a subclass of CC-CM schemes by imposing further restrictions on the per-receiver rates $R_{k,\dbf}$, which could possibly result in a suboptimal solution. Specifically, we assume that for any $(k,\dbf) \in [K]\times\mathfrak D$ the per-receiver rate $R_{k, \dbf}$ is composed of two portions, i.e., $R_{k, \dbf}= \widetilde R_{k, d_{k}} + \widehat R_{k, \dbf}$: (i) a portion, $\widetilde R_{k, d_{k}} $, delivered via [*coded*]{} multicast transmissions, which can be evaluated in a closed-form expression and which depends only on the receiver-file index pair $(k,d_k)$, i.e., each receiver and its requested file, and (ii) a portion, $\widehat R_{k, \dbf}$, delivered via [*uncoded*]{} transmissions, which depends on the entire demand vector $\dbf$. The advantage of this approach, as further explained in Sec. \[subsec: RAPP-GCC with SVC\], is that the first multicast portion of the rate, namely $\widetilde{R}_{k, d_{k}}$, can be optimized jointly based on the global information, whereas the second unicast portion, namely $\widehat{R}_{k, \dbf}$, can utilize a solution similar to that of LC-U of Sec. \[sec: Unicast\] to exhaust the remaining portion of the total rate budget of $R$ once the aggregate multicast rate is accounted for. Note that the reconstruction distortion at receiver $k\in[K]$ for demand $\dbf$ is determined by $M_{k,d_k}, \widetilde R_{k,d_k}$ and $\widehat R_{k,\dbf}$, which can vary across different receivers. The optimization problem in , as well as its simplified form, grows exponentially in the number of receivers since $R_{\ach}\Big( \dbf,\{M_{k,n}\} , \{R_{k,\dbf}\} \Big )$ depends on the demand realization $\dbf\in \mathfrak D \equiv [N]^K$. Later in Sec. \[subsec:performance of cc-cm\], we simplify by replacing the exponential number of per-demand rate constraints with an average rate constraint. Throughout this paper, we use *aggregate coded rate* to refer to the overall rate sent over the shared link through coded multicast transmissions, which is a function of the per-receiver coded rates $\{\widetilde R_{k, d_{k}}\}$. Additionally, we use *aggregate uncoded rate* to refer to the overall rate transmitted through uncoded transmissions, which is a function of the per-receiver uncoded rates $\{ \widehat R_{k, \dbf}\}$. While the aggregate uncoded rate can be upper bounded by the sum rate $ \sum_{k=1}^K \widehat R_{k, \dbf}$, the aggregate coded rate depends on the specific scheme adopted for the coded multicast transmission and its multiplicative coding gains. In the remainder of this paper, in order to characterize the aggregate coded rate, and to implement the caching phase and the portion of the delivery phase corresponding to coded multicast transmissions, we adopt a generalization of the cache-aided coded multicast delivery scheme proposed in [@ji15order], which we refer to as RF-GCC. Then, we use this scheme and a variation of the LC-U of Sec. \[sec: Unicast\] to quantify $R_{\ach}\Big( \dbf, \{M_{k,n}\} , \{R_{k,\dbf}\} \Big )$ in problem . In Sec.
\[sec:RAP\], we first describe RF-GCC, obtained by generalizing [*Random Aggregate Popularity caching with Greedy Constrained Coloring (RAP-GCC)*]{} [@ji15order] to a setting with heterogeneous cache sizes, where receivers are interested in receiving different-length portions of the same file that map to the possibly different per-receiver reconstruction distortions resulting from problem . We then derive an upper bound on the aggregate coded rate achieved with this scheme. We start Sec. \[sec:RAP-GCCOptimization\] by describing how RF-GCC can be adopted by CC-CM to fill receiver caches and to deliver the coded portion of the multicast codeword. We then use the upper bound on the rate achieved by RF-GCC to characterize $R_{\ach}\Big( \dbf, \{M_{k,n}\} , \{R_{k,\dbf}\} \Big ) $, and to solve optimization . Coded Multicast Delivery Through RF-GCC Scheme {#sec:RAP} ============================================== This section presents RF-GCC, a generalization of RAP-GCC proposed in [@ji15order]. In Sec. \[sec:RAP-GCCOptimization\], we discuss how RF-GCC is adopted by CC-CM to implement the caching phase and the portion of the delivery phase corresponding to coded multicast transmissions. Recall that per Sec. \[sec: Multicast\], the coded multicast transmission corresponds to rate $\widetilde{R}_{k,d_k}$ delivered to receiver $k\in[K]$. In the system model considered in [@ji15order], the files are generated from sources with the same distribution, they are requested by all receivers according to the same demand distribution, and the goal is to design a caching scheme that minimizes the expected multicast rate. Therefore, in the setting of [@ji15order], the authors propose a scheme where the caching phase is designed only based on the [*aggregate demand distribution*]{}, i.e., the probability that a file is requested by at least one user. In this paper, our goal is to minimize the expected distortion and we adopt a generalization of this caching policy where the caches are filled based on not only the demand distributions but also the distortion-rate functions of the files. To this end, we use the more general term [*Random Fractional (RF)*]{} caching rather than RAP caching used in [@ji15order]. Consider a cache-aided system with a library of $N$ files with length $\tau$ bits[^1] indexed by $\{ 1,\dots,N\}$ and $K$ receivers $\{1,\dots,K\}$, where receiver $k\in[K]$ has a cache of size $\mu_k\tau$ bits. For each file $n\in[N]$ in the library, there are $K$ different fixed-size [versions]{} available one for each receiver, such that [*version*]{} $k$ of file $n$ is composed of the [*first*]{} $\length_{k,n}\factor$ bits of file $n$. For two indices $k_1$ and $k_2$, we say that version $k_1$ of file $n$, with length $\length_{k_1,n}\factor$, is a [*degraded version*]{} of version $k_2$ of file $n$, with length $\length_{k_2,n}\factor$, if $\length_{k_1,n}\leq\length_{k_2,n}$. Receivers request files from the library following the demand distributions described in Sec. \[sec: ProblemSetting\], and when receiver $k\in[K]$ requests file $n\in[N]$, the sender delivers version $k$ of file $n$ with length $\Omega_{k,n}$. Note that a version of a file is composed of a fixed number of successive bits, which corresponds to a file having a predetermined reconstruction distortion (e.g., video playback quality)[^2]. Throughout the remainder of this section we assume that the version lengths $\{\Omega_{k,n}\}$ are fixed, and later in Sec. 
\[sec:RAP-GCCOptimization\] we determine the optimal version lengths based on . \[rmk:compare to QD\] The setting considered in this section is similar to the one considered in [@yang2018coded], where receivers have predefined distortion requirements. In [@yang2018coded], the objective is to design an efficient caching scheme that minimizes the worst-case delivery rate over the shared-link for a given set of receiver cache capacities and distortion requirements. In our setting, the version lengths $\{\Omega_{k,n}\tau\}$ can be interpreted as receiver distortion requirements, which further generalizes the problem in [@yang2018coded] to each receiver having different distortion requirements for each file in the library. Differently from [@yang2018coded], as defined in , our ultimate goal is to minimize the expected distortion across the network for a given set of cache capacities and a given shared-link rate budget. As a means to solving the general problem in , our proposed solution in this subsection extends that of [@yang2018coded] to a setting with heterogeneity across files in addition to across receivers. We solve the problem defined in [@ji15order Sec. II], by finding an upper bound on the rate-memory trade-off in cache-aided networks, for the setting described above, where degraded versions of files with different lengths are delivered to the receivers. In the following, we (i) describe a decentralized caching scheme in Sec. \[subsec:description\], and (ii) characterize its achievable rate for a given demand in Theorem \[thm:demand\], and on average over all demands in Theorem \[thm:general\]. Scheme Description {#subsec:description} ------------------ As in conventional caching schemes, a fractional cache encoder divides each file into packets and determines the subset of packets from each file that are stored in each receiver cache. For each demand realization in the delivery phase, the multicast encoder generates a multicast codeword by computing an [*index code*]{} based on a coloring of the index coding conflict graph [@birk1998informed; @IndexCoding]. The RF-GCC scheme operates as follows: 1. Caching Phase: All the versions of the library files are partitioned into equal-size packets of lengths $ T$ bits. The cache encoder is characterized by $K$ vectors, $\pbf_k=(p_{k,1},\ldots, p_{k,N})$, $k = 1,\dots,K$, referred to as the [*caching distributions*]{}, such that $ p_{k,n}\in[0,1/\mu_k]$ and $\sum_{n=1}^N p_{k,n} = 1$, for any $k\in[K]$. Element $p_{k,n}$ represents the portion of receiver $k$’s cache capacity that is assigned to storing packets from version $k$ of file $n\in[N]$. Receiver $k\in[K]$ selects and stores a subset of $ p_{k,n}\cache_k \factor /T$ distinct packets from version $k$ of file $n$, uniformly at random. The caching distributions, $\{\pbf_1,\dots,\pbf_K\}$, are optimally designed based on an objective function, for example to minimize the rate of the corresponding index coding delivery scheme as in [@ji15order], or to minimize the expected network distortion as in Sec. \[sec:RAP-GCCOptimization\] of this paper. In the following, we denote by $\Cbf =\{\Cbf_1,\dots,\Cbf_K \}$ the packet-level cache configuration, where $\Cbf_k$ denotes the set of packets cached at receiver $k\in[K]$, which correspond to the packets from version $k$ of all library files. 2. 
Delivery Phase: For a given demand realization $\dbf$, we denote the packet-level demand realization by $\Qbf = \{\Qbf_1,\dots,\Qbf_K \}$, where $\Qbf_k$ denotes the set of packets from the file version requested by receiver $k\in[K]$, i.e., version $k$ of file $d_k$ with length $\Omega_{k,d_k}\tau$, that are not cached at it. In order to determine the set of packets that need to be delivered, the sender constructs an index coding [*conflict graph*]{}, which is the complement of the side information graph as described in [@birk1998informed; @IndexCoding]. For a given packet-level cache configuration $\Cbf$ and demand realization $\Qbf$, the conflict graph, denoted by $\mathcal H_{\Cbf,\Qbf}$, is constructed as follows: 1. For each requested packet in $\Qbf$, there is a vertex $v$ in the graph uniquely identified by the label $\{\alpha(v),\beta(v),\eta(v)\}$, where $\alpha(v)$ indicates the packet identity associated to $v$, $\beta(v)$ is the receiver requesting it and $\eta(v)$ is the set of all receivers that have cached the packet. 2. For any two vertices $v_1$, $v_2$, we say that vertex $v_1$ interferes with vertex $v_2$ if: $1)$ the packet associated with $v_1$, $\alpha(v_1)$, is not in the cache of the receiver associated with $v_2$, $\beta(v_2)$; and if $2)$ $\alpha(v_1)$ and $\alpha(v_2)$ do not represent the same packet. There exists an undirected edge between $v_1$ and $v_2$ if $v_1$ interferes with $v_2$ or if $v_2$ interferes with $v_1$. Given a valid vertex coloring[^3] of the conflict graph $\mathcal H_{\Cbf, \Qbf}$, the multicast encoder generates the multicast codeword by concatenating the XOR of the packets with the same color. A chromatic number [*index code*]{} for this graph results from generating the multicast codeword based on the valid coloring that results in the shortest codeword. Computing the index code based on graph coloring is NP-complete and quantifying its performance can be quite involved. In order to quantify the achievable rate, as in [@ji15order], we adopt a greedy approximation referred to as Greedy Constrained Coloring (GCC), which has polynomial-time complexity in the number of receivers and packets. Due to space limitations, we refer the reader to [@ji15order Algorithms 1 and 2] for the pseudo code of GCC. This coloring results in a possibly larger multicast codeword compared to the chromatic number index code, but as shown in [@ji15order], for very large block lengths ($\tau\rightarrow\infty$), its achievable rate: (i) can be evaluated in a closed-form expression, and (ii) provides a tight upper bound on the rate achieved with the chromatic number index code (i.e., it is asymptotically order-optimal). Let $R^{C}\Big(\dbf, \{\mu_k \}, \{\pbf_{k}\}, \{{\length}_{k,n}\} \Big)$ denote the asymptotic coded multicast rate achieved by RF-GCC, as $\factor\rightarrow\infty$, for a given demand $\dbf$, caching distributions $\{\pbf_k\}$ and file version lengths $\{\length_{k,n}\factor\}$. As in the caching literature, $R^{C}\Big(\dbf, \{\mu_k \}, \{\pbf_{k}\}, \{{\length}_{k,n}\} \Big)$ is defined as the limiting value ($\factor\rightarrow\infty$) of the length (in bits) of the multicast codeword normalized by $\factor$. Next, we provide an upper bound on the achievable rate $R^{C}\Big(\dbf, \{\mu_k \}, \{\pbf_{k}\}, \{{\length}_{k,n}\} \Big)$, which is used in Sec. \[sec:RAP-GCCOptimization\] to solve optimization problem , and to derive the optimal values $\{M_{k,n}^*\}$, $\{\widetilde R_{k,n}^*\}$ and $\{\widehat R_{k, \dbf}^*\}$.
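As a toy illustration of the conflict-graph construction and coloring just described (a sketch only: it uses a plain largest-degree-first greedy coloring rather than the GCC routine of [@ji15order], and the packet labels and cache contents are made up):

```python
from itertools import combinations

def conflict_graph(demanded, cached):
    """Vertices are (receiver, packet) pairs for packets a receiver requests but has not cached.
    Two vertices are adjacent if their packets differ and at least one of the two packets
    is missing from the cache of the other vertex's receiver (the rule in items 1-2 above)."""
    V = [(k, p) for k, pkts in demanded.items() for p in sorted(pkts)]
    E = set()
    for (k1, p1), (k2, p2) in combinations(V, 2):
        if p1 != p2 and (p1 not in cached[k2] or p2 not in cached[k1]):
            E.add(((k1, p1), (k2, p2)))
    return V, E

def greedy_coloring(V, E):
    """Plain greedy coloring; packets sharing a color are XORed into one coded transmission."""
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w); adj[w].add(u)
    color = {}
    for v in sorted(V, key=lambda v: -len(adj[v])):          # highest degree first
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(V)) if c not in used)
    return color

# Hypothetical packet-level configuration: receiver 1 caches {A2, B1}, receiver 2 caches {A1, B2};
# receiver 1 still needs packet A1 and receiver 2 still needs packet B1.
cached   = {1: {"A2", "B1"}, 2: {"A1", "B2"}}
demanded = {1: {"A1"}, 2: {"B1"}}
V, E = conflict_graph(demanded, cached)
colors = greedy_coloring(V, E)
print(colors)                                  # A1 and B1 share a color -> one XOR serves both
print("coded transmissions:", len(set(colors.values())))
```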
Achievable Rate {#sec:Achievable Rate} --------------- The following theorems provide closed-form upper bounds on the delivery rate. Specifically, Theorem \[thm:demand\] characterizes the achievable rate for demand $\dbf\in\mathfrak D$, i.e., $R^C\Big(\dbf, \{\mu_k \}, \{\pbf_{k}\}, $ $\{\length_{k,n}\} \Big)$, while Theorem \[thm:general\] upper bounds the expected rate over all demand realizations, denoted by ${\bar R}^C\Big(\{\qbf_k\} , \{\mu_k \},$ $\{\pbf_{k}\},\{\length_{k,n}\} \Big)$. \[thm:demand\] In a network with $K$ receivers and $N$ files, for a given demand $\dbf\in\mathfrak D$, a given set of cache capacities $\{\mu_k\}_{k=1}^K$ and caching distributions $\{\pbf_{k} \}_{k=1}^K$, the asymptotic coded multicast rate required to deliver the requested file versions with length $\{\length_{k,d_k} \factor\}_{k=1}^K$, is upper bounded as $$\begin{aligned} R^C \Big(\dbf, \{\mu_k \} ,\{\pbf_{k}\},\{{\length}_{k,n}\} \Big) \leq \min \bigg\{ \Psi_{\dbf}^{(1)}\Big( \{\mu_k \} ,\{\pbf_{k}\},\{{\length}_{k,n}\} \Big), \Psi_{\dbf}^{(2)}\Big( \{\mu_k \} ,\{\pbf_{k}\},\{{\length}_{k,n}\} \Big) \bigg\},\notag \end{aligned}$$ where $$\begin{aligned} & \Psi_{\dbf}^{(1)}\Big( \{\mu_k \} , \{\pbf_{k}\},\{{\length}_{k,n}\} \Big) = \sum_{i=1}^{K} \sum_{\ell=1}^{ K-i+1} \sum_{\mathcal K_{\ell} \subseteq \{\order_i,\dots,\order_{K}\} } \; \Big( \length_{\order_i,d_{\order_i}} - \length_{\order_{i-1},d_{\order_{i-1}} } \Big) \, \max\limits_{k\in\mathcal K_\ell} { \lambda_i }(\mathcal K_\ell,k,d_{k}) , \notag\\ & \Psi_{\dbf}^{(2)} \Big( \{\mu_k \} ,\{\pbf_{k}\},\{{\length}_{k,n}\}\Big)= \sum_{n =1}^N \mathbbm{1} \{n \ni \dbf\} \Big( \max\limits_{k: d_k = n} \, {\length}_{k,n} - \min\limits_{k: d_k = n} p_{k,n}\mu_k\Big) , \label{eq:psi 2} \\ & { \lambda_i} (\mathcal K_\ell,k,n) = (1-p^c_{k,n}) \prod\limits_{u\in \mathcal{K}_{\ell}\backslash \{k\}}p^c_{u,n} \prod\limits_{u\in { \{\order_i,\dots,\order_{K}\}}\setminus \mathcal{K}_{\ell}}{(1-p^c_{u,n})},\label{eq:lambda thm}\\ & p^c_{k,n} =p_{k,n} \frac{ \mu_k}{ \length_{k,n}} , \label{eq: pc is prob} \end{aligned}$$ where $\mathcal{K}_{\ell}$ denotes a given set of $\ell$ receivers, and for a given demand $\dbf$, $\order_1,\dots,\order_K$ denotes an ordered permutation of receiver indices such that $\Omega_{\order_1,d_{\order_1}}\leq\dots\leq\Omega_{\order_K, d_{\order_K}}$, where ${\length}_{\order_0,\order_{d_0}}=0$ and $\{\order_1,\order_0 \} = \emptyset$. In , $p^c_{k,n}$ denotes the probability that a packet from version $k\in[N]$ of file $n\in[N]$ is cached at receiver $k$. We use $i\ni \xbf$ to indicate that $i$ is one of the elements of $\xbf$. The proof is given in Appendix \[App:demand\]. By averaging over all possible demand realizations $\dbf\in\mathfrak D$ we obtain the following result. 
\[thm:general\] In a network with $K$ receivers and $N$ files, for a given set of demand distributions $\{\qbf_k\}_{k=1}^{K}$, cache capacities $\{\mu_k \}_{k=1}^K$, and caching distributions $\{\pbf_{k} \}_{k=1}^K$, the asymptotic expected coded multicast rate required to deliver the requested file versions with length $\{\length_{k,n} \factor \}_{(k,n)\in[K]\times[N]}$, is upper bounded as $$\begin{aligned} {\bar R}^C \Big(\{\qbf_{k}\}, \{\mu_k \}, \{\pbf_{k}\},\{{\length}_{k,n}\} \Big) \leq \min \bigg\{ {\bar \Psi}^{(1)}\Big( \{\qbf_{k}\}, \{\mu_k \} , \{\pbf_{k}\}, \{ \length_{k,n} \} \Big), {\bar \Psi}^{(2)}\Big( \{\qbf_{k}\},\{\mu_k \} , \{\pbf_{k}\}, \{ \length_{k,n} \} \Big) \bigg\}, \notag\end{aligned}$$ where $$\begin{aligned} & {\bar \Psi}^{(1)}\Big( \{\qbf_{k}\}, \{\mu_k \} , \{\pbf_{k}\}, \{ \length_{k,n} \} \Big) \notag\\ &\qquad\qquad= \sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \sum_{n =1}^{N}\sum_{\mathcal K_{\ell} \subseteq \{\order_{i}^*,\dots,\order_K^* \}} \sum_{k \in {\mathcal K}_\ell } \Big(\length_{\order_i^*}^* - \length_{\order_{i-1}^*}^* \Big) \lambda_i(\mathcal K_\ell,k, n) \,\Gamma_i(\mathcal K_\ell,k,n), \label{eq:psi 1 exp} \\ & {\bar \Psi}^{(2)} \Big( \{\qbf_{k}\},\{\mu_k \} , \{\pbf_{k}\}, \{ \length_{k,n} \} \Big) = \sum_{n=1}^N \Big(1-\prod_{k=1}^K (1-q_{k,n})\Big)\Big(\max\limits_{k\in[K]} \length_{k,n} - \min\limits_{k\in[K]} p_{k,n}\mu_k \Big) , \label{eq:psi 2 exp} \\ & \Gamma_i({\mathcal K_\ell,k,n} ) = \mathbb P\Big( (k,n)= \argmax\limits_{ ( s,t): s\in\mathcal K_\ell, \,t = f_s }\,\lambda_i(\mathcal K_\ell,s,t) \Big),\label{eq:gammaa}\\ & \length^*_{k} = \max_{n\in[N]} \length_{k,n} , \end{aligned}$$ and with $\lambda_i (\mathcal K_\ell,k,n) $ defined in , and where $\order_1^*,\dots,\order_K^*$ denotes an ordered permutation of receiver indices $\{1,\dots,K \}$ such that $\length_{\order_1^*}^*\leq\dots\leq\Omega_{\order_K^*}^*$. In , $\fbf$ denotes the $\ell$-dimensional sub-vector of demand $\dbf$ corresponding to receivers in set $\mathcal K_\ell$, and $\Gamma_i(\mathcal K_\ell,k,n) $ denotes the probability that file $n \ni \fbf$ requested by receiver $k\in\mathcal K_\ell$ maximizes the quantity $\lambda_i(\mathcal K_\ell,s,t) $. The proof is given in Appendix \[App:general\]. Special Cases for RF-GCC {#subsec: comparison to QD} ------------------------ In this section, we focus on two specialized settings with symmetry across the library files or across receivers. In Sec. \[subsec:Symmetrical file\], we describe how, under file-symmetry, the proposed RF-GCC scheme is applicable to the problem studied in [@yang2018coded], and Sec. \[subsec:Symmetrical rec\] considers symmetry across receivers, which is used in Sec. \[subsec:performance of cc-cm\] to solve the optimization problem in . ### **Symmetry Across Files** {#subsec:Symmetrical file} As explained in Remark \[rmk:compare to QD\], the RF-GCC proposed in Sec. \[subsec:description\] can be adopted for the problem studied in [@yang2018coded]. The network in [@yang2018coded] is composed of $N$ independent files and $K$ receivers with cache sizes $\{\mu_1,\dots,\mu_K\}$. Each receiver has a preset distortion requirement, $\{D_1,\dots,D_K\}$, i.e., any of the library files requested by receiver $k$ needs to be delivered with distortion less than $D_k$, and the objective is to characterize the rate-memory trade-off for the worst-case demand. Then, for a given distortion-rate function, the distortion requirements of receivers can be mapped to a given set of minimum compression rates. The minimum compression rates are equivalent to the normalized (by constant $\tau$) version lengths $\{\Omega_{k,n}\}$ defined in Sec. \[sec:RAP\], when $\Omega_{k,1}=\dots=\Omega_{k,N}$ for any $k\in[K]$. Therefore, the setting considered in [@yang2018coded] is a specialization of our network model in Sec.
\[subsec:description\] to the case where each receiver is interested in getting equal length versions of the files in the library. Note that we characterize the rate-memory trade-off by deriving an upper bound on the rate of RF-GCC for any given demand in Theorem \[thm:demand\], from which we then characterize the average rate-memory trade-off in Theorem \[thm:general\], while this trade-off is only provided for the worst-case scenario in [@yang2018coded Sec IV]. Specializing Theorem \[thm:demand\] to equal version lengths leads to Corollary \[thm:demand QD\] below. The worst-case rate-memory trade-off is given by $\max\limits_{\dbf\in\mathfrak D}R^C \Big(\dbf, \{\mu_k \} ,\{p_{k}\},\{{\length}_{k}\} \Big) $, and the average trade-off can be derived as in Appendix \[App:general\] by taking expectation of the rate over all demands. \[thm:demand QD\] In a network with $K$ receivers and $N$ files, for a given demand realization $\dbf\in\mathfrak D$, and a given set of cache capacities $\{\mu_k\}$ and caching distributions with parameters $\{p_{k} \}_{k=1}^K$, the asymptotic coded multicast rate required to deliver the requested file versions with length $\{\length_{k} \factor\}_{k=1}^K$, is upper bounded as $$\begin{aligned} R^C \Big(\dbf, \{\mu_k \} ,\{p_{k}\},\{{\length}_{k}\} \Big) \leq \min \bigg\{ \Psi_{\dbf}^{(1)}\Big( \{\mu_k \} ,\{p_{k}\},\{{\length}_{k} \} \Big), \Psi_{\dbf}^{(2)}\Big( \{\mu_k \} ,\{p_{k}\},\{{\length}_{k}\}\Big) \bigg\}, \end{aligned}$$ where $$\begin{aligned} & \Psi_{\dbf}^{(1)}\Big( \{\mu_k \} , \{p_{k}\},\{{\length}_{k}\} \Big) = \sum_{i=1}^{K} \Big( \length_{\order_i} - \length_{\order_{i-1}} \Big) \sum_{\ell=1}^{K-i+1} \sum_{\mathcal K_{\ell} \subseteq \{\order_i,\dots,\order_{K}\}} \max\limits_{k\in\mathcal K_\ell} \lambda_i (\mathcal K_\ell,k) , \label{eq:psi 1 QD} \\ & \Psi_{\dbf}^{(2)} \Big( \{\mu_k \} ,\{p_{k}\},\{{\length}_{k}\} \Big)= \sum_{n =1}^N \mathbbm{1} \{n \ni \dbf\} \Big( \max\limits_{k: d_k = n} \, {\length}_{k} - \min\limits_{k: d_k = n} \, {\mu}_{k}p_k \Big) , \label{eq:psi 2 QD} \\ & \lambda_i (\mathcal K_\ell,k) = (1-p^c_{k})\prod\limits_{u\in \mathcal{K}_{\ell}\backslash \{k\}} p^c_{u} \prod\limits_{u\in\{\order_i,\dots,\order_{K}\}\setminus \mathcal{K}_{\ell}}{(1-p^c_{u})},\label{eq:lambda thm QD}\\ & p^c_{k} =p_{k} \frac{ \mu_k}{ \length_{k}}\label{eq: pc is prob QD} \end{aligned}$$ where $\mathcal{K}_{\ell}$ denotes a given set of $\ell$ receivers, and $\order_1,\dots,\order_K$ denotes an ordered permutation of receiver indices such that $\Omega_{\order_1}\leq\dots\leq\Omega_{\order_K}$. In , $p^c_{k}$ denotes the probability that a packet from version $k\in[N]$ of any file $n\in[N]$ is cached at receiver $k$. For the setting considered in [@yang2018coded] with $\length_1\leq\dots,\leq\length_K$, we have observed that when $N\geq K$, the worst-case delivery rate computed based on Corollary \[thm:demand QD\] is equal to the rate provided in [@yang2018coded Theorem 5]. Our numerical results show slight improvement in delivery rate compared to the rate in [@yang2018coded Theorem 5] for the less common setting of $N<K$. ### **Symmetry Across Receivers**  \[subsec:Symmetrical rec\] Consider a network with symmetric receivers where all receivers have equal-size caches and request files according to the same demand distribution, i.e. $M_{k} = M$, $q_{k,n} = q_{n}$, for all $(k,n) \in [k]\times[N]$. 
In this network, it is immediate to verify that the optimal caching distributions $\{\pbf_k \}$, and the corresponding cache allocations $\{ M_{k,n}\}$ are uniform across all the receivers, i.e., $p_{k,n} = p_n$ and $M_{k,n} = M_n$, for all $(k,n) \in [k]\times[N]$. Furthermore, all receivers have the same storing range for file $n\in[N]$, i.e., $\length_{k,n} = \length_{n}$. The following theorem characterizes the asymptotic expected coded multicast rate achieved with the RF-GCC scheme in this symmetric setting, and provides a tighter upper bound on the expected multicast rate compared to the one resulting from specializing Theorem \[thm:general\] to a setting with symmetric receivers. Theorem \[thm:Sym user\] generalizes the results in [@ji15order], and characterizes the expected coded multicast rate achieved in a network composed of symmetric receivers and non-symmetric files with unequal popularities $\qbf$ and lengths $\{\length_n \tau \}$. \[thm:Sym user\] In a network with $K$ symmetric receivers and $N$ files, demand distribution $\qbf$, cache capacity $\mu$ and caching distribution $\pbf$, the asymptotic expected coded multicast rate required to deliver the requested file versions with length $\{\length_{n} \factor \}_{n=1}^{ N}$, is upper bounded as $$\label{eq:demanrate special case} {\bar R}^C \Big(\qbf,\mu,\pbf,\{{\length}_{n}\} \Big) \leq \min \bigg\{ \bar\Psi^{(1)}\Big( \qbf,\mu,\pbf,\{{\length}_{n}\} \Big), \bar\Psi^{(2)}\Big(\qbf,\mu,\pbf,\{{\length}_{n}\} \Big) \bigg\},$$ where $$\begin{aligned} & {\bar \Psi}^{(1)}\Big(\qbf,\mu,\pbf,\{{\length}_{n}\} \Big) = \sum_{i=1}^{N} (\length_{\zeta_i} - \length_{\zeta_{i-1}} ) \sum_{\ell=1}^{{\widetilde K}_i} \binom{{\widetilde K}_i }{\ell}\sum_{ n\in\{\zeta_i,\dots,\zeta_N \}} \Gamma_i({\widetilde K}_i,\ell,n) \, \lambda({\widetilde K}_i,\ell,n) , \label{eq:psi 1 special case} \\ & {\bar \Psi}^{(2)}\Big(\qbf,\mu,\pbf,\{{\length}_{n}\} \Big) = \sum_{n=1}^N \Big(1- (1-q_{n})^K\Big)\Big( {\length}_{n} - p_{n}\mu\Big) , \label{eq:psi 2 special case} \\ & \lambda(K,\ell,n)= ( p_n^c )^{\ell-1} ( 1-p_n^c )^{K-\ell+1} , \label{eq:lambda special case} \\ & \Gamma_i({\ell,n} ) = \mathbb P\Big( n= \argmax\limits_{t\in\mathcal F_\ell} \;\; ( p_t^c )^{\ell-1} ( 1-p_t^c )^{K-\ell+1} \Big),\label{eq:gammaa special case}\\ & p^c_{n} =p_{n} \frac{ \mu}{ \length_{n}} , \label{eq: pc is prob special case} \end{aligned}$$ where $\zeta_1,\dots,\zeta_N$ denotes an ordered permutation of file indices $\{1,\dots,N \}$ such that $\length_{\zeta_1}\leq\dots\leq\Omega_{\zeta_N}$, and ${\widetilde K}_i = K \sum_{j=i}^N q_{\zeta_j}$ denotes the expected number of receivers requesting a file with version length larger than $\Omega_i \tau$ bits. In , $\mathcal F_\ell$ denotes a random set of $\ell$ files chosen from $\{\zeta_i,\dots, \zeta_N\}$ (with replacement) in an i.i.d manner according to $\qbf$, and $ \Gamma_i(K,{\ell,n} ) $ denotes the probability that file $n \in \mathcal F_\ell$ requested by a set of $\ell$ receives maximizes the quantity $\lambda(K,\ell,n)$. The proof is given in Appendix \[app:Sym user\]. The CC-CM Scheme Implemented with RF-GCC {#sec:RAP-GCCOptimization} ======================================== In this section, we describe how RF-GCC of Sec. \[sec:RAP\] can be adopted by the CC-CM scheme of Sec. \[sec: Multicast\] to fill the receiver caches and to deliver the coded multicast portion of the transmissions in the delivery phase. 
The RF-GCC scheme is designed to be applicable to the scalable coding-based content delivery setting considered in this paper. In fact, in line with Sec. \[sec:RAP\], a version of a file used by RF-GCC is the combination of its base layer and a given number of its successive enhancement layers. We use the rate upper bounds achieved with RF-GCC, provided in Sec. \[sec:Achievable Rate\], to solve the optimization problem in and to characterize the rate-distortion-memory trade-off of the CC-CM scheme. Adopting RF-GCC for CC-CM: Scheme Description {#subsec: RAPP-GCC with SVC} --------------------------------------------- The CC-CM scheme adopts the RF-GCC for the scalable delivery of files as follows: - For a given set of cache allocations $\{M_{k,n} \}$ and per-receiver coded multicast rates $\{\widetilde R_{k,n} \} $, let $\Omega_{k,n} = M_{k,n} + \widetilde R_{k,n}$, $k=1,\dots,K$ and $n=1,\dots,N$, which we refer to as the [*storing range*]{} of receiver $k$ for file $n$. The storing range $\Omega_{k,n}$ is the rate with which file $n\in[N]$ is guaranteed to be delivered to receiver $k\in[K]$, upon request, through coded transmissions for any demand $\dbf$. The number of source-samples $F$ and the storing range $\Omega_{k,n}$ (bits/sample) play the roles of parameter $\tau$ and $\Omega_{k,n}$ described in Sec. \[sec:RAP\], respectively. - All versions of the library files are partitioned into equal-length packets of $T$ bits. - During the caching phase, receiver $k\in[K]$ selects $M_{k,n} F/T $ distinct packets uniformly at random from the $ \Omega_{k,n} F/T$ packets of version $k$ of file $n\in [N]$, where $\Omega_{k,n}$ determines the range of packets of file $n$ from which receiver $k$ is allowed to cache, hence the name storing range. Then, a packet from version $k$ of file $n$ is cached at receiver $k$ with probability $$\begin{aligned} p^c_{k,n} = \frac{M_{k,n}}{M_{k,n}+ \widetilde R_{k,n}} ,\label{eq:P cache} \end{aligned}$$ which is in line with for $p_{k,n}\mu_k=M_{k,n}$ and $\Omega_{k,n} = M_{k,n}+\widetilde R_{k,n}$. The optimal values of $\{M_{k,n}\}$ and $\{\widetilde R_{k,n}\}$ are derived in terms of the rate budget $R$, cache sizes $\{ M_k\}$, and the demand distributions $\{\qbf_k\}$, by solving , which we explain in Sec. \[sec:RAP-GCCOptimization\]. - In the delivery phase, for a given demand $\dbf\in\mathfrak D$, the sender delivers the remaining $\widetilde R_{k,d_k}F/T$ missing (i.e., not cached) packets from the version requested by receiver $k\in [K]$, via coded transmissions using the GCC scheme described in Sec. \[sec:RAP\]. Finally, the sender utilizes the remaining available rate from the total rate budget $R$ to transmit an additional layer, with rate $\widehat R_{k,{\bf d}}$, of file $d_k$ requested by receiver $k$ via uncoded transmissions. For a given demand $\dbf$, the per-receiver uncoded rates $\{\widehat R_{k,{\bf d}} \}$ can be determined based on a reverse water-filling approach similar to LC-U described in Sec. \[sec: Unicast\]. Since Gaussian sources are successively refinable, receiver $k$ is able to successfully recover file $d_k$ with rate $ \Omega_{k,d_k}+\widehat R_{k,{\bf d}}$. Based on the results in Sec.
\[sec:RAP\], as $F\rightarrow\infty$, for any $\dbf\in\mathfrak D$, the aggregate multicast rate achieved by CC-CM is upper bounded by $R^C \Big( \dbf, \{\mu_k \}, \{\pbf_{k}\},\{ \Omega_{k,n} \} \Big) +\sum\limits_{k=1}^K\widehat R_{k, \dbf}$, where $R^C\Big( \dbf,$ $ \{\mu_k \}, \{\pbf_{k}\}, \{ \length_{k,n} \} \Big)$ is the aggregate coded rate achieved by RF-GCC given in Theorem \[thm:demand\]. In Sec. \[subsec:performance of cc-cm\], we use this upper bound to replace the first constraint of optimization , as $$\begin{aligned} R^C \Big( \dbf, \{\mu_k \}, \{\pbf_{k}\},\{ \Omega_{k,n} \} \Big) +\sum\limits_{k=1}^K\widehat R_{k, \dbf} \leq R . \label{eq:constraint} \end{aligned}$$ Discussion ---------- In this section, we briefly discuss some of the choices we made when designing the caching and delivery phases of CC-CM that adopts RF-GCC. As explained in Sec. \[sec: Multicast\], we partition the demand-dependent per-receiver rates $\{R_{k,\dbf}\}$ into two portions: a portion delivered through coded multicast that depends only on individual demands, $\{\widetilde R_{k,d_k}\}$, and another portion delivered through uncoded transmissions that depends on the entire demand, $\{\widehat R_{k,\dbf}\}$, which allows us to analytically evaluate the aggregate coded rate delivered by CC-CM. Introducing demand-independent per-receiver rates $\{\widetilde R_{k,d_k}\}$ allows us to exploit coding opportunities during multicast transmissions while supporting a minimum reconstruction quality for each receiver request. This is achieved via defining the storing range. When adopting the RF caching strategy for CC-CM, each receiver selects and caches various packets of a file version uniformly at random among a set of packets dictated by its storing range defined in \[subsec: RAPP-GCC with SVC\]. This random population of the caches is a simple strategy to increase the distribution of distinct packets in the caches across the network, which is key for increasing the coding opportunities in the delivery phase compared to traditional caching schemes that are based on local file popularity such as the Least Frequently Used (LFU) strategy[^4][@ji15order]. Recall that in scalable encoding, an enhancement layer can not be used to improve the video quality without the base layer and all preceding enhancement layers. Hence, packets from a layer of a given file version can be potentially useless if all packets corresponding to its preceding layers are not received in their entirety. Using a caching strategy where receivers fill their caches starting from the lowest layer would limit the coding opportunities during the delivery phase, and result in a lower number of delivered enhancement layers. However, with random caching only a subset of packets from different layers are available at a receiver. Therefore, due to scalable encoding all packets missing from these layers and preceding layers need to be delivered during the delivery phase in order to prevent packets that are cached from being futile. To this end, we determine the minimum number of layers that we guarantee to fully deliver to each receiver based on the network setting, which maps to the storing range, i.e., the lowest compression rate with which a file version can be delivered to that receiver, and utilize the remaining rate budget to deliver additional layers through uncoded transmissions by solving an optimization similar to LC-U. 
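The storing-range mechanism above can be pictured with a short sketch (all values are illustrative and not taken from the paper): packets of a version are cached uniformly at random within the storing range, so each packet is cached with probability $p^c_{k,n}=M_{k,n}/(M_{k,n}+\widetilde R_{k,n})$, and the packets that remain missing are exactly the $\widetilde R_{k,n}$ bits/sample that the coded multicast phase must deliver.

```python
import numpy as np

rng = np.random.default_rng(1)

F, T = 1200, 10                 # source samples and packet size in bits (illustrative)
M_kn, Rt_kn = 0.4, 0.6          # bits/sample: cached portion and coded-delivery portion
Omega_kn = M_kn + Rt_kn         # storing range of receiver k for file n (bits/sample)

n_range  = int(Omega_kn * F / T)    # packets of this version the receiver may cache from
n_cached = int(M_kn * F / T)        # packets it actually caches
cached   = rng.choice(n_range, size=n_cached, replace=False)   # uniform, without replacement
missing  = np.setdiff1d(np.arange(n_range), cached)

p_c = M_kn / Omega_kn
print("p^c =", p_c, " empirical fraction cached =", n_cached / n_range)
print("rate left for coded delivery =", len(missing) * T / F, "bits/sample")  # equals Rt_kn
```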
Rate-Distortion-Memory Trade-off with CC-CM {#subsec:performance of cc-cm} ------------------------------------------- In this section, our objective is to solve the optimization problem in . To this end, we adopt RF-GCC for the CC-CM scheme as in Sec. \[subsec: RAPP-GCC with SVC\], which is equivalent to replacing the first constraint in with . Then, the optimal cache allocation $\{M_{k,n}^*\}$, per-receiver coded rates $\{\widetilde R_{k,n}^*\}$, and per-receiver uncoded rates $\{\widehat R_{k, \dbf}^*\}$ are derived from \[eq: Het Opt\] $$\begin{aligned} {4} & \text{min } && {\mathbb E} \bigg[\frac{1}{K} \sum_{k=1}^{K}\sigma_{d_{k}}^{2}2^{-2(M_{k, d_{k}}+\widetilde R_{k, d_{k}}+ \widehat R_{k, \dbf})}\bigg]\label{eq: ObjectiveGeneral}\\ & \text{s.t.} && { R}^C \Big( \dbf, \{M_k \}, \{\pbf_{k}\},\{\length_{k,n} \} \Big) +\sum\limits_{k=1}^K\widehat R_{k, \dbf} \leq R, \hspace{0.5cm} \forall \dbf \in \mathfrak D \label{eq: Rate d} \\ &&& \length_{k,n} = M_{k,n} + \widetilde R_{k, n} , \;\; M_{k,n} = p_{k,n} M_{k}\hspace{1cm} \forall (k,n) \in [K] \times [N] \\ &&& \sum_{n=1}^{N}M_{k,n} \leq M_{k}, \hspace{1.7cm} \forall k \in [K]\label{eq: CacheGeneral}\\ &&& M_{k,n}, \widetilde R_{k,n}, \widehat R_{k, \dbf}\geq 0, \hspace{1cm} \forall (k,n, \dbf) \in [K] \times [N] \times \mathfrak D \label{eq: VariableGeneral} \end{aligned}$$ The optimization problem in is highly non-convex and has an exponential number of constraints due to , which depends on the cardinality of $\mathfrak D$. We simplify the solution by relaxing and allowing the rate constraint to be satisfied on average over all demands, and we replace with $$\label{eq:constraint relaxed} { \bar R}^C \Big(\{\qbf_{k}\}, \{ M_k\}, \{\pbf_{k}\},\{ \length_{k,n} \} \Big) + {\mathbb E}\Big [\sum\limits_{k=1}^{K} \widehat R_{k, \dbf}\Big]\leq R,$$ where ${ \bar R}^C \Big(\{\qbf_{k}\},\{ M_k\}, \{\pbf_{k}\},\{ \length_{k,n} \} \Big)$ is given in Theorem \[thm:general\]. In the following, we analyze the solution to the relaxed version of for settings with symmetry across receivers or files. ### **Symmetry Across Receivers**  \[sec:Symmetrical cc-cm\] As described in Sec. \[subsec:Symmetrical rec\], for symmetric receivers with equal-size caches and the same demand distribution, all receivers have the same storing range for file $n\in[N]$, i.e., $\length_{k,n} = \length_{n}$, or equivalently, they have the same per-receiver coded rate $\widetilde R_{k,n} =\widetilde R_{n}$. The asymptotic expected coded multicast rate achieved with the RF-GCC scheme in this setting is given in Theorem \[thm:Sym user\]. The performance of CC-CM depends on both the distortion-rate function of the sources according to which the files are generated and the file popularities. In order to see this dependency, consider the following two cases. Consider a setting where files are generated in an i.i.d. fashion according to the same source distribution, and hence, they have the same distortion-rate function. In this case, CC-CM prioritizes the caching of more popular files. In order to simplify the analysis, as in [@ji15order], we could use a caching distribution such that a set of the most popular files are cached with uniform probability, while all other less popular files are not cached at all. Using this caching policy for RF-GCC is proved in [@ji15order] to result in performance that is within a constant factor of the optimal one.
Alternatively, consider a setting where all files are equally popular but have different distortion-rate functions, which corresponds to different variances $\{\sigma_n^2 \}$ for Gaussian sources. In this case, CC-CM prioritizes the caching of files that have higher distortion. Similarly to [@ji15order], one could consider a simplified caching strategy, where a set of the files that are generated from sources with larger variance are cached with uniform probability, while all other files generated from sources with smaller variance are not cached at all. In line with [@ji15order], we propose a simplified caching placement that takes into account both the popularity of the files and their distortion-rate functions. Let us divide the library files into two groups $\mathcal G_1$ and $\mathcal G_2$, with sizes $\widetilde N$ and $N-\widetilde N$, respectively, and assign fixed storing ranges $ \widetilde\Omega_1$ and $ \widetilde\Omega_2$ to all versions of the files in groups $\mathcal G_1$ and $\mathcal G_2$, respectively. Then, the receivers fill their caches according to a truncated uniform caching distribution given as follows $$p_n = \begin{cases} {1}/ {\widetilde N} , &\hspace{0.5cm} n \in \mathcal G_1 \\ 0, \hspace{2cm}&\hspace{0.5cm} n \in \mathcal G_2 \end{cases},\notag$$ $$\Omega_n = \begin{cases} \widetilde\Omega_1, &\hspace{0.5cm} n \in \mathcal G_1 \\ \widetilde\Omega_2 , \hspace{2cm}&\hspace{0.5cm} n \in \mathcal G_2 \end{cases},$$ where the cut-off index $\widetilde N \geq M$ and values $\widetilde\Omega_1$ and $\widetilde\Omega_2$ are a function of the system parameters, and are derived from solving . We refer to the resulting caching strategy as the Truncated Random Fractional (TRF) caching. Intuitively, it is more likely that group $\mathcal G_1$ contains the more popular files that are also generated from sources with higher variances. Group $\mathcal G_2$ contains all other files that are less popular and that are generated from sources with lower variances.
Then, $$\nonumber M_{n} = \begin{cases} \widetilde M =M/ {\widetilde N} &\hspace{0.1cm} n \in \mathcal G_1 \\ 0 \hspace{1cm}&\hspace{0.2cm} n \in \mathcal G_2 \end{cases},$$ $$\nonumber \widetilde R_{n} = \begin{cases} \widetilde R_1 = \widetilde \Omega_1 -\widetilde M &\hspace{0.2cm} n \in \mathcal G_1 \\ \widetilde R_2 = \widetilde \Omega_2 \hspace{1cm}&\hspace{0.2cm} n \in \mathcal G_2 \end{cases},$$ and from , a packet of file $n\in[N]$ is cached at any receiver with probability $$p_n^c = \begin{cases} {\widetilde M}/ (\widetilde M+\widetilde R_1) &\hspace{0.5cm} n \in \mathcal G_1 \\ 0 \hspace{2cm}&\hspace{0.5cm} n \in \mathcal G_2 \end{cases}.$$ The optimal values for $\widetilde N$ (and hence $\widetilde M$), $\widetilde R_1$, $\widetilde R_2$ and $\{ \widehat R_{k,\dbf}\}$ are derived from \[eq: RLFU\] $$\begin{aligned} {4} &{\text{min}} \hspace{0.2cm} && \sum_{\dbf \in \mathfrak D} \Pi_{\dbf} \bigg(\frac{1}{K} \sum_{k=1}^{K}\sigma_{d_k}^{2}2^{-2(M_{d_k}+\widetilde R_{d_k} + \widehat R_{k, \dbf})} \bigg) \\ & \text{s.t.} && \min \bigg\{ \widetilde \Psi^{(1)}\Big( M, \widetilde N, \widetilde R_1, \widetilde R_2, \widetilde G \Big), \bar\Psi^{(2)}\Big( \qbf,\{ \length_{n}\} \Big) \bigg\} + \sum_{\dbf \in \mathfrak D} \Pi_{\dbf}\sum\limits_{k=1}^{K} \widehat R_{k, \dbf} \leq R,\label{subeq:RateRLFU }\\ &&& \widetilde \Psi^{(1)}\Big( M, \widetilde N, \widetilde R_1, \widetilde R_2, \widetilde G \Big) = \frac{\widetilde R_1 ( M + \widetilde N\widetilde R_1)}{ M} \left (1-\left (\frac{\widetilde N\widetilde R_1}{ M + \widetilde N\widetilde R_1} \right)^{K\widetilde G } \right ) + K (1-\widetilde{G})\widetilde R_2 \nonumber \\ &&& \widetilde N, \widetilde R_1,\widetilde R_2, \widehat R_{k, \dbf}\geq 0, \hspace{0.3cm} \forall (k, \dbf) \in [K] \times \mathfrak D \end{aligned}$$ where $\widetilde G = \sum\limits_{n\in\mathcal G_1} q_{n}$, and ${\bar \Psi}^{(2)}\Big(\qbf,\mu,\pbf,\{{\length}_{n}\} \Big)$ is defined in . The first term in , $\widetilde \Psi^{(1)}\Big( M, \widetilde N, \widetilde R_1, \widetilde R_2, \widetilde G \Big)$, is the expected coded multicast rate achieved by TRF-GCC for files in group $\mathcal G_1$, derived using Theorem \[thm:Sym user\] and by applying Jensen’s inequality as explained in [@ji15order Appendix B]. The second term is the expected uncoded multicast rate for files in group $\mathcal G_2$, from which no packet has been cached in the network. We refer to the resulting scheme as the CC-CM scheme that adopts TRF-GCC. ### **Symmetry Across Receivers and Files**  \[sec: Uniform\] The simplest network setting consists of all receivers having equal-size caches, uniform demand distributions, and all files (sources) having the same distribution, i.e. $$M_{k} = M,\; q_{k,n} = \frac{1}{N}, \;\sigma_{n}^{2} = \sigma^{2} , \quad \text{for all } (k,n) \in [K]\times[N].$$ Due to the symmetry, it can be immediately verified that both the optimization problem in and the relaxed version of result in uniform caching distribution $p_{k,n} = \frac{1}{N}$, and a unique storing range $\Omega_{k,n} = \widetilde \Omega$ for all $(k,n)\in[K]\times[N]$. In this setting, $M_{k,n}=\widetilde M$ and $R_{k, \dbf}= \widetilde R_{k, d_{k}}=\widetilde R$, and therefore from we have $p^c = {\widetilde M}/{(\widetilde M+\widetilde R)}$. It is immediate to see that in this setting the optimal solution assigns $\widehat R_{k, \dbf}=0$ for any demand $\dbf\in\mathfrak D$, and that we only need to account for the per-receiver coded rates.
The optimal values of $\widetilde M^*$ and $\widetilde R^*$ are derived using the relaxed version of problem by further particularizing the expected coded multicast rate given in Theorem \[thm:Sym user\] to symmetric files, as follows $$\begin{aligned} &{\text{min}} & & \sigma^{2}2^{-2(\widetilde M+\widetilde R)} \\ & \text{s.t.} & & \Big(\widetilde M+\widetilde R\Big)\, \min\bigg\{ \frac{\widetilde R}{\widetilde M} \bigg (1- \Big (\frac{\widetilde R}{\widetilde M+\widetilde R} \Big)^K \bigg) ,\; \bigg(1-\Big(1-\frac{1}{N}\Big)^K\bigg)^N \bigg\} \leq R,\\ &&& \widetilde M \leq M, \;\;\; \widetilde M, \widetilde R \geq 0. \end{aligned}\label{eq: Uniform}$$ Numerical Results =================  \[sec:Simulations\] In this section, we numerically compare the performance of the LC-U and CC-CM content delivery schemes proposed in Secs. \[sec: Unicast\] and \[sec: Multicast\] using the asymptotic closed-form results ($F\rightarrow\infty$) provided in Secs. \[sec:Symmetrical cc-cm\] and \[sec: Uniform\]. We consider a network composed of $K= 20$ receivers and a library with $N = 100$ files, which are requested by all receivers according to a Zipf distribution $\qbf$ with parameter $\alpha$, where $q_{n} = {n^{-\alpha}}/{\sum_{n= 1}^{N} n^{-\alpha}}$ for $n =1,\dots,N$. Fig. \[fig:asymptotic\] (a) displays the expected distortion achieved with the LC-U scheme [(exact)]{} and the CC-CM scheme [(upper bound)]{} using TRF-GCC. In order to reduce the complexity of problem , we assume that the per-receiver uncoded rates $\{\widehat R_{k,\dbf}\}$ are independent of the demand and only depend on the file indices. Therefore, the CC-CM curve shown in Fig. \[fig:asymptotic\] (a) provides an upper bound on the one resulting from solving . It is assumed that all receivers have the same cache size, $\alpha = 0.6$, and $\sigma_{n}^{2}$ is uniformly distributed in the interval $[0.7,1.6]$. The distortions have been plotted (on a logarithmic scale) for rate budget values of $R \in \{2,5,8\}$ bits/sample as receiver cache sizes vary from $5$ to $100$ bits/sample. As expected, CC-CM significantly outperforms LC-U in terms of expected distortion. This means that for a given rate budget $R$, CC-CM is able to deliver higher-rate file versions to the receivers, reducing their reconstruction distortions. Specifically, for rate budget $R= 2$ and cache size $M=50$, CC-CM achieves a $2.1\times$ reduction in expected distortion compared to LC-U, and for larger rate budget $R=8$ the gain of CC-CM increases to $5.4\times$ for the same cache size $M=50$. [0.45]{} ![Distortion-memory trade-off in a network with $K = 20$ receivers, $N= 100$ files, and Zipf demand distribution with parameter (a) $\alpha = 0.6$, and (b) $\alpha = 0$ (uniform demands).[]{data-label="fig:asymptotic"}](MulticastSymmetricApproxLOG.pdf "fig:"){width="0.87\linewidth"} [0.45]{} ![Distortion-memory trade-off in a network with $K = 20$ receivers, $N= 100$ files, and Zipf demand distribution with parameter (a) $\alpha = 0.6$, and (b) $\alpha = 0$ (uniform demands).[]{data-label="fig:asymptotic"}](MulticastUniformUsersFilesLOG.pdf "fig:"){width="0.87\linewidth"} In Fig. \[fig:asymptotic\] (b), we consider a homogeneous network with uniform file popularity ($\alpha = 0$) and $\sigma_{n}^{2} = 1.5 $, for all $ n\in [N]$, $N = 100$. The expected distortions achieved for LC-U and CC-CM (using RF-GCC) are plotted for the rate budget values of $R \in \{2,5,10\}$ bits/sample as receiver cache sizes vary from $5$ to $100$ bits/sample.
It is observed that the gains achieved by CC-CM are even higher in this scenario, which result from the increased coded multicast opportunities that arise when files have uniform popularity [@ji15order]. In this case, for $R=10$ and $M=50$, the expected distortion achieved with CC-CM is $9.5$ times less than with LC-U, and the improvement factor increases to $14\times$ for cache capacity $M=70$. Conclusion ==========  \[sec: Conclusion\] In this paper, we have investigated the use of caching in broadcast networks for enhancing video streaming quality, or in a more abstract sense, reducing source distortion. During low traffic hours, receivers cache low rate versions of the video files they are interested in, and during high traffic hours further enhancement layers are delivered to enhance the video playback quality. We have proposed two cache-aided content delivery schemes that differ in performance, computational complexity and required coding overhead. We have shown that while local caching and unicast transmission can be used to improve reconstruction distortion without the need for global coordination, the use of cooperative caching and coded multicast transmission is able to provide a $10\times$ improvement in expected achievable distortion in a network with $20$ users and $100$ files by delivering more enhancement video layers with the same available broadcast resources. We have characterized the distortion-memory trade-offs for both schemes, and our numerical results have confirmed the gains that can be achieved by exploiting coding across the cached and requested content during multicast transmissions. As a subproblem to our main problem, we have generalized the setting in [@ji15order] to one that delivers different versions of library files to the users, thereby providing a solution to the lossy caching problem studied in [@yang2018coded]. Proof of Theorem \[thm:demand\] {#App:demand} =============================== The proof is based on a generalization of the proof in [@ji15order Appendix A] to a setting where receivers have different cache sizes, different file preferences, and where they request degraded versions of the same file. We upper bound the asymptotic ($\factor \rightarrow\infty$) coded multicast rate achieved by the GCC algorithm. As described in [@ji15order Sec III-B], the GCC algorithm applies two greedy graph coloring-based algorithms, GCC$_1$ and GCC$_2$, to the index coding conflict graph $\mathcal H_{\Cbf,\Qbf}$, constructed based on the packet-level cache configuration $\Cbf$ and demand realization $\Qbf$. Then, GCC determines the total number of distinct colors assigned by each algorithm to the graph vertices, and selects the coloring that results in a smaller number of distinct colors. Coded multicast rate achieved by GCC$_1$ for demand $\dbf$: ----------------------------------------------------------- For a given vertex $v$ in the conflict graph $\mathcal H_{\Cbf,\Qbf}$, corresponding to packet $\alpha(v)$ requested by receiver $\beta(v)$, we refer to the unordered set of receivers $\{\beta(v), \eta(v)\}$ as the [*receiver label*]{} of $v$, which corresponds to the set of receivers either requesting or caching packet $\alpha(v)$. Note that by definition of the conflict graph, two vertices with the same receiver label are not connected via an edge, i.e., they do not interfere.
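Before bounding the number of colors, it may help to see the receiver-label bookkeeping in code. The sketch below (our own toy representation, not the GCC$_1$ implementation of [@ji15order], and ignoring the subgraph splitting introduced below) groups requested packets by receiver label and counts, per label, the number of coded transmissions needed when each XOR serves one packet per receiver in the label — the same counting that the bound derived next formalizes.

```python
from collections import defaultdict

def coded_transmissions_by_label(vertices):
    """vertices: list of (packet_id, requester, cachers), where 'cachers' is the
    set of receivers caching the packet.  Vertices with the same receiver label
    {requester} | cachers but different requesters do not conflict, so one XOR
    can serve one packet per receiver in the label; each label therefore needs
    max over receivers of the number of its packets they request."""
    per_label = defaultdict(lambda: defaultdict(int))
    for pkt, requester, cachers in vertices:
        label = frozenset({requester} | set(cachers))
        per_label[label][requester] += 1
    return sum(max(counts.values()) for counts in per_label.values())

# toy example: receiver 0 requests p1, p2 (cached at 1); receiver 1 requests p3 (cached at 0)
vertices = [("p1", 0, {1}), ("p2", 0, {1}), ("p3", 1, {0})]
print(coded_transmissions_by_label(vertices))   # -> 2: send p1 XOR p3, then p2 alone
```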
Let $\mathcal J(\Cbf,\Qbf)$ denote the number of distinct colors assigned by algorithm GCC$_1$ to graph $\mathcal H_{\Cbf,\Qbf}$, which, by definition, is the number of independent sets[^5] selected by the algorithm. By construction, GCC$_1$ generates independent sets that are composed of vertices with the same receiver label. We upper bound the number of independent sets in $\mathcal H_{\Cbf,\Qbf}$ by first splitting the graph into $K$ subgraphs, and upper bounding $\mathcal J(\Cbf,\Qbf)$ with the sum of the number of independent sets found by GCC$_1$ in each of the $K$ subgraphs. For demand $\dbf$, let the ordered set $\order_1,\dots,\order_K$ denote a permutation of receiver indices $\{1,\dots,K \}$ such that $\length_{\order_1,d_{\order_1}}\leq\dots\leq\Omega_{\order_K, d_{\order_K}}$. Then, $\mathcal H_{\Cbf,\Qbf}$ is split into at most $K$ subgraphs such that subgraph $i\in[K]$ is composed of all the vertices in $\Qbf$, denoted by ${{\cal V}}^{(i)}$, [that represent the requested packets that belong to the portion of files from bit $\length_{\order_{i-1},d_{\order_{i-1}}} \factor$ to bit $\length_{\order_i,d_{\order_i}}\factor$ (indexed from the beginning of a file) demanded by receivers $\{\order_i,\dots,\order_K\}$. Note that the first subgraph corresponding to $i=1$ is composed of all vertices that represent requested packets from the first $ \length_{\order_1,d_{\order_1}} \factor$ bits of all files in demand $\dbf$, and therefore we define $\length_{\order_0,d_{\order_0}} =0$. Subgraph $i$ is empty, i.e., has no vertices or edges, if $\length_{\order_{i-1},d_{\order_{i-1}}} = \length_{\order_i,d_{\order_i}}$, and consequently $\mathcal V^{(i)} = \emptyset$. By construction, subgraph $i$ only contains packets of files requested by receivers $\{\order_i,\dots,\order_K\}$, and after coloring subgraph $i$ there are no remaining packets requested by receiver $\order_i$ that need to be delivered.]{} Let us denote the number of independent sets in subgraph $i\in[K]$ by $\mathcal J_i(\Cbf,\Qbf)$. We find an upper bound on $\mathcal J_i(\Cbf,\Qbf)$, $i\in[K]$, proceeding as in [@ji15order Appendix A], by enumerating all possible receiver labels, and by further upper bounding the number of independent sets that GCC$_1$ generates for each receiver label. [We define $\eta_i(v)$ as the set of all receivers in $\{\order_{i},\dots,\order_{K}\}$ which have cached packet $\alpha(v)$ corresponding to vertex $v\in\mathcal V^{(i)}$; therefore, $\eta_i(v)=\eta(v)\cap\{\order_i,\dots,\order_K\}$. Let $\mathcal K_{\ell} \subseteq \{\order_{i},\dots,\order_{K}\}$ denote a set of $\ell\in\{1,\dots, K-i+1 \}$ receivers,]{} and let $\mathcal J_{\Cbf,\Qbf}^{(i)}(\mathcal K_{\ell})$ denote the number of independent sets generated by GCC$_1$ with receiver label $\mathcal K_{\ell} $ for packet-level demand $\Qbf$ corresponding to subgraph $i$ with vertex set ${{\cal V}}^{(i)}$. As stated in [@ji15order Appendix A], a necessary condition for the existence of an independent set with receiver label $\mathcal K_{\ell} = \{\beta(v), \eta_i(v)\} $ is that for any receiver $k\in \mathcal K_\ell$, there exists a vertex $v\in{{\cal V}}^{(i)}$ such that: 1) $\beta(v) = k$, i.e., receiver $k$ is requesting packet $\alpha(v)$, and 2) $\eta_i(v) = \mathcal K_{\ell}\setminus \{k\}$, i.e., $\alpha(v)$ is cached by all receivers in $\mathcal K_{\ell}\setminus \{k\}$, and not by any other receiver in $\{\order_i,\dots,\order_{K}\} $.
Then, for a given $\Cbf$ and $\Qbf$, the number of generated independent sets becomes $$\begin{aligned} \mathcal J_i(\Cbf,\Qbf) = \sum_{\ell=1}^{ K-i+1} \sum_{\mathcal K_{\ell} \subseteq \{\order_i,\dots,\order_{K}\}} \mathcal J_{\Cbf,\Qbf}^{(i)}(\mathcal K_{\ell}),\label{eq:indic}\end{aligned}$$ $$\begin{aligned} \mathcal J_{\Cbf,\Qbf}^{(i)}(\mathcal K_{\ell}) &= \max\limits_{k\in\mathcal K_{\ell}} \sum\limits_{ \substack{v \in {{\cal V}}^{(i)}:\\ \beta(v)=k} } \mathbbm{1} \Big\{\eta_i(v) = \mathcal K_{\ell}\setminus \{k\}\Big\}, \label{eq:indic2}\end{aligned}$$ where $\mathbbm{1} \Big\{\eta_i(v) = \mathcal K_{\ell}\setminus \{k\}\Big\}$ is a random variable, whose expected value gives the probability that vertex $v$ corresponding to file $d_k$ requested by receiver $k$ belongs to an independent set associated with receiver label $\mathcal K_\ell$. In other words, it indicates whether packet $\alpha(v)$ can be encoded into a linear codeword intended for all the receivers in $\mathcal K_\ell$. For any vertex $v\in\mathcal V^{(i)}$, the indicator function $ Y_{\mathcal K_\ell,k} \triangleq \mathbbm{1}\Big\{\eta_i(v) = \mathcal K_{\ell}\setminus \{k\}\Big\}$ takes value 1 in the event that packet $\alpha(v)$ is cached at all the receivers in $\mathcal K_\ell\setminus \{k\}$, and is not cached at any of the receivers in $\{\order_i,\dots,\order_{K}\}\setminus \mathcal K_\ell$. $Y_{\mathcal K_\ell,k}$ is a Bernoulli random variable with parameter $${\phi_i }(\mathcal K_\ell,k,d_k) \triangleq \prod\limits_{u\in \mathcal{K}_{\ell}\backslash \{k\}} p^c_{u,d_{k}} \prod\limits_{u\in { \{\order_i,\dots,\order_{K}\}}\setminus \mathcal{K}_{\ell}}{ (1-p^c_{u,d_{k}})},\label{eq: phi app}$$ where $p^c_{k,n}$ denotes the probability that a packet from version $k$ of file $n\in[N]$ is cached at receiver $k\in[K]$, and is given by $$\begin{aligned} p^c_{k,n} ={ \binom{ \Omega_{k,n} F/T-1 }{ M_{k,n} F/T-1 } }\bigg/{ \binom {\Omega_{k,n} F/T}{ M_{k,n} F/T} } = \frac{M_{k,n}}{M_{k,n}+ \widetilde R_{k,n}} = p_{k,n }\frac{M_{k}}{\Omega_{k,n}} .\end{aligned}$$ Similar to [@ji15order Appendix A], it can be shown that as $\factor\rightarrow \infty$ with fixed $T$, we have $$\lim\limits_{\factor/T \rightarrow\infty} \mathbb{P} \bigg(\bigg| \frac{Y_{\mathcal K_\ell,k} }{(\length_{\order_i,d_{\order_i}} -\length_{\order_{i-1},d_{\order_{i-1}}} ) \factor/T} \, -\, (1-p^c_{k,d_k}) \phi_i (\mathcal K_\ell,k,d_k) \bigg| \leq \epsilon \bigg) = 1.$$ As $\factor\rightarrow\infty$, the expected number of independent sets with label $\mathcal K_\ell$ is upper bounded by $$\begin{aligned} {\mathbb E}_{\Cbf}\bigg[ \mathcal J_{\Cbf,\Qbf}^{(i)}(\mathcal K_{\ell}) \Big|\Cbf \bigg] &= {\mathbb E}_{\Cbf}\bigg[ \max\limits_{k\in\mathcal K_{\ell}} \sum\limits_{ \substack{v \in {{\cal V}}^{(i)}:\\ \beta(v)=k} } \mathbbm{1} \Big\{\eta_i(v) = \mathcal K_{\ell}\setminus \{k\}\Big\} \Big|\Cbf \bigg] \notag\\ & \leq \max\limits_{k\in\mathcal K_{\ell}} \Big\{ (1-p^c_{k,d_{k}}) \phi_i(\mathcal K_\ell,k,d_k) \Big\} (\length_{\order_i,d_{\order_i}} -\length_{\order_{i-1},d_{\order_{i-1}}} ) \frac{\factor}{T} .
\label{eq:J1}\end{aligned}$$ Therefore, an upper bound on the asymptotic coded multicast rate for demand $\dbf$ and a given set of caching distributions $\{\pbf_k \}_{k=1}^K$, can be derived from and as follows $$\begin{aligned} \Psi_{\dbf}^{(1)}\Big( \{\mu_k\} , \{\pbf_{k}\},& \{\length_{k,n} \}\Big) \triangleq \frac{1}{\factor/T}\,{\mathbb E}_{\Cbf} \sum_{i=1}^K \Big[ \mathcal J_i(\Cbf,\Qbf) \Big|\Cbf \Big]\notag\\ &\leq \sum_{i=1}^K \sum_{\ell=1}^{K-i+1} \sum_{\mathcal K_{\ell} \subseteq\{\order_i,\dots,\order_{K}\}} \; (\length_{\order_i,d_{\order_i}} -\length_{\order_{i-1},d_{\order_{i-1}}} ) \max\limits_{k\in\mathcal K_{\ell}} \; (1-p^c_{k,d_{k}} ) \phi_i (\mathcal K_\ell,k,d_k) \notag .\end{aligned}$$ Coded multicast rate achieved by GCC$_2$ for demand $\dbf$: ----------------------------------------------------------- Algorithm GCC$_2$ corresponds to uncoded multicast transmissions. As described in [@ji15order Sec III-B], GCC$_2$ randomly selects a vertex $v$ in the conflict graph $\mathcal H_{\Cbf,\Qbf}$ and generates independent sets composed of all vertices representing the same packet $\alpha(v)$. Then, it assigns the same color to all the vertices in each independent set. This corresponds to transmitting a total number of packets equal to the number of distinct requested packets. In order to evaluate this value for a given set of cache sizes $\{\mu_k\}_{k=1}^K$, we upper bound it with the number of packets that need to be delivered in a scheme where receiver $k\in[K]$ has cached the first $p_{k,d_k}\mu_k\,{\tau}/{T}$ packets from the total $\length_{k,d_k}\,{\tau}/{T}$ packets of version $k$ of file $d_k\in[N]$. In this case, for a requested file $n\ni\dbf$ the longest requested version of file $n$, i.e., the one of rate $\max\limits_{k:d_k = n} \, \length_{k,d_k}$, needs to be transmitted. Given that receivers have heterogeneous cache sizes and caching distributions, only $\min\limits_{k:d_k = n} \, p_{k,d_k}\mu_k \, {\tau}/{T}$ packets of this file have been cached by all receivers requesting this file, and therefore, the multicast rate for demand $\dbf$ is upper bounded by $$\begin{aligned} \Psi_{\dbf}^{(2)}\Big( \{\mu_k\} , \{\pbf_{k}\}, \{\length_{k,n} \} \Big) \triangleq \sum_{n =1}^N \mathbbm{1} \{n \ni \dbf\} \Big(\max\limits_{k:d_k = n} \,\Omega_{k,n} - \min\limits_{k:d_k = n} \,p_{k,n}\mu_k \Big) . \label{eq:GCC2 app1}\end{aligned}$$ Proof of Theorem \[thm:general\] {#App:general} ================================ We derive an upper bound on the expected coded multicast rate required to deliver a version of file $n\in[N]$ with rate $\length_{k,n}$ to receiver $k\in[K]$, by taking the expected value of the rate given in Theorem \[thm:demand\], and derived in Appendix \[App:demand\], over all possible demands $\dbf\in\mathfrak D$.
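Before taking expectations, note that the demand-specific uncoded bound $\Psi_{\dbf}^{(2)}$ derived in Appendix \[App:demand\] is straightforward to evaluate directly; the following is a minimal sketch with toy inputs of our own choosing (not values from the paper).

```python
def psi2_demand(d, Omega, p, mu):
    """Demand-specific uncoded (GCC_2) rate bound: for each requested file,
    transmit its longest requested version minus the packets cached by all
    of its requesters, as in Eq. (GCC2 app1)."""
    rate = 0.0
    for n in set(d):
        ks = [k for k, dk in enumerate(d) if dk == n]
        rate += max(Omega[k][n] for k in ks) - min(p[k][n] * mu[k] for k in ks)
    return rate

# toy example: 3 receivers, 2 files (illustrative values only)
d     = [0, 0, 1]                              # d[k] = file requested by receiver k
Omega = [[1.5, 1.0], [1.2, 1.0], [1.5, 0.8]]   # Omega[k][n], storing ranges
p     = [[0.6, 0.4], [0.6, 0.4], [0.6, 0.4]]   # p[k][n], caching distributions
mu    = [1.0, 0.5, 1.0]                        # cache sizes
print(psi2_demand(d, Omega, p, mu))            # (1.5 - 0.3) + (0.8 - 0.4) = 1.6
```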
Expected coded multicast rate achieved by GCC$_1$ ------------------------------------------------- Let $\lambda_i(\mathcal K_\ell,k,n ) {\stackrel{\Delta}{=}}(1-p^c_{k,n}) \phi_i(\mathcal K_\ell,k,n ) $ for $\phi_i(\mathcal K_\ell,k,n ) $ defined in , then by taking the expectation of the rate in Theorem \[thm:demand\], we have $$\begin{aligned} \mathbb E \Big[ \Psi_{\dbf}^{(1)}\Big( \{\mu_k \}, &\{\pbf_{k} \}, \{\length_{k,n}\} \Big) \Big] =\mathbb E \bigg[ \sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \sum_{\mathcal K_{\ell} \subseteq \{\order_{i},\dots,\order_K \}} \Big(\length_{\order_i,d_{\order_i}} - \length_{\order_{i-1},d_{\order_{i-1}}} \Big) \max\limits_{k \in \mathcal K_\ell} \lambda_i(\mathcal K_\ell,k,d_k) \bigg] \notag\\& = \sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \mathbb E \bigg[\sum_{\mathcal K_{\ell} \subseteq \{\order_{i},\dots,\order_K \}} \Big(\length_{\order_i,d_{\order_i}} - \length_{\order_{i-1},d_{\order_{i-1}}} \Big) \max\limits_{k \in \mathcal K_\ell} \lambda_i(\mathcal K_\ell,k,d_k) \bigg], \label{eq: avg first step}\end{aligned}$$ where the expectation is taken over all subsets $\mathcal K_\ell$ of the set $\{\order_i,\dots,\order_K \}$ which is a function of the random demand realization $\dbf$, and therefore, [the order of the expectation and summation cannot be exchanged]{}. Consequently, we upper bound with the delivery rate in a network where receiver $k\in[K]$ requests equal-length versions of all files in the library, each of size $ \length^*_{k} \tau$ bits with $\length^*_{k} \triangleq\max_{n\in[N]} \length_{k,n} $, i.e., it requests versions of files with the largest rate. For a given set of $\length^*_{1}, \dots, \length^*_{K}$, let the ordered set $\order_1^*,\dots,\order_K^*$ denote a permutation of receiver indices $\{1,\dots,K \}$ such that $\length_{\order_1^*}^*\leq\dots\leq\Omega_{\order_K^*}^*$. Note that the set $\order_1^*,\dots,\order_K^*$ is independent of the random demand $\dbf$.
Then, from we have $$\begin{aligned} \mathbb E \Big[ &\Psi_{\dbf}^{(1)}\Big( \{\mu_k \}, \{\pbf_{k} \}, \{\length_{k,n}\} \Big) \Big] \leq \sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \mathbb E \bigg[\sum_{\mathcal K_{\ell} \subseteq \{\order_{i}^*,\dots,\order_K^* \}} \Big(\length_{\order_i^*}^* - \length_{\order_{i-1}^*}^* \Big) \max\limits_{k \in \mathcal K_\ell} \lambda_i(\mathcal K_\ell,k,d_k) \bigg] \notag\\ &\stackrel{(a)}{=} \sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \sum_{ \dbf \in {\mathfrak D}} \Big(\prod_{k \in [K]} q_{k,d_k}\Big)\sum_{\mathcal K_{\ell} \subseteq \{\order_{i}^*,\dots,\order_K^* \}} \Big(\length_{\order_i^*}^* - \length_{\order_{i-1}^*}^* \Big) \max\limits_{k \in \mathcal K_\ell} \lambda_i(\mathcal K_\ell,k,d_k) \notag\\ &\stackrel{(b)}{=} \sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \sum_{ \dbf \in {\mathfrak D}} \Big(\prod_{k \in { [K]} } q_{k,d_k}\Big)\sum_{\mathcal K_{\ell} \subseteq \{\order_{i}^*,\dots,\order_K^* \}} \Big(\length_{\order_i^*}^* - \length_{\order_{i-1}^*}^* \Big) \notag\\ &\qquad \qquad\bigg( \sum_{n =1}^{N} \sum_{k \in {\mathcal K}_\ell } \mathbbm{1} \Big\{ (k,n)= \argmax\limits_{( s,t): s\in\mathcal K_\ell, t = d_s }\, \,\lambda_i(\mathcal K_\ell,s,t) \Big\} \;.\; \lambda_i(\mathcal K_\ell,k, n) \bigg) \notag\\ &\stackrel{(c)}{=}\sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \sum_{\mathcal K_{\ell} \subseteq \{\order_{i}^*,\dots,\order_K^* \}} \sum_{n =1}^{N} \sum_{k \in {\mathcal K}_\ell } \Big(\length_{\order_i^*}^* - \length_{\order_{i-1}^*}^* \Big) \lambda_i(\mathcal K_\ell,k, n) \, \mathbb E\Big[ \mathbbm{1} \Big\{ (k,n)= \argmax\limits_{( s,t): s\in\mathcal K_\ell, t = f_s }\, \,\lambda_i(\mathcal K_\ell,s,t) \Big\} \Big] \notag\\ &\stackrel{(d)}{=}\sum_{i=1}^{K} \sum_{\ell=1}^{K-i+1} \sum_{\mathcal K_{\ell} \subseteq \{\order_{i}^*,\dots,\order_K^* \}} \sum_{n =1}^{N} \sum_{k \in {\mathcal K}_\ell } \Big(\length_{\order_i^*}^* - \length_{\order_{i-1}^*}^* \Big) \lambda_i(\mathcal K_\ell,k, n) \,\Gamma_i(\mathcal K_\ell,k,n) ,\label{eq:exp}\end{aligned}$$ where $(a)$ follows by writing the expectation with respect to the demand vector $\dbf\in{\mathfrak D}$. Then $(b)$ follows from replacing $\max_{k\in\mathcal K_\ell}\lambda_i(\mathcal K_\ell,k, d_k)$ with a sum over all possible file-receiver indices $(k,n)$ of $\lambda_i(\mathcal K_\ell,k, n)$ multiplied by the indicator function that picks the maximum value, and $(c)$ follows since only the indicator function depends on the demand, and $(d)$ follows by denoting $$\begin{aligned} \Gamma_i({\mathcal K_\ell,k,n}) \triangleq \mathbb P\Big( (k,n)= \argmax\limits_{ ( s,t): s\in\mathcal K_\ell, t = f_s }\,\lambda_i(\mathcal K_\ell,s,t) \Big),\end{aligned}$$ which is the probability that file $n\ni \fbf$ requested by receiver $k\in\mathcal K_\ell$ maximizes the quantity $\lambda_i(\mathcal K_\ell,s,t) $, and where $\sum_{n=1}^{N}\sum_{k \in \mathcal K_{\ell}} \Gamma_i({\mathcal K_\ell,k,n}) =1$. 
Therefore, the expected coded multicast rate achieved by GCC$_1$ for a given set of caching distributions $\{\pbf_k \}_{k=1}^K$ is upper bounded by Expected coded multicast rate achieved by GCC$_2$ ------------------------------------------------- Taking the expectation of the rate given in over all demand realizations $\dbf\in\mathfrak D$ results in $$\begin{aligned} {\bar \Psi}^{(2)} \Big( \{\qbf_{k} \}, \{\Omega_{k,n}\} \Big)\triangleq \mathbb E \Big[ \Psi_{\dbf}^{(2)}\Big( \{\Omega_{k,n}\}\Big) \Big] &=\sum_{n=1}^N \mathbb E\Big[\mathbbm{1}\{n\ni \dbf\} \Big(\max\limits_{k:d_k=n} \Omega_{k,n} - \min\limits_{k:d_k=n} p_{k,n}\mu_k \Big) \Big] \notag\\ &\stackrel{(a)}{\leq} \sum_{n=1}^N \mathbb P\Big( n \ni \dbf\Big) \Big(\max\limits_{k\in[K]} \length_{k,n} - \min\limits_{k\in[K]} p_{k,n}\mu_k \Big) \notag\\ &= \sum_{n=1}^N \Big(1-\prod_{k=1}^K (1-q_{k,n})\Big)\Big(\max\limits_{k\in[K]} \length_{k,n} - \min\limits_{k\in[K]} p_{k,n}\mu_k \Big) \notag ,\end{aligned}$$ where $(a)$ follows since $\max\limits_{k:d_k=n} \Omega_{k,n} \leq \max\limits_{k\in[K]} \Omega_{k,n}$ and $\min\limits_{k:d_k=n} p_{k,n}\mu_k \geq \min\limits_{k\in[K]} p_{k,n}\mu_k$ for any $\dbf\in\mathfrak D$. Proof of Theorem \[thm:Sym user\] {#app:Sym user} ================================= The proof follows steps similar to those in [@ji15order Appendix A], and is based on the explanations given in Appendix \[App:demand\]. We upper bound the number of independent sets in ${\mathcal H}_{{\bf C}, {\bf Q}}$, by splitting the graph into $N$ subgraphs such that subgraph $i$ contains a subset of the packets of all requested files that have version length equal to or larger than the $i^\text{th}$ shortest version length. Let $J_i(\Cbf,\Qbf)$ denote the number of independent sets found by Algorithm GCC$_1$ in subgraph $i$. For a given $\Cbf$ and $\Qbf$, we upper bound the delivery rate with $\sum_{i}J_i(\Cbf,\Qbf)$. Let the ordered set $\zeta_1,\dots,\zeta_N$ denote a permutation of the file indices $\{1,\dots,N \}$ such that $\Omega_{\zeta_1}\leq\dots\leq\Omega_{\zeta_N}$. Then, for a given demand, subgraph $i$ is composed of all (if any) vertices in $\mathcal V$, denoted by $\mathcal V^{(i)}$, corresponding to packets in $\bf Q$ that belong to the portion of requested files from bit $\Omega_{\zeta_{i-1}}\tau$ to bit $\Omega_{\zeta_i}\tau$. Let us denote the set of receivers requesting a packet in subgraph $i$ by ${\mathcal K}^{(i)}$.
Following the procedure in Appendix A, the normalized number of independent sets generated by the algorithm becomes $$\begin{aligned} \frac{1}{\tau/T} \mathbb E_{\bf C}\Big[ \sum_{i=1}^N\mathcal J_i(\Cbf,\Qbf) \Big]&= \sum_{i=1}^N \sum_{\ell=1}^{|{\mathcal K}^{(i)}|}\sum_{{\mathcal K}_{\ell} \in {\mathcal K}^{(i)}} \Big( \Omega_{\zeta_{i}} - \Omega_{\zeta_{i-1}} \Big) \max\limits_{n\in{\bf f}_{\ell}} \sum\limits_{ \substack{v \in {{\cal V}}^{(i)}:\\ \alpha(v)\text{ belongs to } n} } \mathbbm{1} \Big\{\eta_i(v) = \mathcal K_{\ell}\setminus \{k\} \Big\}\notag\\ &= \sum_{i=1}^N \sum_{\ell=1}^{ |{\mathcal K}^{(i)}|} \binom{|{\mathcal K}^{(i)}|}{\ell} \Big( \Omega_{\zeta_{i}} - \Omega_{\zeta_{i-1}} \Big) \max\limits_{n\in{\bf f}_{\ell}} \lambda(|{\mathcal K}^{(i)}|,\ell, n) , \notag\end{aligned}$$ which follows due to the homogeneity across receivers, with $\lambda(K,\ell, n) $ given as $$\begin{aligned} \lambda(K,\ell, n) = ( p_n^c )^{\ell-1} ( 1-p_n^c )^{ K-\ell+1} .\end{aligned}$$ The expected delivery rate can be upper bounded by taking the expectation over all demands as: $$\begin{aligned} {\bar \Psi}^{(1)}\Big(\qbf,\mu,\pbf,\{{\length}_{n}\} \Big) &\leq \mathbb E\bigg[ \sum_{i=1}^N \sum_{\ell=1}^{ |{\mathcal K}^{(i)}|} \binom{|{\mathcal K}^{(i)}|}{\ell} \Big( \Omega_{\zeta_{i}} - \Omega_{\zeta_{i-1}} \Big) \max\limits_{n\in{\bf f}_{\ell}} \lambda(|{\mathcal K}^{(i)}|,\ell, n) \bigg]\notag\\ & = \sum_{i=1}^N\Big( \Omega_{\zeta_{i}} - \Omega_{\zeta_{i-1}} \Big) \mathbb E\bigg[ \sum_{\ell=1}^{ |{\mathcal K}^{(i)}|} \binom{|{\mathcal K}^{(i)}|}{\ell} \max\limits_{n\in{\bf f}_{\ell}} \lambda(|{\mathcal K}^{(i)}|,\ell, n) \bigg]\notag\\ & \stackrel{(a)}{\leq} \sum_{i=1}^N\Big( \Omega_{\zeta_{i}} - \Omega_{\zeta_{i-1}} \Big) \mathbb E\bigg[ \sum_{\ell=1}^{ |{\mathcal K}^{(i)}|} \binom{|{\mathcal K}^{(i)}|}{\ell} \sum_{ n\in \{\zeta_i,\dots, \zeta_N \}}\Gamma_i(|{\mathcal K}^{(i)}|,\ell, n) \lambda(|{\mathcal K}^{(i)}|,\ell, n) \bigg]\notag\\ & \stackrel{(b)}{\leq} \sum_{i=1}^N\Big( \Omega_{\zeta_{i}} - \Omega_{\zeta_{i-1}} \Big) \sum_{\ell=1}^{ {\widetilde K}_i } \binom{{\widetilde K}_i }{\ell} \Gamma_i({\widetilde K}_i , \ell, n) \lambda({\widetilde K}_i,\ell, n) \notag,\end{aligned}$$ where $(a)$ follows using the same trick as in Appendix B with $$\begin{aligned} \Gamma_i(K,{\ell,n} ) = \mathbb P\Big( n= \argmax\limits_{t\in\mathcal F_\ell} \;\; ( p_t^c )^{\ell-1} ( 1-p_t^c )^{K-\ell+1} \Big),\end{aligned}$$ denoting the probability that file $n\in{\mathcal F}_\ell$ chosen from a random set of $\ell$ files in $\{\zeta_i,\dots, \zeta_N \}$ maximizes $\lambda(K,{\ell,n} )$, and $(b)$ follows from Jensen’s inequality due to the concavity of the function over which the expectation is taken. $ {\widetilde K}_i = K \sum_{j=i}^N q_{\zeta_j}$ denoting the expected number of receivers in set ${\mathcal K}^{(i)}$. [^1]: We use $\tau$ instead of the conventional notation $F$ in [@maddah14decentralized; @maddah14fundamental; @ji15order] to avoid confusion with the number of source-samples in Sec. \[subsec: RAPP-GCC with SVC\]. [^2]: In our setting, the different versions of a file can correspond to different numbers of enhancement layers in successively refinable compression (further explained in Sec. \[subsec: RAPP-GCC with SVC\]), or could correspond to different-length portions of the same document. [^3]: A valid vertex coloring is an assignment of colors to vertices such that no two adjacent vertices are assigned the same color. 
[^4]: LFU is a local caching policy that, here, leads to all receiver caches having large overlaps, limiting the coding opportunities. [^5]: An independent set is a set of vertices in a graph, no two of which are adjacent.
--- abstract: 'Dark matter may be discovered through its capture in stars and subsequent annihilation. It is usually assumed that dark matter is captured after a single scattering event in the star; however, this assumption breaks down for heavy dark matter, which requires multiple collisions with the star to lose enough kinetic energy to become captured. We analytically compute how multiple scatters alter the capture rate of dark matter and identify the parameter space where the effect is largest. Using these results, we then show how multiscatter capture of dark matter on compact stars can be used to probe heavy $m_X \gg {\text{TeV}}$ dark matter with remarkably small dark matter-nucleon scattering cross-sections. As one example, it is demonstrated how measuring the temperature of old neutron stars in the Milky Way’s center provides sensitivity to high mass dark matter with dark matter-nucleon scattering cross-sections smaller than the xenon direct detection neutrino floor.' author: - Joseph Bramante - Antonio Delgado - Adam Martin bibliography: - 'dmulti.bib' title: Multiscatter stellar capture of dark matter --- Introduction {#sec:intro} ============ The nature of dark matter remains an outstanding mystery of our cosmos. Terrestrial direct detection experiments have become exceptionally sensitive to dark matter in the mass range ${\rm GeV} - {\rm 10~ TeV}$. While there are proposals for probing lighter dark matter, finding heavy dark matter, which has a lower particle flux through terrestrial detectors, presents a special challenge. Compact stars, which have a much larger fiducial mass than terrestrial detectors, provide an alternative means to probe dark matter. Specifically, pairs of dark matter particles captured via interactions with the star can annihilate, leaving a distinct thermal trace. Prior studies of dark matter’s accumulation in stars have considered the case that dark matter capture occurs after dark matter scatters once off a stellar constituent ($e.g.$ nucleus, nucleon, electron). This is appropriate when the scattering cross section between dark matter and the constituent is small, leading to a mean path length that is large compared to the size of the star, so that at most one scatter is expected [@Press:1985ug; @Gould:1987ir]. In this paper, we consider the case where the single scatter approximation breaks down and the dark matter is predominantly captured by scattering multiple times. We derive equations suitable for computing multiscatter capture of dark matter in stars, and as one application, show that observations of neutron stars in our galaxy would be sensitive to super-PeV mass dark matter that annihilates to Standard Model (SM) degrees of freedom, for dark matter-nucleon scattering cross-sections smaller than the xenon direct detection atmospheric neutrino floor. To become captured while transiting through a star, dark matter must slow to below the stellar escape speed by recoiling against stellar constituents. During a single transit through the star, if the number of such interactions exceeds unity, $$\begin{aligned} N \approx n \sigma R \geq 1, \label{eq:ndim}\end{aligned}$$ dark matter will be slowed (and possibly captured) by multiple scatters. Here $n$ is the number density of stellar constituents, $R$ is the radius of the star, and $\sigma$ is the cross-section for dark matter to scatter off a stellar constituent.
In white dwarfs, $\sigma$ is typically the cross-section for scattering off nuclei ($\sigma_{NX}$), while in neutron stars $\sigma$ is typically the cross-section for scattering off nucleons ($\sigma_{nX}$). One might also consider dark matter which predominantly scatters with electrons, in which case $\sigma$ would be the dark matter-electron cross-section. Often the stellar mass is related to the number of scattering sites by $M \simeq m\, N_n$, with $m$ the mass of a scattering site and $N_n$ the number of scattering sites per star. Keeping the stellar mass (or, equivalently, $N_n$) fixed while varying the star’s size, Eq.  implies that the typical number of dark matter scatters inside a star scales as $$\begin{aligned} N \propto \frac{N_{n}\, \sigma R}{\frac{4}{3} \pi R^3} \sim \frac{N_{n} \sigma}{R^2}.\end{aligned}$$ As explored hereafter, this means that multiscatter capture is particularly relevant for dark matter accumulating in compact stars, $i.e.$ white dwarfs and neutron stars. Specifically, fixing $\sigma$ and comparing our Sun with an equivalent mass white dwarf ($R \sim 10^{-2}~ R_{\rm sun}$) or neutron star ($R \sim 10^{-5}~ R_{\rm sun}$), the smaller size of the compact stars leads to a $10^4$ enhancement in the average number of scatters for white dwarfs relative to the Sun, and a $10^{10}$ relative enhancement for neutron stars. While multiscatter can occur for dark matter of any mass, multiscatter capture is most important for heavy dark matter. This is primarily for two reasons. First, in order to be captured, the dark matter must lose a sufficient amount of its energy through collisions with scattering sites in the star. The fraction of the dark matter’s energy lost in each collision depends on the scattering angle, but is proportional to the constituent mass $m$ divided by the dark matter mass $m_X$ in the limit that $m_X \gg m$. Therefore, heavier dark matter loses less energy per scatter, making gravitational capture after a single scatter less likely and multiscatter capture more important. Second, the range of dark matter-nucleon cross-sections for which heavy (PeV-EeV) dark matter capture in neutron stars proceeds predominantly through multiscatter energy losses happens to coincide with dark matter-nucleon cross-sections just beyond the reach of next-generation direct detection experiments. Furthermore, it will be demonstrated in Section \[sec:results\] that PeV-EeV mass dark matter can be captured by multiple ($\sim 10 - 10^3$ times) scatters in neutron stars even for dark matter-nucleon cross-sections below the xenon direct detection “neutrino floor," $\sigma_{\rm nX} \sim 10^{-45}~{\rm cm^2} (m_X/{\rm PeV})$ [@Billard:2013qya]. For these reasons, a primary focus of this paper will be dark matter with mass $m_X \gg {\text{TeV}}$. The dark matter masses just mentioned are well above the canonical WIMP mass scale of about 100 GeV. Dark matter with a weak scale mass has received deserved attention in the past decade because it can reproduce the observed dark matter abundance as a thermal relic. Considerable experimental efforts have bounded the nucleon scattering cross-section for weak-scale mass ($m_X \sim 100~{\rm GeV}$) dark matter to $\sigma_{nX} \lesssim 10^{-46}~{\rm cm^2}$ [@Aprile:2012nq; @Akerib:2016vxi; @Tan:2016zwf].
On the other hand, it has been shown that if one deviates from the minimal cosmological scenario, dark matter models with heavier masses $m_X \sim {\rm TeV-EeV}$ are predicted, $e.g.$ [@Kane:2011ih; @Davoudiasl:2015vba; @Randall:2015xza; @Berlin:2016vnh; @Bramante:2017obj], either as a result of extra sources of entropy that dilute the thermal overabundance or because dark matter is very weakly coupled to the SM and it never thermalizes. As weak-scale mass dark matter has become increasingly constrained, the prospect of very heavy dark matter, which can still have a nearly “weak" scale cross-section with nucleons ($\sigma \sim 10^{-40}~{\rm cm^2}$) deserves more attention. However, as a consequence of reduced dark matter flux, direct detection experiments have sensitivities that drop off with $1/m_X$ at high masses, and new methods to probe heavy dark matter are necessary. As we will show, neutron stars in our galaxy are powerful probes of heavy, weakly interacting dark matter. Some prior work has considered multiscatter dark matter capture in the Earth and Sun [@Gould:1991va; @Albuquerque:2000rk; @Mack:2007xj], where the gravitational potential of the capturing body, nuclear coherence, and relativistic effects could be reasonably neglected. Hereafter we treat single and multiple scatter capture rates and provide an equation valid for capture in the limit that the escape velocity of the capturing body greatly exceeds dark matter’s halo velocity. The organization of the rest of this paper is as follows: in Section \[sec:simpeq\], we present our main points and the parametric dependence of multiscatter dark matter capture in compact stars. A detailed derivation of multiscatter capture is given in Section \[sec:detail\]. Using the derived multiscatter capture formulae, Section \[sec:results\] finds prospects for old neutron stars near the galactic center to constrain heavy dark matter that annihilates to Standard Model particles. In Section \[sec:conclusions\], we conclude. Parametrics of multiscatter capture {#sec:simpeq} =================================== In order to calculate the parametric dependence of multiscatter capture, we are going to first examine the dark matter single-scatter capture rate, and then investigate how the rate changes when one accounts for more than one collision. We will find that, for heavy enough dark matter, the mass capture rate of dark matter on compact stars depends linearly on $\sigma$ and inversely on $m_X$. This $\sim \sigma/m_X$ scaling of the mass capture rate arises for heavier dark matter, because more scatters (which scale up with $\sigma$) are needed for heavier particles to be captured by the star. Dark matter capture in a star depends upon the flux $F$ of dark matter through the star and the probability $\Omega$ that collision(s) with the star will deplete the dark matter’s energy enough that it becomes gravitationally bound. The flux in turn depends upon the number density of dark matter in the halo $\left( n_X=\frac{\rho_X}{m_X} \right)$, the relative motion of the star with respect to the dark matter halo ($v_{star}$), the distribution of dark matter speeds in the dark matter halo, and the escape speed of the dark matter halo ($v^{halo}_{esc}$). The probability to capture ($\Omega$) depends on the speed of the dark matter, set by the initial speed plus the amount of speed it has gained falling into the star’s gravitational well. 
Additionally, the probability depends on the density of scattering sites in the star ($n_T$), the cross section of dark matter to scatter off scattering sites ($\sigma$), and the fraction of scattering phase space where sufficient energy is lost. Both the velocity gained by falling into the star and the number density are, in principle, functions of where inside the star the collision occurs. Combining the flux and capture probability yields a differential capture rate, which must be integrated over dark matter initial velocities and trajectories through the star. Schematically, the differential capture rate is $$\begin{aligned} \frac{d\, C}{dV\, d^3u} = dF(n_X, u, v_{star}, v^{halo}_{esc})~\Omega(n_T(r), w(r), \sigma, m_n, m_X),\end{aligned}$$ where $u$ is the dark matter velocity far from the star (the halo velocity) and $w^2(r) = u^2 + v^2_{esc}(r)$ is the speed of the dark matter after it has fallen to a distance $r$ from the star’s center (either inside or outside of the star). To focus on the parametrics of dark matter capture, for simplicity we assume no motion of the star relative to the dark matter thermal distribution in the halo ($v_{star} \to 0$) and an infinite escape speed for the dark matter halo ($v^{halo}_{esc} \to \infty$). We also fix the escape speed of dark matter in the star to the escape speed at the star’s surface ($v_{esc}(r) = v_{esc}(R)$), and for the moment omit general relativistic and nuclear physics corrections. With these provisos, a constant-density star in the rest frame of the dark matter halo with stellar escape velocity $v_{\rm esc}^2 \sim 2 G M /R$ has a single-scatter dark matter capture rate derived in Appendix \[app:single\] $$\begin{aligned} C_{1} = \sqrt{24 \pi} G \frac{\rho_X}{m_X} M R \frac{1}{\bar{v}} ~{\rm Min}\left[1, \frac{\sigma}{\sigma_{\rm sat}}\right] \left( 1-\frac{1-e^{-A^2}}{A^2} \right). \label{eq:singlescatter}\end{aligned}$$ Note that the capture rate scales with dark matter density $\rho_X$ and inversely with the dark matter halo velocity $\bar v$. Here, $G$ is Newton’s constant, $M$ is the mass of the star, $\sigma$ is dark matter’s cross-section with a stellar constituent (nucleus, nucleon, electron). The exponential factor $A^2 \equiv \frac{3}{2} \frac{v_{\rm esc}^2}{\bar{v}^2} \beta_-$, where $\beta_\pm \equiv 4 m_X m/(m_X \pm m)^2$ and $m$ is the mass of the particle (nucleus, nucleon, electron) dark matter scatters against. Increasing the cross-section past a certain threshold will guarantee that most transiting dark matter scatters with the star at least once, though it may not lose enough energy to be captured. This threshold cross-section is customarily defined as $\sigma_{\rm sat} = \pi R^2 /N_n$, where $N_n$ is the number of scattering sites, and the “Min" function evaluates to unity once at least one scatter is probable. The parenthetical term in Eq.  takes into account dark matter that scatters but does not lose sufficient energy to be gravitationally captured. To better understand the origin of the parenthetical piece of Eq. (\[eq:singlescatter\]), let us examine the energetics of gravitational capture. To be captured after a single collision, the energy lost by the dark matter must be greater than its initial kinetic energy in the galactic halo. The energy loss is proportional to the square of the reduced mass of the dark matter - constituent system, $\mu_n$, and to the square of the dark matter speed at the collision site.
In the limit that the star’s escape velocity is much greater than the halo velocity ($w = \sqrt{u^2+ v^2_{esc}} \simeq v_{esc}$) the capture requirement is $$\begin{aligned} \Delta E \simeq 2\, \frac{\mu^2_n}{m}\, v^2_{esc}\, z \ge \frac 1 2 m_X\, u^2, \label{eq:cap}\end{aligned}$$ where $z$ is a kinematic variable $\in [0,1]$ related to the scattering angle. Assuming dark matter is much heavier than the stellar constituents and turning the above requirement into a condition on $u$, $$\begin{aligned} u < u_{max} = \sqrt{\beta_+\, z}\, v_{esc}.\end{aligned}$$ In the full capture treatment (Appendix \[app:single\]), for dark matter with Boltzmann distributed velocities from $0$ to $u_{max}$ and scattering angles $z \in [0,1]$, we consider kinematic phase space where dark matter is moving slowly enough to be captured after a single collision. The limit of this phase space is set by $u_{max}$, which is evident in the form of the $A^2$ exponential factor in Eq. . Note that when $m_X \gg m$, a limit that will be appropriate throughout this paper, $\beta_{\pm}$ both reduce to $4\, m/m_X$. The origin and form of the $A^2$ term are important because $A^2$ governs the dependence of $C_1$ on the dark matter mass. When $A^2$ is large, corresponding to a maximum capture speed much larger than the average dark matter speed, the parenthetical term in Eq. (\[eq:singlescatter\]) evaluates to 1, and the sole dark matter mass dependence lies in the number density $\frac{\rho_X}{m_X}$. In this case, the single scatter capture rate scales as $$\begin{aligned} C_1 \propto \frac{\sigma}{m_X}, ~~~~~(A^2 \gg 1) \label{eq:Asq1}\end{aligned}$$ implying a mass capture rate $m_X C_1 \propto \sigma$ that is independent of the dark matter mass. However, if $A^2$ is small, implying a maximum capture speed less than a typical dark matter halo velocity $\bar{v}$, we can expand the entire parenthetical expression in Eq. , and find that the capture rate scales as $$\begin{aligned} C_1 \propto \frac{\rho_X}{m_X}\,\, \sigma \,A^2 \propto \frac{\sigma}{m^2_X},~~~~~(A^2 \ll 1) \label{eq:Asq2}\end{aligned}$$ implying a mass capture rate scaling $ m_X C_1 \propto \sigma/m_X $ that depends inversely on the dark matter mass. To see where the mass capture rate transitions from being constant to being $m_X$-dependent in compact stars, we can insert appropriate values for $v_{esc}$. For a solar mass white dwarf $v_{esc}\,c \sim 2 \times 10^3\, \text{km/s}$, while a solar mass neutron star has $v_{esc}\,c \sim 2\times 10^5\, \text{km/s}$; both of these escape speeds are far greater than the average dark matter halo speed $\bar v c\sim 220 \,\text{km/s}$; therefore, $A^2$ will only be less than one if the dark matter is much heavier than $m$. Specifically, taking $A^2 = 1$ to be the transition value, and solving for $m_X$, we find the transition occurs at $m_X \sim {\text{TeV}}$ in a solar mass white dwarf (assuming scattering off carbon) and $m_X \sim \text{PeV}$ for a solar mass neutron star (assuming scattering off a neutron). To see how the parametric dependence of Eq.  changes in the case of multiple scatters, let us revisit the energetics of gravitational capture.
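Before doing so, the transition masses just quoted can be checked with a few lines of arithmetic. The sketch below solves $A^2=1$ for $m_X$ in the $m_X \gg m$ limit, i.e. $m_X \simeq 6\,m\,(v_{esc}/\bar v)^2$, using the fiducial escape speeds above; the constituent masses are illustrative assumptions (carbon for the white dwarf, a neutron for the neutron star).

```python
def single_scatter_transition_mass(m_gev, v_esc_kms, v_bar_kms=220.0):
    """Solve A^2 = (3/2)(v_esc/v_bar)^2 * beta_- = 1 for m_X in the m_X >> m limit,
    where beta_- ~ 4 m / m_X, giving m_X = 6 m (v_esc / v_bar)^2."""
    return 6.0 * m_gev * (v_esc_kms / v_bar_kms) ** 2

# fiducial speeds from the text; constituent masses are illustrative assumptions
print("white dwarf (carbon, ~11.1 GeV): m_X ~ %.1e GeV"
      % single_scatter_transition_mass(11.1, 2.0e3))
print("neutron star (neutron, ~0.94 GeV): m_X ~ %.1e GeV"
      % single_scatter_transition_mass(0.94, 2.0e5))
# -> a few TeV and a few PeV respectively, matching the estimates quoted above
```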
For the moment, let us assume that dark matter participates in $N \ge 1$ collisions during its transit of the star and that each collision results in an average energy loss $$\begin{aligned} \Delta E_i = \frac{\beta_+ E_i}{2}.\end{aligned}$$ If the dark matter initially entered the star with energy $E_0$, the energy after $N$ ‘average’ collisions is $$\begin{aligned} E_N = E_0 \left( 1 - \frac{\beta_+}{2} \right)^N, \end{aligned}$$ or a net energy deposit of $\Delta E_N = E_0 - E_N$. Assuming, as in the single scatter case, that the initial dark matter kinetic energy is $E_0 \sim 1/2\, m_X\, v^2_{esc}$ and plugging $\Delta E_N$ into the capture condition Eq. (\[eq:cap\]), we can solve for the maximum halo velocity $u$ that can be captured $$\begin{aligned} u \le v_{esc}\, \Big( 1 - \left( 1 - \frac{\beta_+}{2} \right)^N\Big)^{1/2} \label{eq:simplemulti}\end{aligned}$$ In the limit that $m_X \gg m$ and $\beta_+ \rightarrow 4 m/m_X$, the leading order term in the binomial expansion of the right side of Eq.  approximates the full expression. In that limit, the maximum allowed velocity simplifies to $$\begin{aligned} u \le \sqrt{\frac{N\, \beta_+}{2}}\, v_{esc} \cong \sqrt{\frac{2\, N\, m}{m_X}}\, v_{esc}\end{aligned}$$ up to corrections of $\mathcal O \left( \frac{(N\,m)^2}{m^2_X} \right)$. As we will show in more detail in the next section, in the limit of $v_{esc} \gg \bar v$ the probability to capture after $N $ scatters can be expressed in a form very similar to but with $A^2$ – the factor in the exponential – modified to $$\begin{aligned} A^2_N \equiv \frac{3\, v^2_{esc}}{\bar v^2}\frac{N\, m}{m_X}. \label{eq:newlim}\end{aligned}$$ As discussed following Eq. (\[eq:singlescatter\]), if this exponential factor is large then the $m_X$-dependence in the capture rate from the $A^2$ term is suppressed. Meanwhile, if the factor is small, the exponential can be approximated by an expansion, resulting in a capture rate $\propto n_X\, A^2 \propto \sigma/m^2_X$. Comparing Eq.  to Eq. , we see that multiple scattering has added a factor of $N$ to the $A^2$ term. The $N$ dependence in the numerator of Eq. (\[eq:newlim\]) means that for $N \gg 1$, the dark matter mass needs to be larger (for a given $v_{esc}, \bar v$ and $m$) before the exponential factor becomes small. Stated another way, if the dark matter scatters $N$ times, the capture rate will behave as $C_N \sim \sigma/m_X$ out to masses $N$ times higher than if dark matter only scatters once. Note that this discussion has involved only the energetics of slowing down a heavy dark matter particle to beneath a star’s escape speed and not whether the dark matter interacts with stellar constituents strongly enough to participate in multiple scatters in the first place. Following from Eq. (\[eq:ndim\]), the likelihood to participate in multiple scatters roughly depends on the path length of the dark matter $1/n \sigma $ compared to the size of the star. We will flesh out this dependence in the next section. Multiscatter capture {#sec:detail} ==================== Having examined the parametric scaling of multiscatter capture in the previous section, in this section we derive the multiscatter dark matter capture rate. Our notation follows that of [@Gould:1991va], which considered capture by the Earth’s iron core, where the acceleration of incoming dark matter due to Earth’s gravity, and – more broadly – general relativistic effects, could be neglected. 
In the large $N$ limit, the treatment presented here also allows for more efficient computation of the multiscatter capture rate, by obviating the $N$-fold kinematic phase-space integral in [@Gould:1991va]. For multiscatter capture it is convenient to define the optical depth $\tau = \frac{3 \sigma}{2\sigma_{\rm sat}}, \sigma_{sat} = \frac{\pi R^2}{N_n}$, the average number of times a dark matter particle with dark matter - nuclear cross section $\sigma$ will scatter when traversing the star.[^1] The probability for dark matter with optical depth $\tau$ to participate in $N$ actual scatters is given by $\text{Poisson}(\tau, N)$. However, this expression can be improved to incorporate all incidence angles of dark matter. Defining $y$ as the cosine of the incidence angle of dark matter entering the star, the full probability is $$\begin{aligned} p_N(\tau) = 2 \int_0^1 dy~\frac{y e^{-y \tau} \left(y \tau \right)^N}{N!}. \label{eq:poisson}\end{aligned}$$ While it incorporates all incidence angles, this expression still makes the assumption that the dark matter takes a straight path through the star. In practice, the straight path assumption will produce conservative bounds on dark matter capture, marginally under-predicting the capture rate. Incorporating the likelihood for dark matter to participate in $N$ scatters, the differential dark matter capture rate after exactly $N$ scatters looks similar to the single scatter formula (see Appendix \[app:single\]), with the probability to capture after $N$ scatters $g_{N}(w)$ adjusted to take into account the kinematics of $N$ collisions and replacing $\frac{\sigma}{\sigma_{sat}} \rightarrow p_{N}(\tau)$[^2], $$\begin{aligned} C_N & = \pi R^2 \, p_{N}(\tau) \int_{0}^{\infty} ~ f(u) \frac{du}{u}~ w^2 g_N(w). \label{eq:dCn}\end{aligned}$$ The velocity distribution $f(u)$ of dark matter particles in the galactic halo is given in Eq. . In writing the velocity distribution as $f(u)$ we have retained the assumptions from the single capture case that the escape velocity of the dark matter halo is infinite and the velocity of the star relative to the dark matter is zero. We have also maintained that the density of the star is uniform and ignored the radial dependence of the escape velocity.[^3] It is convenient to shift the integral to $w$, where $w^2 =u^2 + v_{\rm esc}^2$. The capture rate for $N$ scatters then becomes $$\begin{aligned} C_N = \pi\, R^2 \, p_{N}(\tau) \int_{v_e}^{\infty}\, dw\, \frac{f(u)}{u^2}\, w^3\, g_N(w), \label{eq:dCn2}\end{aligned}$$ and the total capture rate is the sum over all $N$ of the individual $C_N$ $$\begin{aligned} C_{\rm tot} = \sum_{N=1}^{\infty} C_N. \label{eq:CNsum}\end{aligned}$$ In actual computations, the sum in Eq. (\[eq:CNsum\]) will be cut off at some finite $N_{max}$ where $p_{N_{max}}(\tau) \approx 0$. Finally, we need to evaluate $g_N(w)$, the probability that the speed of the dark matter after $N$ collisions drops below the escape velocity. This probability, which we analyzed dimensionally in Section \[sec:simpeq\], depends solely on dark matter’s initial velocity, the amount of energy lost in each scatter, and the escape velocity of the star. For dark matter with initial kinetic energy at the star’s surface $E_0 = m_X w^2/2$, the energy lost in a single scattering event is given by $\Delta E = z \beta_+ E_0 $, where $z$ is related to the scattering angle, $z \in [0,1] $, and we again note that $\beta_+ \equiv 4 m_X m/(m_X + m)^2$. 
Iterating for $N$ scatters, the dark matter energy and velocity decrease to $$\begin{aligned} E_N = \prod_{i=1}^N\, (1-z_i\, \beta_+)\, E_0,~~~~ v_N = \prod_{i=1}^N\, (1-z_i\, \beta_+)^{1/2}\, w.\end{aligned}$$ If the velocity after $N$ scatters is less than the escape velocity, the dark matter is captured. Phrased as a condition on the initial velocities $w$ that we are integrating over, the capture probability is $$\begin{aligned} g_N(w) = \int_{0}^1\,dz_1\int_{0}^1\,dz_2 \cdots\int_{0}^1\,dz_N\, \Theta\Big(v_{esc}\prod_{i=1}^N(1-z_i\,\beta_+)^{-1/2} - w\Big), \label{eq:gnfull}\end{aligned}$$ where the $dz_i$ integrals sum over all possible scattering trajectories (angles) at each step. This condition requires an integral for every scatter, and becomes computationally cumbersome to evaluate for large $N$. Therefore, as a further approximation, let us replace the $z_i$ with their average value. Provided the differential dark matter-nuclear cross section is independent of scattering angle – valid in most scenarios of spin-independent elastic scattering – $\langle z_i \rangle = 1/2$ and $g_N(w)$ simplifies[^4], $$\begin{aligned} g_N(w) = \Theta\Big(v_{esc}\,(1-\langle z_i\rangle\,\beta_+)^{-N/2} - w\Big). \label{eq:uNrel}\end{aligned}$$ As in the single scatter case, the capture probability restricts the range of dark matter velocities that allow for capture. To illustrate the relationship between dark matter’s halo speed and the number of scatters it takes to slow down to below the star’s escape speed, we recast Eq. (\[eq:uNrel\]) as contours in $u-N$ space in Fig. \[fig:uNfig\] below, for typical neutron star and white dwarf parameters (see caption). ![Number of scatters needed to capture dark matter as a function of dark matter’s halo speed ($i.e.$ the speed at long distance from the star). The left plot shows the relation assuming a solar mass white dwarf made entirely of carbon ($m_N \sim 10\, {\text{GeV}}$) and with radius $R = 0.1\, R_{sun}$. The right plot shows the relation for a solar mass neutron star with radius $R = 10\, \text{km}$, which for the moment neglects relativistic corrections. The lines correspond to 10 TeV–100 PeV mass dark matter, as indicated.[]{data-label="fig:uNfig"}](numberscattersWD.pdf "fig:"){width="47.00000%"} ![Number of scatters needed to capture dark matter as a function of dark matter’s halo speed ($i.e.$ the speed at long distance from the star). The left plot shows the relation assuming a solar mass white dwarf made entirely of carbon ($m_N \sim 10\, {\text{GeV}}$) and with radius $R = 0.1\, R_{sun}$. The right plot shows the relation for a solar mass neutron star with radius $R = 10\, \text{km}$, which for the moment neglects relativistic corrections. The lines correspond to 10 TeV–100 PeV mass dark matter, as indicated.[]{data-label="fig:uNfig"}](numberscattersNS.pdf "fig:"){width="47.00000%"} Dark matter with a given mass and speed requires more scatters to be captured in a white dwarf because the velocity at infinity ($u$) is a larger fraction of the star’s escape speed than it is for a neutron star. This gives the impression that multiscatter is more important for dark matter capture in white dwarfs. However, the number of scatters needed to slow down to sub-escape velocities is not the only factor in the problem; capture also depends on whether the dark matter-nuclei cross section is large enough for the dark matter to scatter multiple times as it transits the star.
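The relation behind Fig. \[fig:uNfig\] can be reproduced with a few lines of code. The sketch below is an illustration under the average-energy-loss approximation; the halo speed and stellar parameters are assumed benchmark values (a 10 km neutron star) rather than inputs from the figure itself.

```python
# A minimal sketch of the logic behind Fig. (fig:uNfig): the number of average-energy-loss
# scatters needed for dark matter arriving with halo speed u to drop below the escape
# speed, using v_N^2 = w^2 (1 - beta_+/2)^N and w^2 = u^2 + v_esc^2.
import numpy as np

def N_needed(u, v_esc, m_X, m):
    beta_plus = 4.0 * m_X * m / (m_X + m) ** 2
    w2 = u**2 + v_esc**2
    # capture condition: w^2 (1 - beta_+/2)^N <= v_esc^2
    return int(np.ceil(np.log(v_esc**2 / w2) / np.log(1.0 - beta_plus / 2.0)))

# assumed neutron-star benchmark (R = 10 km, M ~ 1.5 M_sun): v_esc ~ 0.6 c
v_esc, m = 0.6, 0.938                      # c = 1 units, masses in GeV
for m_X in (1.0e4, 1.0e6, 1.0e8):          # 10 TeV, 1 PeV, 100 PeV
    print(m_X, N_needed(u=230.0 / 3.0e5, v_esc=v_esc, m_X=m_X, m=m))
```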
The strength of the dark matter - constituent interaction is encapsulated in the optical depth $\tau$ which, as we have seen, is proportional to $1/R^2$ and therefore much larger for neutron stars. Using the simplified form for $g_N(w)$, we can evaluate the remaining integral in Eq. (\[eq:dCn2\]): $$\begin{aligned} C_N = \pi\, R^2\, p_N(\tau) \frac{\sqrt 6\, n_X}{\sqrt{\pi}\bar v}\Big((2\,\bar v^2 + 3\, v^2_{esc}) - (2\, \bar v^2 + 3\, v^2_N)\exp{\Big(-\frac{3(v^2_N - v^2_{esc})}{2\,\bar v^2} }\Big)\Big),\end{aligned}$$ with $v_N = v_{esc}(1 - \beta_+/2)^{-N/2}$. In the limit that $v_{esc} \gg \bar v$ and $m_X \gg m_n$, this becomes $$\begin{aligned} C_N = \sqrt{24\,\pi}\,p_N(\tau)\, G\, n_X\, M\, R\frac{1}{\bar v}\left( 1 - \left(1 - \frac{2 A^2_N\, \bar v^2}{3\, v^2_{esc}} \right)\, e^{-A^2_N} \right); \quad A^2_N = \frac{3\, v^2_{esc}\, N m}{\bar v^2\, m_X}, \label{eq:CNpart}\end{aligned}$$ where the last expression follows the format of the single scatter capture equation Eq. . Note that the reason $C_1$ according to this formula does not precisely match Eq.  is that we integrated over all possible energy loss fractions ($dz_1$) when deriving the latter, but assume average energy loss in the former. As expected, the capture rate for $N$ scatters has a similar form to the single capture rate, up to a factor of $N$ in the exponential factor $A^2_N$. Following the logic presented in Sec. \[sec:simpeq\], the factor of $N$ implies the $C_N \propto 1/m_X$ scaling persists out to higher $m_X$ than in the single scatter case. However, while the behavior of an individual $C_N$ is easy to see given $m_X, m$ and $v_{esc}$, the mass scaling of the full capture rate is more subtle as it involves the sum over all $C_N$, each weighted by $p_N(\tau)$.\ Having reviewed the general form of the multiple scatter capture rate, we can now apply it to white dwarfs and neutron stars. Each of these applications involves subtleties not present in Eq. . White dwarfs are compact stars ($R \sim 10^4\, \text{km}, M \sim 10^{57}~{\rm GeV}$) that are supported by electron degeneracy pressure. Their suitability as potential laboratories to capture and thereby constrain various dark matter candidates has been studied previously in the single-scatter regime [@Bertone:2007ae; @Kouvaris:2010jy; @McCullough:2010ai; @Bramante:2015cua; @Graham:2015apa]. At the upper end of the mass range, white dwarfs are largely composed of carbon and oxygen, so $m = m_N \sim \mathcal O(10 ~{\rm GeV})$ in the capture equations above. Dark matter possessing spin-independent ($e.g.$ scalar or vector current) interactions with nuclei will scatter coherently off the nucleons within carbon/oxygen if the momentum exchange is low enough, while higher energy exchanges will be sensitive to the substructure of the nucleus and correspondingly suppressed. This loss of coherence is expressed by a form factor. Including the form factor suppression, the multiscatter accumulation rate will be given by Eq.  with the cross-section substitution $$\begin{aligned} \sigma \rightarrow \sigma^{\rm WD}_{NX} \simeq \sigma_{nX} \frac{m_{N}^4}{m_{n}^4} F^2(\langle E_{\rm R} \rangle) , \label{eq:snx}\end{aligned}$$ where, in the case of scattering off carbon, the mass of the stellar constituent is $m_{N} \simeq 12\, m_n\, \simeq 11.1 ~{\rm GeV}$, and $F^2(\langle E_{\rm R} \rangle)$ is the Helm form factor evaluated at the average recoil energy $\langle E_{\rm R} \rangle$ [@Helm:1956zz].
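Before specializing further to white dwarfs, it is worth noting that the closed-form $C_N$ above, combined with $p_N(\tau)$, already gives the full capture rate numerically. The sketch below is only an illustration: the optical depth, stellar parameters and dark matter density are assumed benchmark numbers, not outputs of the analysis, and units must be supplied consistently (CGS-like units are used here).

```python
# A minimal sketch assembling the closed-form C_N above and summing over N, with the
# angle-averaged p_N(tau) of Eq. (eq:poisson).  All benchmark numbers are assumptions.
import numpy as np
from math import factorial
from scipy.integrate import quad

def p_N(tau, N):
    f = lambda y: 2.0 * y * np.exp(-y * tau) * (y * tau) ** N / factorial(N)
    return quad(f, 0.0, 1.0)[0]

def C_N(N, tau, R, n_X, vbar, vesc, m_X, m):
    beta_plus = 4.0 * m_X * m / (m_X + m) ** 2
    v_N = vesc * (1.0 - beta_plus / 2.0) ** (-N / 2.0)
    pref = np.pi * R**2 * p_N(tau, N) * np.sqrt(6.0 / np.pi) * n_X / vbar
    return pref * ((2.0 * vbar**2 + 3.0 * vesc**2)
                   - (2.0 * vbar**2 + 3.0 * v_N**2)
                   * np.exp(-3.0 * (v_N**2 - vesc**2) / (2.0 * vbar**2)))

# illustrative neutron-star benchmark in CGS units (assumptions)
c = 3.0e10                                  # cm/s
R, vesc, vbar = 1.0e6, 0.6 * c, 220.0e5     # cm, cm/s, cm/s
m, m_X = 0.938, 1.0e6                       # GeV
n_X = 0.3 / m_X                             # cm^-3, from rho_X = 0.3 GeV/cm^3
tau = 5.0                                   # assumed optical depth
C_tot = sum(C_N(N, tau, R, n_X, vbar, vesc, m_X, m) for N in range(1, 60))
print(C_tot, "captures per second (illustrative)")
```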
The average recoil energy is defined as $$\begin{aligned} \left\langle E_{\rm R} \right\rangle \simeq \frac{\int_{0}^{E_{\rm R}^{\rm max}} ~dE_{\rm R}~ E_{\rm R}F^2(E_{\rm R})}{\int_{0}^{E_{\rm R}^{\rm max}}~dE_{\rm R}~ F^2(E_{\rm R})},\end{aligned}$$ where we make the approximation that $v_{\rm esc}$ is much greater than the halo velocity and therefore $E_{\rm R}^{\rm max} \simeq 2 m_N v_{\rm esc}^2$. For recoil energies relevant for heavy dark matter scattering off carbon in a solar mass white dwarf ($v_{\rm esc} \simeq 0.01$, $\langle E_{\rm R}\rangle \simeq {\rm MeV}$), the form factor evaluates to $F^2(\langle E_{\rm R} \rangle) \sim 0.5$. In addition to affecting the overall scattering cross section, the form factor also impacts the weighting of different momentum exchanges (scattering angles) in each scatter, previously encapsulated in the variable $z_i$. Higher momentum exchanges are suppressed by the form factor as they correspond to reduced dark matter-nucleus scattering coherence. As a result, lower energy scatters – where a smaller fraction of the dark matter’s kinetic energy is deposited in each scatter – are more common. To account for this, we make the substitution $\left\langle z_i \right\rangle = \left\langle E_{\rm R} \right\rangle/E_{\rm R}^{\rm max}$ (instead of $\left\langle z_i \right\rangle=\frac{1}{2}$) in Eq. . In deriving $\left\langle z_i \right\rangle$, we have assumed that the relative velocity of the dark matter and nucleus remains constant (at $\sim v_{\rm esc}$) during the capture process. This assumption is valid so long as the dark matter halo velocity is much smaller than its velocity during capture $u \ll w \sim v_{\rm esc}$, implying that the speed of the dark matter remains approximately constant during capture. To understand this, note that as soon as the dark matter velocity decreases by an $\mathcal{O}(1)$ factor from $w \sim v_{\rm esc}$, its speed will be well below the escape velocity, since $u \ll v_{\rm esc}$. ![Mass capture rate of dark matter on a constant density white dwarf, for a per-nucleon scattering cross-section of $\sigma_{nX} = 10^{-38}$ (left panel) and $10^{-36}$ cm$^2$ (right panel). Following Eq. , these per-nucleon cross sections translate to dark-matter carbon cross sections of $\sigma_{NX} \sim 10^{-34}$ and $\sim 10^{-32}$ cm$^2$. In both panels we have taken the target star to be a 1 solar mass white dwarf composed of carbon 12, with $R=10^4$ km, in a background dark matter density of $\rho_{X} = 0.3\, {\rm GeV/cm^3}$ with halo velocity dispersion $\bar{v} \simeq 220~{\rm km/s}$.[]{data-label="fig:wdmc"}](WDPlot38 "fig:") ![Mass capture rate of dark matter on a constant density white dwarf, for a per-nucleon scattering cross-section of $\sigma_{nX} = 10^{-38}$ (left panel) and $10^{-36}$ cm$^2$ (right panel). Following Eq. , these per-nucleon cross sections translate to dark-matter carbon cross sections of $\sigma_{NX} \sim 10^{-34}$ and $\sim 10^{-32}$ cm$^2$. In both panels we have taken the target star to be a 1 solar mass white dwarf composed of carbon 12, with $R=10^4$ km, in a background dark matter density of $\rho_{X} = 0.3\, {\rm GeV/cm^3}$ with halo velocity dispersion $\bar{v} \simeq 220~{\rm km/s}$.[]{data-label="fig:wdmc"}](WDPlot36 "fig:") The mass capture rates for heavy dark matter in a white dwarf, computed using both the single and multiple capture expressions and two different assumptions about the size of the dark matter-nucleon cross section, are shown in Fig. \[fig:wdmc\]. The contours in Fig.
\[fig:wdmc\] display the capture rate for $N \leq 1, 10, 100...$ scatters, using Eq. . As the dark matter-nucleon cross-section increases, the difference in mass capture rate for $N = 1$ versus $N \leq 1000$ scatters increases dramatically. This is a consequence of the fact that, as the dark matter-nucleon cross-section becomes large enough, most trajectories through the white dwarf will involve multiple scattering events and so the rate for capture after a single scatter more substantially under-predicts the total capture rate. We can also see that, as the number of scatters increases, the “turnover mass” (the mass at which the capture rate diminishes) also increases. As explored in Section \[sec:simpeq\], this is because lighter dark matter requires fewer scatters to be captured, since the fractional energy loss of the dark matter per scatter is $\sim 2 m_{\rm N} /m _X$. The quoted per-nucleon scattering cross-sections in Fig. \[fig:wdmc\], $\sigma_{nX} =10^{-38}$ and $10^{-36}~{\rm cm^2}$, which were chosen to be large enough so that multiple scatters are relevant, are typically excluded by direct detection searches for spin-independent DM-nucleon scattering [@Aprile:2012nq; @Akerib:2016vxi; @Tan:2016zwf]. One might consider whether white dwarfs could be used to constrain spin-dependent DM-nucleon interactions, which are less constrained by direct detection searches. Unfortunately, white dwarfs are composed of mainly spin-free nuclei ($e.g.$ carbon 12, oxygen 16), and so a precise determination of the fraction of spin $>0$ nuclei in a given white dwarf would be needed to set bounds on spin-dependent dark matter, something that is beyond the scope of this work. Another scenario for which large dark matter-nucleon cross-sections are not yet excluded and could potentially be probed by white dwarf observations is inelastic dark matter [@McCullough:2010ai; @Bramante:2016rdh], provided the dark matter settles to the core of the white dwarf ($i.e.$ thermalizes) within the age of the universe. Turning to neutron stars, a 1.5 solar mass neutron star has escape speed $\sqrt{2GM/R} \sim \frac{2}{3}$ [@Shapiro:1983du] and is supported by neutron degeneracy pressure. The extreme velocities and densities mean we must modify Eq.  to account for two general relativistic corrections when considering dark matter capture on a neutron star. First, the amount of dark matter crossing the star’s surface will be increased because of an enhancement from the star’s gravitational potential. It can be shown [@Goldman:1989nd] that for a dark matter particle with velocity $u$ and impact parameter $b$, if the particle barely grazes the surface of the star, then $C_X \propto b^2 = (2 G M R/u^2) [1-2GM/R]^{-1}$, where the square-bracketed term accounts for the general relativistic enhancement to dark matter crossing the star’s surface. Accordingly, the dark matter capture rate (with $m = m_n$, of course) is modified to $$\begin{aligned} C_N \rightarrow \frac{C_N} {1-\frac{2GM}{R}}, \label{eq:GRcorr}\end{aligned}$$ to account for general relativity-enhanced capture.[^5] The second general relativistic correction we need is to account for the gravitational blueshift of the dark matter’s initial kinetic energy in the rest frame of a distant observer. In the absence of general relativistic corrections, the dark matter must lose its initial halo kinetic energy $E_i = \frac 1 2 m_X u^2$ via scattering with the star in order to become gravitationally bound to the star.
However, from the rest frame of a distant observer, this initial kinetic energy will be enhanced by a factor $\chi = [1-(1-2GM/R)^{1/2}]$ under the influence of the star’s gravitational potential. This can be accounted for by making the substitution in Eq.  $$\begin{aligned} v_{\rm esc} \rightarrow \sqrt{2 \chi}.\end{aligned}$$ In practice, the gravitational enhancement and kinetic energy blueshift effects alter the dark matter capture rate in neutron stars by less than a factor of two. Given the degeneracy of the neutrons that the dark matter must collide with, one may worry that Pauli blocking also comes into play when deriving the capture rate. Specifically, in order to scatter with the constituents of a neutron star, dark matter must excite them to momenta larger than their Fermi momentum, typically $p_{\rm F,NS } \sim 0.1~ {\rm GeV}$ [@Goldman:1989nd]. However, as the incoming dark matter has been accelerated to semi-relativistic speeds in the gravitational well of the neutron star, this requirement is easily satisfied provided the dark matter is heavy. Plugging in numbers, in the limit $m_X \gg m_n$ the average momentum exchanged in any scatter is $Q \sim \sqrt 2\, m_n\, v_{esc} \sim 0.7\, {\text{GeV}}\gg p_{F,NS}$; see $e.g.$ [@Bramante:2013hn; @Bertoni:2013bsa] for more discussion. ![Mass capture rate of dark matter on a neutron star, for a per-nucleon scattering cross-section of $\sigma_{nX} = 10^{-44}$ and $10^{-42}$ cm$^2$. A constant density, $1.5$ solar mass neutron star composed of neutrons, with $R=10$ km, in a background dark matter density of $\rho_{X} = 0.3\, {\rm GeV/cm^3}$ with halo velocity dispersion $\bar{v} \simeq 220~{\rm km/s}$ is assumed. Note that the dark matter mass where the mass capture rate shifts from $m_X\, C_X \propto \text{const}$ to $m_X\, C_X \propto 1/m_X$ shifts to higher values as we include more scatters.[]{data-label="fig:nsmc"}](NS44 "fig:") ![Mass capture rate of dark matter on a neutron star, for a per-nucleon scattering cross-section of $\sigma_{nX} = 10^{-44}$ and $10^{-42}$ cm$^2$. A constant density, $1.5$ solar mass neutron star composed of neutrons, with $R=10$ km, in a background dark matter density of $\rho_{X} = 0.3\, {\rm GeV/cm^3}$ with halo velocity dispersion $\bar{v} \simeq 220~{\rm km/s}$ is assumed. Note that the dark matter mass where the mass capture rate shifts from $m_X\, C_X \propto \text{const}$ to $m_X\, C_X \propto 1/m_X$ shifts to higher values as we include more scatters.[]{data-label="fig:nsmc"}](NS42 "fig:") In Fig. \[fig:nsmc\] we show the mass capture rate of dark matter on a neutron star for a range of dark matter masses and dark matter-nucleon cross-sections where $\tau \gtrsim 1$. Figure \[fig:nsmc\] has all of the same qualitative features as Fig. \[fig:wdmc\]: the mass capture rate increases dramatically once multiple scatters are included, and exhibits a $1/m_X$ dependence in the large $m_X$ limit. However, comparing Figs. \[fig:wdmc\] and \[fig:nsmc\], it is evident that multiscatter capture is relevant for white dwarfs when $\sigma_{nX} \sim 10^{-35}~{\rm cm^2}$, while multiscatter capture on neutron stars becomes important for $\sigma_{nX} \sim 10^{-45}~{\rm cm^2}$. Because the latter cross-section is closer to the cross-section presently probed by direct detection experiments [@Aprile:2012nq; @Akerib:2016vxi], we will focus on neutron star probes of dark matter in the next section.
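The size of these two general relativistic corrections is easy to check. The following short sketch only evaluates the quoted formulae for an assumed $1.5\,M_\odot$, $R = 10$ km star; it is an illustration, not part of the derivation.

```python
# A small sketch of the two GR corrections above: the capture enhancement 1/(1 - 2GM/R)
# of Eq. (eq:GRcorr) and the blueshifted effective escape speed sqrt(2*chi).
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
c = 2.998e8                        # m/s
M = 1.5 * 1.989e30                 # kg (assumed 1.5 solar masses)
R = 10.0e3                         # m (assumed 10 km radius)

compactness = 2.0 * G * M / (R * c**2)
enhancement = 1.0 / (1.0 - compactness)
chi = 1.0 - np.sqrt(1.0 - compactness)
v_esc_eff = np.sqrt(2.0 * chi)     # in units of c

print("2GM/Rc^2 =", compactness)          # ~0.44 for these numbers
print("capture enhancement =", enhancement)
print("effective v_esc/c =", v_esc_eff)   # compare Newtonian sqrt(2GM/R)/c ~ 0.66
```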
While our focus here will be on dark matter which annihilates inside and thereby heats neutron stars, there are many other ways multiscatter stellar capture could be used to probe dark matter, including neutron star implosions [@Goldman:1989nd; @Kouvaris:2010jy; @deLavallaz:2010wp; @Kouvaris:2011fi; @McDermott:2011jp; @Guver:2012ba; @Bramante:2013hn; @Bell:2013xk; @Bramante:2013nma; @Bramante:2014zca; @Kurita:2015vga; @Bramante:2015dfa; @Bramante:2016mzo], monopole-induced nucleon decay [@Dimopoulos:1982cz; @Kolb:1982si], white dwarf heating [@Hooper:2010es; @McCullough:2010ai; @Hurst:2014uda], Type Ia supernova ignition [@Bramante:2015cua; @Graham:2015apa], neutrino signatures of superheavy dark matter [@Crotty:2002mv; @Albuquerque:2002bj], and dark matter-powered stars [@Moskalenko:2007ak; @Spolyar:2007qv; @Fairbairn:2007bn; @Iocco:2008xb]. Probing heavy dark matter with old neutron stars {#sec:results} ================================================ Dark matter that is captured in neutron stars may annihilate to Standard Model particles, thereby heating and increasing the apparent luminosity of old neutron stars. Consequently, the temperature of old neutron stars can be used to probe the dark matter-nucleon cross-section, provided that one bounds or measures the temperature of old stars in regions of sufficiently high dark matter density. Because it harbors a high density of dark matter, the galactic center is an obvious target [@Kouvaris:2007ay; @Bertone:2007ae; @deLavallaz:2010wp; @Kouvaris:2010vv]. While old neutron stars at the galactic center are being vigorously sought by the current generation of radio telescopes [@Wharton:2011dv; @Dexter:2013xga], to date none have been found, although they are expected to be within reach of next generation radio telescopes like FAST and SKA [@FASTSKA]. Here we determine the [*potential*]{} bounds on dark matter annihilating to SM particles in old neutron stars in the galactic center. Prior work [@deLavallaz:2010wp; @Kouvaris:2010vv] has explored this bound on dark matter using single scatter capture. This paper extends these bounds to higher masses using multiple scatter capture, assuming that DM annihilates to Standard Model particles, and that an old, colder neutron star is resolved in the galactic center at some time in the future. The process by which dark matter heats neutron stars involves several steps. First, each captured dark matter particle must thermalize with the host neutron star through successive scatters off neutrons. This thermalization process is complicated by the fact that the dark matter’s momentum will drop after each scatter, and eventually the momentum exchanged between dark matter and the neutrons becomes small enough that Pauli blocking can no longer be ignored. A full calculation of thermalization within neutron stars incorporating Pauli blocking was performed in Ref. [@Bertoni:2013bsa] and showed that the time to thermalize is much less than the age of the neutron star. As one example, for $m_X > 100~{\rm GeV}$ dark matter with a cross-section $\sigma_{nX} > 10^{-48}~{\rm cm^2}$ (well below the values where multiscatter becomes important), thermalization occurs in less than a thousand years. Once thermalized, the dark matter settles into a spherical volume $V_{th}$ within the star.
Approximating the neutron star as having a constant density core $\rho_{NS}$, $V_{th}$ can be related to the star’s temperature $T$ by $V_{th} = \frac{4}{3} \pi r_{th}^3$, $r_{th} = \sqrt{9 T / (4 \pi G \rho_{NS} m_X)}$ (see $e.g.$ [@Bramante:2015cua]). The next step is to understand how $N_X(t)$ – the number of dark matter particles residing in $V_{th}$ – evolves with time. Assuming the thermalization time is short compared to other timescales, the number of dark matter particles increases as new particles are captured, and decreases as pairs of dark matter particles meet and annihilate. This can be phrased as a simple differential equation for $N_X(t)$ [@Bramante:2013nma], with solution: $$\begin{aligned} N_X(t) = \sqrt{\frac{C_{X} V_{th}}{\left\langle \sigma_a v \right\rangle}} {\rm ~tanh} \left[ \sqrt{\frac{C_X \left\langle \sigma_a v \right\rangle }{V_{th}}}t\right],\end{aligned}$$ where $C_{X}$ is the net capture rate, $t$ is the time over which collection has occurred, and $\left\langle \sigma_a v \right\rangle$ is the thermally-averaged self-annihilation cross-section of the dark matter (DM DM $\to$ SM fields). Once $t > \sqrt{\frac{V_{th}}{C_X \left\langle \sigma_a v \right\rangle }}$, the dark matter population plateaus, and there is an equilibrium between the rate at which dark matter is annihilated and the rate at which it is captured. Assuming all dark matter passing through a neutron star is captured (which, by maximizing the capture rate, gives the shortest equilibration time), this equilibration time is [@Kouvaris:2010vv]: $$\begin{aligned} t_{\rm eq} \simeq 10^{4}~{\rm yrs} \left( \frac{10^2~{\rm GeV}}{m_X} \right)^{1/4} \left( \frac{10^3~{\rm GeV/cm^3}}{\rho_{X}} \right)^{1/2} \left( \frac{T_{NS}}{3 \times 10^4~{\rm K}} \right)^{3/4}\left( \frac{10^{-45}~{\rm cm^3/s}}{\left\langle \sigma_a v \right\rangle} \right)^{1/2}, \label{eq:eqtime}\end{aligned}$$ where $T_{NS}$ is the temperature of the neutron star, and this equilibration time assumes that all DM passing through an $R=10$ km, $1.5~$M$_{\odot}$ NS with central density $\rho_{NS} \sim 4\times10^{14}~{\rm g/cm^3}$ is captured. The temperatures for the oldest observed neutron stars (age $> 100$ million years) are projected to be $T \ll 3 \times 10^4\, K$ [@deLavallaz:2010wp]. Plugging this temperature into Eq. (\[eq:eqtime\]) and assuming our local dark matter density $\rho_{X} = 0.3\, {\text{GeV}}/{\rm cm}^3$, we find the equilibration time is $t_{eq} \le 10$ million years for $100\, {\text{GeV}}$ dark matter with annihilation cross sections of $\left\langle \sigma_a v \right\rangle \gtrsim 10^{-48}~{\rm cm^3/s}$. This value is already far less than the age of the oldest neutron stars, and increasing the dark matter mass, density or annihilation cross section leads to even shorter times; for a benchmark point closer to our region of interest, PeV dark matter in the galactic center ($\rho_{X} = 10^3\, {\text{GeV}}/{\rm cm}^3$) will equilibrate in a $3 \times 10^4\, K$ neutron star in as little as 1000 years if $\langle \sigma_a\, v\rangle = 10^{-45}~{\rm cm}^3/s$. Because this dark matter self-annihilation cross-section is already quite small, hereafter we assume the dark matter annihilation rate rapidly reaches equilibrium with the capture rate. Within the parameter space where thermalization and equilibration times are short compared to the typical neutron star lifetime, the annihilation rate is equivalent to the capture rate, and the rate of energy release is simply the mass capture rate $m_X\, C_{X}$.
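The thermalization volume and the equilibration condition can be evaluated with a few lines of code. In the sketch below (an illustration only; the capture rate $C_X$ and the benchmark temperature, density and annihilation cross section are assumptions chosen to mimic the PeV, galactic-center example above), $t_{\rm eq}$ comes out at the $10^3$ yr level for these assumed numbers, consistent with the estimate quoted in the text.

```python
# A minimal sketch evaluating r_th, V_th and the equilibration time t_eq.
# The capture rate C_X and all benchmark numbers below are placeholder assumptions.
import numpy as np

G, k_B = 6.674e-11, 1.381e-23            # SI units
GeV = 1.783e-27                          # kg per GeV

m_X  = 1.0e6 * GeV                       # 1 PeV dark matter (assumed)
T_NS = 3.0e4                             # K (assumed)
rho_NS = 4.0e14 * 1.0e3                  # g/cm^3 -> kg/m^3
r_th = np.sqrt(9.0 * k_B * T_NS / (4.0 * np.pi * G * rho_NS * m_X))   # m
V_th = 4.0 / 3.0 * np.pi * r_th**3 * 1.0e6                            # cm^3

C_X      = 1.0e23                        # s^-1, placeholder capture rate
sigma_av = 1.0e-45                       # cm^3/s (assumed)
t_eq = np.sqrt(V_th / (C_X * sigma_av))  # s
print("r_th =", r_th * 1.0e2, "cm;  t_eq =", t_eq / 3.15e7, "yr")
```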
We can define an effective neutron star temperature arising from dark matter annihilations by equating the energy release rate to the apparent luminosity,[^6] $$\begin{aligned} m_X\, C_{X} = L_{DM} = 4 \pi \sigma_0 R^2 T_{NS}^4 \left(1-\frac{2\,G M}{R} \right)^2, \label{eq:money}\end{aligned}$$ where $\sigma_0 = \pi^2/60$ is the Stefan-Boltzmann constant, and the parenthetical term accounts for the gravitational redshift of light departing the high curvature environment of a neutron star. Read left to right, Eq. (\[eq:money\]) defines a minimum temperature for an old neutron star (given our assumptions of thermalization and equilibration) for a given dark matter mass, density, and capture cross section. Read right to left, Eq. (\[eq:money\]) yields a bound. Specifically, if an old neutron star is observed to have surface temperature $T_{NS}$, Eq. (\[eq:money\]) dictates what regions of $\rho_{X}, m_X$ and $\sigma$ are allowed and which regions would overheat the observed neutron star. Plugging Eq.  into Eq. , we can reframe the expression as $$\begin{aligned} \sum_{N} p_N(\tau)\left( 1 - \left(1 - \frac{2 A^2_N\, \bar v^2}{3\, v^2_{esc}} \right)\, e^{-A^2_N} \right) = \text{const}\,\frac{T_{NS}^4}{\rho_{X}}, \label{eq:sump}\end{aligned}$$ where the constant on the right hand side is a combination of $G$, $\sigma_0$, $\bar v$ and the mass and size of the neutron star. The sum over $N$ makes this formula a bit opaque, however, we know from Sec. \[sec:simpeq\] that the left hand side of Eq. (\[eq:sump\]) is roughly linear in the dark matter-nucleon cross section $\sigma$ and is either independent of the dark matter mass or $\propto 1/m_X$ depending on whether the dark matter is lighter or heavier than a PeV. Solving Eq. (\[eq:sump\]) for $\sigma$, these two regions translate into bounds that are $\sigma \propto \text{const}$ (for $m_X < \text{PeV})$ or $\sigma \propto\, m_X$ (for $m_X > \text{PeV})$. To get a feeling for the type of bound that can be set in this way, in Fig. \[fig:ngc\] below we show the values of $\sigma$ that could be excluded as a function of $m_X$ should we observe an old neutron star with temperature $T_{NS} \sim 3\times10^4\, \text K$ in the galactic center ($\rho_{X} = 10^3\, {\text{GeV}}/{\rm cm}^3)$. ![Potential sensitivity to dark matter from annihilation to SM particles, heating a $1.5~M_\odot$ neutron star in the galactic center ($\rho_X = 10^3~{\rm GeV/cm^3}$, about 10 parsecs from the Galactic Center) to a core temperature of $ \sim 3 \times 10^4$ K, along with interpolations of the current LUX bounds and the neutrino floor (one atmospheric neutrino event on xenon [@Billard:2013qya]) for comparison. Here the parameters of the surrounding dark matter density and neutron star temperature have been chosen conservatively; observation of a colder neutron star or a larger dark matter density would both deepen the sensitivity. The curve labeled “1 scatter" uses Eq.  to set the bound, while the multiscatter curve uses the multiscatter formulae derived in this paper. Note that multiscatter capture allows for heavier dark matter to be discovered or bounded, for cross-sections below the direct detection neutrino floor.[]{data-label="fig:ngc"}](NSplot) There are several interesting features in Figure \[fig:ngc\]. First, a shift in the cross section bound around $m_X \sim \text{PeV}$ is evident; this was the mass at which multiple scatter capture becomes relevant, as derived in Section \[sec:simpeq\].
Second, should a neutron star matching the criteria be found, the DM-nucleon cross section bound it implies would dominate over the existing xenon direct detection bound for all dark matter heavier than $m_X \sim {\text{TeV}}$. Furthermore, while comparing [*potential*]{} neutron star heating bounds to [*current*]{} xenon bounds may seem unfair, for $m_X > 0.1\, \text{PeV}$, the cross sections ruled out by neutron star heating are beneath the so-called ‘neutrino floor’ cross section, where direct detection experiments encounter an irreducible background. Given that direct detection experiments are approaching the multi-ton scale and the feasibility of further size increases is far from obvious, observing a cold neutron star may be the best path towards sub-neutrino floor bounds, and further study into how well current and planned telescopes can identify cold neutron stars in environments like the galactic center is warranted [@Bramantetal]. The dependence of the neutron star bound on the temperature of the observed star and on the ambient dark matter density where the star is located is clear from the right hand side of Eq. (\[eq:sump\]), provided one does not deviate too much from the benchmark values of $T_{NS}=3 \times 10^4\, \text K,\, \rho_{X} = 10^3\, \text{GeV/cm}^3$. For example, observing a $T_{NS} \sim 1.5 \times 10^4~ {\rm K}$ neutron star in the galactic center would strengthen the bound in Fig. \[fig:ngc\] by a factor of $\sim 10$. For larger temperature or density deviations, the parametric scaling is not as simple, since the capture rate cannot be increased indefinitely by increasing the DM-nucleus cross section. Specifically, once $\sigma$ reaches the point where all dark matter (at all halo velocities) is captured, further increasing $\sigma$ will not increase the capture rate. This ‘saturation’ cross section will depend on the mass of the dark matter. While a detailed study of the feasibility of observing neutron stars at various temperatures in the galactic center has not yet been undertaken, we note that observations of $> 10^4\, {\rm K}$ neutron stars within a parsec of the galactic center appear to be within the scope of existing X-ray observatories [@Prinz:2015jkd], and would lead to the strongest bound on the dark matter-neutron cross section for $m_X > {\rm PeV}$. Conclusions {#sec:conclusions} =========== The existence of dark matter has been established by a number of cosmological and astrophysical observations. It is, therefore, one of the most compelling arguments for physics beyond the Standard Model, since there is no candidate for dark matter within the Standard Model. This has inspired vigorous experimental searches for non-gravitational dark matter interactions, including underground detectors looking for dark matter scattering off nuclei, and satellites searching for annihilation of dark matter into Standard Model particles. These searches are most sensitive to dark matter masses up to a few TeV. One complementary way to look for heavier dark matter is through its accumulation in stars. Most studies addressing dark matter accumulation in stars have assumed that capture occurs after a single scatter. In this paper we explored multiscatter capture and found it is particularly relevant for high mass dark matter, which, even for cross-sections below present constraints, will typically scatter multiple times in a neutron star before being captured.
We have derived analytical formulae for this process and we have shown that the dark matter-nucleon cross-section bounds obtained at large dark matter masses will have the same parametric dependence as xenon direct detection experiments. Note that while the $\sigma \propto m_X$ scaling at high masses for direct detection experiments is a result of decreased local dark matter number density at high masses ($n_X \sim \rho_X / m_X$), the same parametric dependence that arises for heavy dark matter capture in compact stars results from needing more scattering events to capture higher mass dark matter, as explained in Section \[sec:simpeq\]. We have used the resulting formalism to estimate the bounds on heavy dark matter which could be obtained through thermal observation of old neutron stars in the galactic center. At high dark matter masses, the resulting bounds are stronger than the reach of next generation direct detection experiments. For $m_X \gtrsim 100$ TeV, the cross-section bound on dark matter that annihilates to Standard Model particles, derived from a $T\sim 10^4$ K neutron star near the galactic center, lies below the xenon direct detection cross-sections at which atmospheric neutrinos will begin to provide a substantial background, known as the xenon direct detection neutrino floor. There are additional applications of multiscatter capture, some of which are listed at the end of Section \[sec:detail\], which we leave to future work. Acknowledgments {#sec:ack .unnumbered} =============== We thank Masha Baryakhtar, Matthew McCullough, and Nirmal Raj for useful discussions. This work was partially supported by the National Science Foundation under Grants No. PHY-1417118 and No. PHY-1520966. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development & Innovation. Capture in the optically thin limit {#app:single} =================================== It is useful to summarize the derivation of dark matter capture [@Gould:1987ir] on stars: - Far enough away from the star, dark matter particles in the galactic halo have speeds that are Boltzmann distributed. Half the particles will be moving towards the star, namely those with headings $-\pi/2 < \theta < \pi/2$, where $\theta$ is the angle between each particle’s velocity and a vector pointing at the star center. The total flux of dark matter is defined as $\mathcal{F}$. - As it traverses the star’s gravitational well, the dark matter moves faster, but its angular momentum with respect to the star remains fixed. Therefore, given $\theta$ and the particle’s initial speed ($i.e.$ altogether the particle’s initial velocity), we can determine whether it has an angular momentum small enough that it will intersect a spherical mass shell at radius $r$ from the center of the star. - The probability that dark matter scatters and is captured while transiting a mass shell of thickness $dr$ depends on the density of scattering sites $n(r)$, the initial dark matter velocity $\vec{u}$, and the dark matter’s cross-section with stellar constituents, $\sigma$. Integrating the Boltzmann distributed flux and the probability for capture over $0< u < \infty$ for each stellar mass shell, and integrating mass shells over $0<r<R$, determines the total capture rate.
(In the case of multiscatter capture covered in Section \[sec:detail\], it is convenient to instead simply consider all dark matter that intersects the star at radius $R$, and then integrate over paths through the star, calculating the multiscatter probability along each path.) We assume dark matter particles surrounding the star will have velocities that follow a Maxwell-Boltzmann distribution. The number density of dark matter particles with velocities ranging from $u$ to $u + du$ is $$\begin{aligned} f(u)du = 3\sqrt{\frac{6}{\pi}}~ \frac{n_X u^2}{ \bar{v}^3} ~ {\rm Exp} \left[ -\frac{3 u^2}{2 \bar{v}^2}\right]~du, \label{eq:mboltz}\end{aligned}$$ where $n_X$ is the number density and $\bar{v}$ the average velocity of the dark matter particles. Here $f(u)~ du$ gives the distribution of dark matter velocities far from the gravitational well of the star; nearer to the star each dark matter particle will have a total velocity given by $w^2 = u^2 + v^2(r)$, where $v(r)$ is the escape velocity from the star at radius $r$. It is useful to first consider the flux of dark matter particles across a spherical surface large enough that the star’s gravitational potential can be neglected. The angle at which dark matter intersects the large surface will increase or diminish its flux across this spherical surface; to account for this, we incorporate a factor of $\vec{u} \cdot \hat{R_{\rm a}} = u ~{\rm cos}~ \theta$, where $\theta$ is the angle between the DM velocity vector $\vec{u}$ and a unit vector $\hat{R_{\rm a}}$ normal to the large surface. Then the flux of dark matter particles towards the star, through an infinitesimal area element, is obtained by integrating the product of $u~ {\rm cos}~ \theta$ and Eq.  over the range $0 < {\rm cos}~\theta < 1$, and including a factor of $1/2$ to effectively reject the outgoing DM flux, $$\begin{aligned} d F &= \frac{1}{2} ~ f(u) u~ du ~ {\rm cos}~ \theta~ d ({\rm cos}~\theta)= \frac{1}{4} ~ f(u) u~ du ~d ({\rm cos^2~}\theta). \label{eq:influxatinf}\end{aligned}$$ This leads directly to an expression for the flux of dark matter entering a region of size $R_{\rm a}$, which is large enough to ignore the star’s gravitational potential, $$\begin{aligned} d \mathcal{F} &= 4\pi R_{\rm a}^2 ~dF = \pi R_{\rm a}^2~ f(u) u~ du ~d ({\rm cos^2~}\theta). \label{eq:influxoverR}\end{aligned}$$ To incorporate the star’s gravitational potential into the capture rate, we must consider what the dark matter flux will be into a spherical shell of radius $r$, which is the radius of the star or smaller. We define $\alpha$ as the angle between the dark matter particle’s velocity vector $\vec{w}$ and the unit normal vector $\hat{r}$ on this small spherical shell. The dark matter’s angular momentum per unit mass is $$\begin{aligned} J \equiv u R_{\rm a} ~ {\rm sin} ~\theta = w r ~{\rm sin} ~\alpha, \label{eq:Jdef}\end{aligned}$$ where the last equality of Eq.  follows from angular momentum conservation. As noted previously, $w^2 = u^2 + v^2(r)$, and $v(r)$ is the escape velocity at radius $r$. The flux can now be recast with $dJ^2 = u^2 R_{\rm a}^2 ~d ({\rm cos^2~}\theta)$, $$\begin{aligned} d \mathcal{F} = \pi f(u) \frac{du}{u}~ d J^2. \label{eq:dfJ}\end{aligned}$$ As the dark matter particle transits the star’s interior, the probability that it is captured after scattering once can be defined as $g_1(w)$.
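As a quick consistency check on Eq. (\[eq:mboltz\]), the normalization prefactor $3\sqrt{6/\pi}$ can be verified numerically; the following short sketch is an aside, not part of the derivation, and confirms that $f(u)$ integrates to $n_X$.

```python
# A quick numerical check that the Maxwell-Boltzmann form of f(u) in Eq. (eq:mboltz)
# integrates to n_X, i.e. that 3*sqrt(6/pi) is the correct normalization prefactor.
import numpy as np
from scipy.integrate import quad

n_X, vbar = 1.0, 220.0     # arbitrary number density, and dispersion in km/s (assumed)

f = lambda u: 3.0 * np.sqrt(6.0 / np.pi) * n_X * u**2 / vbar**3 \
              * np.exp(-3.0 * u**2 / (2.0 * vbar**2))
print(quad(f, 0.0, np.inf)[0])    # -> 1.0 = n_X
```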
Then the total probability for capture while traversing an infinitesimal spherical shell of length $dl = dr/{\rm cos} ~\alpha$ is the capture probability times the number of scattering lengths in $dl$: $$\begin{aligned} n(r) \sigma g_1(w) ~dl, \label{eq:dlprob}\end{aligned}$$ where we have indicated that the number density $n(r)$ of scattering sites may have radial dependence.[^7] Using Eq.  to re-express $dl = dr/ \sqrt{1-(J/rw)^2} $, the total single scatter capture rate can then be obtained by multiplying Eqs.  and , and integrating over $J$. We apply a theta function to require that the dark matter’s angular momentum is small enough that it will intersect a shell of size $r$, $\Theta (rw - J)$. We also multiply by a factor of two to account for dark matter passing through both sides of a spherical shell of size $r$, $$\begin{aligned} dC_1 &= 4 \pi n(r) \sigma g_1(w)~ f(u) \frac{du}{u}~ \int_0^{\infty}dJ~\Theta (rw - J)~J ~dl \nonumber \\ &=4 \pi n(r) \sigma g_1(w)~ f(u) \frac{du}{u}~ w^2r^2~dr. \label{eq:g1dCsingle}\end{aligned}$$ It remains to determine the probability for capture after a single scatter, $g_1(w)$. We define $$\begin{aligned} \beta_{\pm} \equiv \frac{4m_X m_n}{(m_X \pm m_n)^2},\end{aligned}$$ where $m_n$ is the mass of the stellar constituent with which the DM scatters. A kinematic analysis shows that, in the star’s rest frame, the fraction of DM energy lost in a single scatter is evenly distributed over the interval $0 < \Delta E/E_0 < \beta_+$. For single scatter capture, the required fraction of DM kinetic energy loss is $u^2/w^2$, which is the ratio of the DM’s kinetic energy far from the star to its kinetic energy inside the star. To define $g_1(w)$, we use the probability for a single scatter to diminish the DM kinetic energy by at least a fraction $u^2/w^2$, $$\begin{aligned} \frac{1}{\beta_+} \left(\beta_+ - \frac{u^2}{w^2} \right), \label{eq:gouldprob}\end{aligned}$$ along with a theta function that enforces dark matter capture after a single scatter, $$\begin{aligned} \Theta \left(\beta_+ - \frac{u^2}{w^2} \right). \label{eq:gouldtheta}\end{aligned}$$ Then $g_1(w)$ is the product of Eqs.  and . Inserting this into Eq. , and integrating over the incoming Boltzmann distribution of DM ($u$), the total capture rate as a function of radius is $$\begin{aligned} C_{\rm 1} = \sqrt{\frac{96}{\pi}} \frac{n_{\rm X}}{\bar{v}} \int_{0}^{R} dr~ r^2~ n(r) \sigma(r) v^2(r) \left( 1-\frac{1-e^{-A^2(r)}}{A^2(r)} \right), \label{eq:singlecapturefull}\end{aligned}$$ where we have indicated that the number density of scattering sites $n(r)$, the escape velocity, $v(r)$, the Boltzmann variable $A^2 \equiv 3v^2(r)\,\beta_-/(2\bar{v}^2)$, and the scattering cross-section, as a consequence of form factor suppression at higher velocities, all depend on the radius of the mass shell, $r$. In the limit that we ignore radial dependence, and set $v(r) \simeq v_{esc}(R)$, Eq.  results. [^1]: To understand the $\frac{3}{2}$ factor in the optical depth, observe that the cross section for which $1$ scatter occurs over a distance of $2R$ (where $R$ is the radius of the star) is $$\begin{aligned} 1 = n\, \sigma\, (2R) &= \frac{N_n}{(4/3)\pi R^3}\sigma\, (2R) = \frac{3\, N_n}{2\pi\, R^2}\sigma \\ \nonumber & \rightarrow \sigma = \frac{2}{3} \left(\frac{\pi R^2}{N_n} \right) = \frac{2}{3} \sigma_{\rm sat}.\end{aligned}$$ The optical depth is normalized so that $\tau =1$ when dark matter typically scatters once as it passes through the star.
[^2]: The multi-scatter capture rate can be obtained by setting $n(r) = \frac{N_n}{\frac{4}{3} \pi R^3}$ in Eq. , integrating $r$ from $0$ to $R$ and making the substitutions $g_1(w) \rightarrow g_{N}(w)$ and $\frac{\sigma}{\sigma_{sat}} \rightarrow p_{N}(\tau)$. [^3]: To estimate how much the constant density assumption alters the neutron star capture rate, consider an approximate neutron star density profile (ADP) $\rho_{NS}^{\rm ADP} (r) = 2.6 \times 10^{38} ~{\rm GeV/cm^3} \left( \frac{10~{\rm km}}{r} \right) $, which matches a 1.5 $M_{\odot}$, $R=$ 10 km neutron star. This can be compared to a constant density (CD) profile; such a neutron star would have $\rho_{NS}^{\rm CD} \simeq 4 \times 10^{38} ~{\rm GeV/cm^3}$. We can calculate the integrated optical depth $d \tau_{i} = n(r) \sigma_{nX} d \ell$, where $\ell$ runs along the path of the dark matter particle. Calculating this integrated optical depth for a dark matter particle that passes within a kilometer of the center of the neutron star, we find that, for trajectories passing deep within the neutron star, the optical depth in the approximate density profile case can be up to fifty percent larger than in the constant density case. This would somewhat aid capture in the multiscatter regime. Therefore, the bounds derived in this paper are somewhat conservative. [^4]: We have checked numerically that for $N \gtrsim 5$, the approximate expression in Eq. (\[eq:uNrel\]) matches the full expression Eq. (\[eq:gnfull\]) to within less than a percent for the applications presented in Section \[sec:results\]. [^5]: Technically, the general relativistic effects are most straightforwardly introduced into the differential capture rate $dC_N/dr$, which, upon integration, yields Eq. (\[eq:GRcorr\]) plus corrections. Given that we are already making an approximation in assuming straight trajectories through the star, we will neglect these corrections to Eq. (\[eq:GRcorr\]). [^6]: This implicitly assumes that the energy of all DM annihilation products goes to heating. It can be verified that the scattering length for neutrinos (and all more strongly coupled Standard Model particles) is much less than the neutron star radius. The exact way the temperature will rise requires knowledge of the equation of state of the star, which is beyond the scope of this paper, but would be an interesting topic for future research. [^7]: In the case of multiscatter capture, the probability for capturing a dark matter particle that traverses the star in $N$ scatters is given by $g_{N}(w) p_{N}(\tau)$, where these are defined in Section \[sec:detail\].
--- abstract: 'We develop a method of constraining the cosmic string tension $G\mu$ which uses the Canny edge detection algorithm as a means of searching CMB temperature maps for the signature of the Kaiser-Stebbins effect. We test the potential of this method using high resolution, simulated CMB temperature maps. By modeling the future output from the South Pole Telescope project (including anticipated instrumental noise), we find that cosmic strings with $G\mu > 5.5\times10^{-8}$ could be detected.' author: - Andrew Stewart - Robert Brandenberger title: 'Edge Detection, Cosmic Strings and the South Pole Telescope' --- Introduction ============ At very early times it is believed that the universe underwent a series of symmetry breaking phase transitions which led to the formation of different types of topological defects. Among them are linear topological defects known as cosmic strings (for reviews see e.g. [@2000csot.bookV; @Hindmarsh:1994re; @Brandenberger:1993by]). After creation, the cosmic strings form a random network of infinite strings and closed string loops, the arrangement of which evolves over time through string interactions. Cosmic strings can also have self-interactions that lead to the formation of closed loops via the exchange of endpoints, or *intercommutation* [@Shellard:1987bv]. When formed, cosmic string loops break off the longer segments and continue to oscillate, losing energy via gravitational radiation, until eventually decaying. Infinitely long strings, on the other hand, cannot decay into gravitational radiation and survive indefinitely. The string network eventually approaches a scaling regime in which the number of strings crossing a given Hubble volume is fixed and the strings contribute some fraction of the total energy in the universe. The existence of a scaling solution is supported by independent numerical simulations of the evolution of the cosmic string network [@Albrecht:1984xv; @PhysRevLett.60.257; @Allen:1990tv; @Albrecht:1989mk]. The quantity which characterizes the cosmic strings is their tension, $\mu$, which is equivalent to the mass per unit length. This tension is directly determined by the energy scale of the symmetry breaking during which the cosmic strings were formed. It is possible that cosmic strings could have been formed at many different epochs, meaning the tension of the cosmic strings can take a wide variety of values. When discussing cosmic strings it is more common to work with the dimensionless parameter $G\mu$, where $G$ is Newton’s constant. Until the late 1990s, cosmic strings were studied as potential seeds for structure formation [@Turok:1985tt; @Sato; @Stebbins]. The eventual discovery of the acoustic peaks [@Boomerang; @WMAP] in the angular power spectrum of the CMB led to cosmic strings being ruled out as the main origin of structure in favour of the inflationary paradigm, since the angular power spectrum predicted by cosmic strings consists of only a single broad peak [@Periv; @Albrecht; @Turok]. Despite this, there currently exists a renewed interest in cosmic strings fueled by the study of different cosmological models in which their formation is generically predicted (see [@Brandenberger:1988aj; @Jones:2003da; @Jeannerot:2003qv] for just a few possibilities).
It has also recently been shown that a contribution of less than 10% of the observed CMB power on large scales coming from cosmic strings is acceptable [@Pogosian:2003mz; @Wyman:2005tu; @Fraisse:2006xc; @Seljak:2006bg; @Bevis:2006mj; @Bevis:2007gh; @Pogosian:2008am]. The current bounds on the cosmic string tension come from a variety of measurements. The gravitational waves emanating from many string loops at different times produce a stochastic background which is the focus of current interferometer and pulsar timing experiments. Pulsar timing, specifically, places a bound $G\mu<10^{-7}-10^{-8}$ on the cosmic string tension [@Damour:2004kw; @Jenet:2006sv]. However, we note that in order to place a bound on $G\mu$ using gravitational wave constraints one must make assumptions about the size of the loops which are formed in the string network, the probability that strings will intercommute when crossing, and even the string model under consideration. Whereas the scaling solution for the long string network is well established, the distribution of loops is uncertain by several orders of magnitude. Therefore, the strength of the bounds obtained by considering gravitational radiation from string loops can be questioned. A more robust bound on the tension comes from the angular power spectrum of the CMB (since the spectrum obtains an important contribution from the long string network). Assuming a scaling solution of the long string network with the parameters from numerical studies of cosmic string evolution, the string contribution to the angular power spectrum of the CMB was determined, and the results of these studies translate directly into a bound $G\mu<5\times10^{-7}$ [@Wyman:2005tu; @Fraisse:2005hu]. Along with the above mentioned phenomena, there exists another observational signature unique to cosmic strings which could be directly detected, namely, linear discontinuities in the temperature of the CMB. This signature was first studied by Kaiser and Stebbins [@Kaiser:1984iv] and is usually referred to as the KS-effect. This effect occurs because the space-time around a straight cosmic string is flat, but with a wedge, whose vertex lies along the length of the string, removed. The angle subtended by the missing wedge, $\phi$, is determined by the tension of the cosmic string as [@Vilenkin:1981zs] $$\phi=8\pi G\mu\,.$$ For an observer looking at a source while a cosmic string is moving transversely through the line of sight between the two, the photons passing from the source to the observer along one side of the string will appear to be Doppler shifted relative to those passing along the other side due to this non-trivial geometry (see Figure \[string\]). If the source that the observer is viewing happens to be the CMB, this effect will manifest itself as discontinuities in the microwave background temperature along curves in the sky where strings are located. The magnitude of the step in temperature across a cosmic string is $$\label{KS} \frac{\delta T}{T}=8\pi G\mu\gamma_sv_s\,|\hat{k}\cdot(\hat{v_s}\times\hat{e_s})|\,,$$ where $v_s$ is the speed with which the cosmic string is moving, $\gamma_s$ is the relativistic gamma factor corresponding to the speed $v_s$, $\hat{v}_s$ is the direction of the string movement, $\hat{e}_s$ is the orientation of the string and $\hat{k}$ is the direction of observation [@Moessner:1993za]. 
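For orientation, the step amplitude of Eq. (\[KS\]) is simple to evaluate. The sketch below is an illustration only: the string velocity $v_s = 0.5c$ and the maximal geometric factor $|\hat{k}\cdot(\hat{v_s}\times\hat{e_s})| = 1$ are assumptions, chosen merely to show the scale of the temperature step for tensions of interest.

```python
# A small sketch evaluating the Kaiser-Stebbins temperature step of Eq. (KS) for
# illustrative string tensions; the geometry factor is taken at its maximal value.
import numpy as np

def ks_step(G_mu, v_s, geometry=1.0):
    """Fractional temperature discontinuity across a moving string."""
    gamma_s = 1.0 / np.sqrt(1.0 - v_s**2)
    return 8.0 * np.pi * G_mu * gamma_s * v_s * geometry

T_cmb = 2.725e6                       # CMB temperature in microkelvin
for G_mu in (5.5e-8, 1.0e-7, 1.0e-6):
    dT = ks_step(G_mu, v_s=0.5) * T_cmb
    print(f"G_mu = {G_mu:.1e}:  delta T ~ {dT:.2f} microK")
```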
For cosmic strings formed in a phase transition in the early universe, the “missing wedge” produced by a string has, at time $t$, a finite depth given by the Hubble radius $H^{-1}(t)$ at that time [@Joao]. Some work has already been dedicated to searching for the KS-effect in current CMB data [@Jeong:2006pi; @Lo:2005xt], but a cosmic string signal was not found, leading to a constraint on the tension $G\mu\lesssim4\times10^{-6}$. ![\[string\] The geometry of the space-time near a cosmic string. Shown here is a slice of the space-time perpendicular to the orientation of the string. The coloured area represents a missing wedge with deficit angle $\phi$, while the dashed lines represent the paths of photons travelling from a source to an observer and the arrow shows the direction of motion of the string.](geometry){width="0.75\linewidth"} In this work we implement a method of detecting the temperature discontinuities in the CMB produced by cosmic strings via the KS-effect using an edge detection algorithm commonly employed in image analysis, the Canny algorithm [@Canny:1986aa]. The motivation behind this choice is clear since the cosmic strings literally appear as edges in the CMB temperature. Depending on the sensitivity of the edge detection algorithm to the temperature edges, a bound on the cosmic string tension could then be imposed. This work is a continuation of a previous study [@Amsel:2007ki] which indicated that an edge detection method may lead to a significant improvement in the sensitivity to the presence of cosmic strings compared to previous direct searches for strings in the CMB. In this paper we improve the method proposed in [@Amsel:2007ki] and we investigate its application to surveys with a different set of specifications than those examined in that initial work. We are interested in the cosmic strings in the network that survive until later times. The times relevant to the production of an edge signature in the CMB are the time of last scattering until the present day. Based on the evolution of the network, cosmic strings are more numerous around the time of last scattering than later times. On today’s sky, those strings correspond to an angular scale of approximately 1$^\circ$. Therefore, an observation of the CMB with an angular resolution significantly less than 1$^\circ$ is necessary in order to be able to detect the edges related to these strings. With this in mind, we also focus on the application of the edge detection method to high resolution surveys of the CMB, particularly the future data from the South Pole Telescope. The South Pole Telescope (SPT) [@Ruhl:2004kv] is a 10m diameter telescope being deployed at the South Pole research station. The telescope is designed to perform large area, high resolution surveys of the CMB to map the anisotropies. The SPT is designed to provide 1$^\prime$ resolution in the maps of the CMB, making it ideal to search for the KS-effect. Based on previous results [@Amsel:2007ki], we believe that with such high resolution data our method could provide bounds on the cosmic string tension competitive with those of pulsar timing. The remainder of this paper is arranged as follows: In section \[secmap\], we discuss the CMB maps used in our analysis with a focus on the anisotropies coming from Gaussian fluctuations and cosmic strings. In Section \[seccanny\], we outline the edge detection algorithm we are using, highlighting the details of our particular implementation. 
In Section \[seccount\], we discuss how we quantify the edge maps output by the edge detection algorithm and we explain the statistical analysis used to determine if a significant difference has been detected. In Section \[secresults\], we present the results of running the edge detection algorithm on CMB maps and the possible constraints on the cosmic string tension that could be applied. We finish in Section \[secdiscuss\] with a discussion of our results. Map Making {#secmap} ========== For this initial investigation of edge detection as a method for constraining or even detecting cosmic strings, we generate CMB temperature anisotropy maps by means of numerical simulations and use these as the input for the edge detection algorithm. The simulated maps are constructed through the superposition of different temperature anisotropy components based on the type of effects being reproduced. We are interested in the simulation of small angular scale patches of the microwave sky, so we employ the flat-sky approximation [@White:1997wq]. In this approximation, the geometry of a small patch on the sky can be considered to be essentially flat. Thus, each map component, as well as the final map itself, is a two dimensional square image characterized by an angular size and an angular resolution. Specifically, we work with a square grid that has a size corresponding to the angular size being simulated, and a pixel size corresponding to the angular resolution being simulated. The pixels in the grid are indexed by two dimensional Cartesian coordinates $(x,y)$ and we take the upper left corner of the grid to be the origin. The common component in every simulated CMB map is a set of temperature anisotropies produced by Gaussian inflationary fluctuations. We simulate these Gaussian fluctuations such that they account for all of the observed power in the CMB. Thus, in the absence of any other sources the final simulated map is simply equivalent to the Gaussian component and is consistent with observations. That is, we define $$T(x,y) \, \equiv \, T_G(x,y)\,,$$ where $T(x,y)$ represents the final temperature anisotropy map and $T_G(x,y)$ represents the Gaussian component. To make a CMB map including the effects of cosmic strings, we simulate a separate component of string induced temperature fluctuations produced via the KS-effect. In linear perturbation theory, if there are two sources of fluctuations, the resulting temperature anisotropies are given by a linear superposition of the individual sources. Therefore, the total temperature map is obtained by simply summing the contributions from Gaussian noise and from cosmic strings. Note, however, that the power of the Gaussian component must be adjusted in order to obtain the total observed power of CMB fluctuations. That is, if $T_G(x,y)$ is the WMAP-normalized Gaussian signal, then, in a map including cosmic strings, it needs to be reduced by a scaling factor $\alpha$, the value of which is determined by the tension of the cosmic strings being simulated. Denoting the string component by $T_S(x, y)$, the total temperature map is $$T(x,y) \, \equiv \, \alpha\, T_G(x,y) + T_S(x,y)\,.$$ In this way the strings can contribute a fraction of the total power, while the final map is still in agreement with current measurements of the angular power spectrum of CMB anisotropies. Let us comment in more detail on the nature of this scaling.
We demand that the angular power of the final combined temperature map match the observed angular power for multipole values up to the first acoustic peak, i.e. $l\lesssim220$. We choose this multipole range because it is tightly constrained by current observations [@Komatsu:2008hk]. However, as mentioned above, the Gaussian component alone accounts for all of the observed angular power in the CMB. Thus, this demand is equivalent to requiring that the angular power of the combined map match that of a pure Gaussian component. Working in the flat-sky approximation allows us to replace the usual spherical harmonic analysis of the CMB fluctuations by a Fourier analysis [@White:1997wq]. We can then express our condition as $$\label{power} \langle|T_G(k<k_p)|^2\rangle \, = \, \alpha^2\langle|T_G(k<k_p)|^2\rangle + \langle|T_S(k<k_p)|^2\rangle\,,$$ where $k_p$ is the wavenumber corresponding to the first acoustic peak of the angular power spectrum of the CMB, $\langle|T_S(k<k_p)|^2\rangle$ is the average of the Fourier temperature anisotropy values from the string component for wavenumbers less than $k_p$ and $\langle|T_G(k<k_p)|^2\rangle$ is the equivalent object for the Gaussian component. Since the KS-induced temperature discontinuities are proportional to the string tension, the average $\langle|T_S(k<k_p)|^2\rangle$ should go as the cosmic string tension squared. Therefore, if we define a reference cosmic string tension, $G\mu_0$, we have $$\langle|T_S(k<k_p)|^2\rangle \, = \, \langle|T_S(k<k_p)|^2\rangle_0 \left(\frac{G\mu}{G\mu_0}\right)^2\,,$$ where $\langle|T_S(k<k_p)|^2\rangle_0$ is the average for a string component corresponding to the reference tension and $G\mu$ is the cosmic string tension corresponding to the string component on the left-hand side of the equation. Substituting this into Equation (\[power\]) we can solve for the final form of the scaling factor: $$\label{alpha} \alpha^2 \, = \, 1 - \frac{\langle|T_S(k<k_p)|^2\rangle_0}{\langle|T_G(k<k_p)|^2\rangle}\left(\frac{G\mu}{G\mu_0}\right)^2\,.$$ The benefit of having $\alpha$ in this form is that we need only calculate the ratio of averages once, using the reference tension. After this we can calculate the value of the scaling factor using only the cosmic string tension of the given simulation, $G\mu$. As mentioned in the Introduction, studies of combining string anisotropies and Gaussian anisotropies [@Pogosian:2003mz; @Pogosian:2008am] have concluded that, on the basis of the angular power spectrum of CMB anisotropies, a cosmic string contribution of less than $10\%$ of the observed CMB power on large scales cannot be ruled out in general. However, in calculating the angular power spectrum, coherent features in position space such as the line discontinuities induced by the Kaiser-Stebbins effect are washed out. Thus, we expect that better limits on the string tension can be established by making use of edge detection algorithms working in position space. A third component which we must include in the final map is a simulation of instrumental noise. For simplicity, we simulate an instrumental noise component that is white noise with some given maximum amplitude in the temperature difference $\delta T_{N,max}$. If an instrumental noise component is included we do not need to perform any additional scaling of the initial Gaussian component of the map since the instrumental noise does not modify the actual sky map. Thus, the instrumental noise component is simply added directly to the other components. 
Denoting the noise component by $T_N(x,y)$, we have $$T(x,y) \, \equiv \, T_G(x,y) + T_N(x,y)$$ for a simulation without cosmic strings, or $$T(x,y) \, \equiv \, \alpha\, T_G(x,y) + T_S(x,y) + T_N(x,y)$$ for a simulation including cosmic strings. The dominant portion of the final simulated map is the Gaussian temperature fluctuations. As such, these Gaussian fluctuations represent the most significant “noise” when trying to directly detect the effect of cosmic strings with the edge detection algorithm. The significance of the instrumental noise component in the final map is determined by the maximum amplitude of the noise, which should in general be small compared to the amplitude of the Gaussian fluctuations. The size of the temperature anisotropies in the string component depends directly on the tension of the cosmic strings which are being simulated, as described by the KS-effect relation. For interesting values of the string tension, the amplitude of the string-induced anisotropies will lie from a factor of a few up to orders of magnitude below the amplitude of the Gaussian temperature anisotropies, thus presenting the difficulty in detecting them directly. Examples of each of the three map components are shown in Figure \[comps\]. Before moving on to discuss the edge detection algorithm itself, we first review our methods for generating the Gaussian and string components. The Gaussian Component ---------------------- As touched on above, the spherical harmonic expansion of the CMB temperature anisotropies can be replaced by a Fourier expansion when using the flat-sky approximation [@White:1997wq]. Therefore, when generating the component of Gaussian fluctuations, we choose to work on a grid in Fourier space where each pixel in the grid is indexed by the coordinates $(k_x,k_y)$, which are the components of the wavevector pointing to that pixel. The size and resolution of the grid still correspond to the two angular scales in the simulation. The advantage of being able to use a Fourier analysis is that it greatly simplifies the calculations, and the value of the temperature anisotropy at a particular pixel on the grid is then given by the relation $$\label{gtemp} \frac{\delta T_{G}}{T}(k_x,k_y) \, = \, g(k_x,k_y)\,a(k_x,k_y)\,,$$ where $g(k_x,k_y)$ is a random number taken from a normal probability distribution with mean zero and variance one [@Bond:1987ub]. The quantity $a(k_x,k_y)$ is the Fourier space equivalent of $a_{lm}$ in the usual spherical harmonic expansion and is related to the angular power spectrum of the temperature anisotropies in the same way, $$\label{a} \langle|a(k_x,k_y)|^2\rangle \, = \, C_l\,.$$ In the flat-sky approximation the multipole moment is related to pixel position in the grid by $$\label{mpol} l \, = \, \frac{2\pi}{\theta}\sqrt{k_x^2+k_y^2}\,,$$ where $\theta$ is the angular size of the survey area [@Bond:1987ub]. We compute the Fourier temperature fluctuations pixel by pixel using the above equations. It is clear from Equation (\[mpol\]) that, in general, the largest multipole moment required for a simulation increases as the resolution increases. Since we are interested in simulating high resolution CMB maps, we generate the COBE normalized angular power spectrum of the CMB to very large multipole moments using the CAMB software with input cosmological parameters determined by surveys at lower angular resolution. 
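As a rough illustration of this Fourier-space construction, and not the pipeline actually used in this work, the following Python sketch builds a single Gaussian sub-component on a small grid. The helper name and the arrays `ell` and `cl` standing in for the tabulated CAMB spectrum are hypothetical, and reality conditions and overall normalization are glossed over.

```python
import numpy as np

def gaussian_subcomponent(npix, theta_deg, ell, cl, seed=None):
    """Flat-sky Gaussian CMB realization (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(theta_deg)                  # angular size of the patch
    kx, ky = np.meshgrid(np.arange(npix), np.arange(npix), indexing="ij")
    # multipole assigned to each Fourier pixel, l = (2*pi/theta)*sqrt(kx^2 + ky^2)
    L = (2.0 * np.pi / theta) * np.hypot(kx, ky)
    # linear interpolation of C_l at the (generally non-integer) multipoles;
    # assumes ell is sorted in increasing order
    cl_grid = np.interp(L, ell, cl, left=0.0, right=0.0)
    # Fourier amplitude: a unit-normal random number times sqrt(C_l),
    # so that the mean squared amplitude equals C_l
    a = np.sqrt(cl_grid) * rng.standard_normal((npix, npix))
    # inverse FFT back to position space; keep the real part as the map
    return np.real(np.fft.ifft2(a))
```

The actual input power spectrum and its cosmological parameters are specified next.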
To be precise, we choose our input parameters to be those derived using the CMBall data set, which combines the results from multiple surveys [@Reichardt:2008ay]. Depending on the pixel position, the value of $l$ as calculated by Equation (\[mpol\]) can take non-integer values, whereas the angular power spectrum is computed for only integer values. In these cases, we simply approximate the value of the angular power spectrum at any given $l$ using a linear interpolation. Once we have computed the value of each pixel in the grid, we take the inverse Fourier transform of the array using a fast Fourier transform (FFT) algorithm, which produces a temperature anisotropy map in position space. By choosing the origin of the grid to be at the top left corner in the maps, we have introduced a preferred direction into the simulation of the Gaussian fluctuations. To compensate for this asymmetry, we construct the final Gaussian component, $T_G(x,y)$, by superimposing four separate sub-components, which we label as $T_1...T_4$, each computed separately using the method described above. When combining these sub-components, we reflect each along one of the four axes on the grid, eliminating any irregularity in the final map. Therefore, the Gaussian component is defined as $$\begin{aligned} T_G(x,y) \, &\equiv& \, \frac{1}{2}\big[T_1(x,y) + T_2(x_{max}-x,y) \nonumber \\ && \, + T_3(x,y_{max}-y) + T_4(x_{max}-x,y_{max}-y)\big]\,,\end{aligned}$$ where $x_{max}$ and $y_{max}$ are the maximal $x$ and $y$ values based on the simulation parameters. The factor of $1/2$ in front of the sum is required to maintain the original standard deviation. The String Component -------------------- Since the focus of this work is on testing the edge detection method, not the details of the cosmic string network evolution, we utilize a toy model of the network for simplicity. We then examine the resulting temperature anisotropies caused by the strings which photons encounter between the time of last scattering and the present day. We choose to use the toy model originally presented by Perivolaropoulos in [@Perivolaropoulos:1992if]. In this model, we first separate the period between the present time, $t_0$, and the time of last scattering, $t_{ls}$, into N Hubble time steps such that $t_{i+1}=2t_i$. For a redshift of last scattering $z_{ls}=1000$ we then have [@Moessner:1993za] $$N \, = \, \log_2\left(\frac{t_0}{t_{ls}}\right)\simeq15\,.$$ For large redshifts and assuming $\Omega_0=1$, the angular size of the Hubble volume at a given Hubble time is approximated by $\theta_{H_i}\sim z_{i}^{-1/2}\sim t_{i}^{1/3}$. Therefore, we have $\theta_{H_{ls}}\simeq z_{ls}^{-1/2}\simeq1.8^\circ$ for the Hubble volume corresponding to the time of last scattering and $\theta_{H_{i+1}}\simeq2^{1/3}\theta_{H_{i}}$ for all subsequent Hubble time steps [@Moessner:1993za]. At each Hubble time, a network of long straight strings with a length equal to two times the size of the Hubble volume at that time, each with random position, orientation and velocity, is laid down. The network of strings produced at each Hubble time is assumed to be uncorrelated with that of the previous Hubble time. This is justified since cosmic strings move with relativistic speeds, meaning that between Hubble times there will be multiple string interactions, causing the network to enter into a completely different configuration. 
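The bookkeeping of these time steps is simple enough to check numerically; the short sketch below (which makes no claim to match the exact conventions of the simulation code) reproduces the numbers quoted above.

```python
import numpy as np

z_ls = 1000.0                                  # redshift of last scattering
# in matter domination t ~ (1+z)^{-3/2}, so t_0/t_ls ~ z_ls^{3/2} and N = log2(t_0/t_ls)
N = int(round(np.log2(z_ls ** 1.5)))           # ~ 15 Hubble time steps

# theta_H ~ t^{1/3} grows by 2^{1/3} per step, starting from ~ z_ls^{-1/2} ~ 1.8 deg
theta_H = np.degrees(z_ls ** -0.5) * 2.0 ** (np.arange(N) / 3.0)
print(N, theta_H[0])                           # 15, ~1.8 degrees at last scattering
```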
For a specific Hubble time step $t_i$ we start with an extended region that has a total angular size equivalent to the angular size of the string component being simulated plus two times the angular size of the Hubble volume at that particular Hubble time. The number of strings $n_i$ that should exist in that particular region is then given by the scaling solution $$n_i \, = \, M\frac{(\theta+2\theta_{H_i})^2}{\theta_{H_i}^2}\,,$$ where $M$ is the number of cosmic strings crossing each Hubble volume and $\theta$ is the angular size of the string component being simulated [@Moessner:1993za]. As usual, we work on a square grid, this time placed over the entire extended region, with pixel size still given by the angular resolution being considered. Pixels within the entire extended area are then chosen at random to be the midpoints of strings, with a probability such that the average number of strings in a single Hubble volume is in agreement with the number $M$ of the scaling solution. If a pixel is chosen to be a midpoint, we choose a random orientation about that pixel and we place a straight string of length $2\theta_{H_i}$. We then simulate the temperature fluctuation produced by that string by adding a temperature anisotropy $$\label{beta} \frac{\delta T_S}{T} \, = \, 4\pi G\mu\gamma_s v_s r$$ to a rectangular region on one side of the string, and subtracting the same amount from a rectangular region on the other side. This temperature anisotropy corresponds to the KS-effect, with $r=|\hat{k}\cdot(\hat{v}_s\times\hat{e}_s)|$ taking into account the projection effects. The direction of observation $\hat{k}$ is approximately constant over the entire field of view while the quantity $\hat{v}_s\times\hat{e}_s$ is a random unit vector since both the string orientation and velocity are random. Thus, the value of $r$ is uniformly distributed over the interval \[0,1\] [@Moessner:1993za]. In Equation (\[beta\]), we take the RMS speed of the strings to be $v_s=0.15$ [@Moessner:1993za], so the amplitude of the fluctuation is determined entirely by the string’s tension and its orientation. Each rectangular region affected by the temperature fluctuation has a length $2\theta_{H_i}$ along the direction of the string and extends a distance $\theta_{H_i}$ in the direction perpendicular to the string [@Joao]. Thus, each cosmic string gives rise to five separate temperature discontinuities: one at its position, two parallel to it at a distance $\theta_{H_i}$ and two perpendicular to the string at the endpoints. After placing all of the cosmic strings and calculating the temperature fluctuation for each, we have finished simulating the cosmic string network for the given time step. Since we began with a region which is larger than the string component we wanted to simulate in the first place, we must crop the larger area to the correct size. We choose to discard pixels equally from all four sides of the extended area, so that we retain only those from the central region of the larger area. By identifying the correctly sized simulated area with the centre of the extended area, one can see that what we essentially did when first defining the extended region was to enlarge the actual simulation area by a Hubble volume in each direction. The reason that we expand our simulated area in this way is that any string whose midpoint is within a distance $\theta_{H_i}$ of the actual area we want to simulate could enter into it. 
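A minimal sketch of the per-time-step string placement just described is given below. It is illustrative only: the helper name, the pixel-based geometry and the treatment of the relativistic factor $\gamma_s$ (absorbed into the quoted speed) are simplifying assumptions rather than the actual simulation code.

```python
import numpy as np

def add_string_step(dT_map, n_strings, theta_H_pix, G_mu, rng):
    """Add the toy KS-effect contribution of one Hubble time step (sketch only).

    dT_map      : 2-D array of delta T / T covering the *extended* region
    theta_H_pix : Hubble angular scale at this time step, in pixels
    """
    ny, nx = dT_map.shape
    v_s = 0.15                               # RMS string speed (gamma_s ~ 1 absorbed here)
    h = int(theta_H_pix)
    for _ in range(n_strings):
        x0, y0 = rng.integers(0, nx), rng.integers(0, ny)   # random midpoint
        phi = rng.uniform(0.0, np.pi)                        # random orientation
        r = rng.uniform(0.0, 1.0)                            # random projection factor
        dT = 4.0 * np.pi * G_mu * v_s * r                    # KS temperature jump
        for dx in range(-h, h + 1):
            for dy in range(-h, h + 1):
                x, y = x0 + dx, y0 + dy
                if not (0 <= x < nx and 0 <= y < ny):
                    continue
                u = dx * np.cos(phi) + dy * np.sin(phi)      # coordinate along the string
                s = -dx * np.sin(phi) + dy * np.cos(phi)     # signed perpendicular distance
                # +dT on one side of the string, -dT on the other, within the
                # 2*theta_H by theta_H rectangles described in the text
                if abs(u) <= h and 0 < abs(s) <= h:
                    dT_map[y, x] += dT * np.sign(s)
```

In a full simulation a routine of this kind would be applied over the extended region at each of the fifteen time steps.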
We must therefore also account for strings which lie around the edges of the area of interest, not only those centred within it. The final string-induced anisotropy map is given by the superposition of the effects of all of the strings in all of the Hubble volumes. Therefore, to produce the final cosmic string component $T_S(x,y)$, we simply sum together all fifteen sub-components pixel by pixel. This superposition approximates the contribution from the entire, more complex cosmic string network. In the model described above, we have fixed values for the speed of the strings, the length of the strings and the depth of the rectangular temperature fluctuation region around the string. These values were obtained from particular numerical simulations [@Moessner:1993za]; however, these parameters can vary significantly for different models of the string network (see [@2000csot.bookV] for a review) and should not be considered as established. We also note that in this toy model cosmic string loops and their subsequent effects are not included. Cosmic string loops will also produce CMB anisotropies. However, based on the current knowledge of the distribution of scaling strings, the string loop contribution to the CMB is believed to be sub-dominant. This justifies our neglecting these effects. The Canny Edge Detection Algorithm {#seccanny} ================================== When looking for edges in an image we are looking for curves across which there is a strong intensity contrast. The strength of an edge can then be quantified by the magnitude of the contrast from one side of the edge to the other, or equivalently, the magnitude of the gradient across the edge. For CMB temperature anisotropy maps, the intensity that we are dealing with is simply the amplitude of the fluctuations. Thus, we define the edges in the CMB maps as lines across which the temperature difference is large. To search for these edges we employ the Canny edge detection algorithm [@Canny:1986aa], which is one of the most commonly used edge detection methods in image analysis. Figure \[canny\] shows an example of a final map of edges generated by the Canny algorithm along with an intermediate map produced during the edge detection process. To clearly illustrate the result of each stage of the edge detection, we present maps corresponding to the same cosmic string component shown in Figure \[comps\] with no other components added to it; note, however, that this does not represent a legitimate final simulated CMB map. In the following sections we review the steps involved in applying the Canny algorithm to CMB maps and how these images are generated. Non-maximum Suppression ----------------------- Since we are interested in temperature gradients, the first step of the Canny edge detection algorithm is to simply compute the gradient of the temperature anisotropy map and use it to determine which pixels could be part of an edge. We first construct two square filters $F_x(x,y)$ and $F_y(x,y)$, which are first-order derivatives of a two dimensional Gaussian function along each of the two map coordinates $(x,y)$. We then apply each of these filters to the temperature map separately by convolving the two using an FFT. This produces two new maps $G_x(x,y)$ and $G_y(x,y)$, which are the components of the gradient along the $x$-direction and $y$-direction. 
With these components we can then construct another new map $$G(x,y) \, = \, \sqrt{G_x^2(x,y)+G_y^2(x,y)}\,,$$ which is the map of the gradient magnitude, or edge strength, corresponding to the original temperature anisotropy map. We can also construct a second map $$\label{gang} \theta_G(x,y) \, = \, \arctan\left(\frac{G_{y}(x,y)}{G_{x}(x,y)}\right)\,,$$ which is the map of the gradient angle, or gradient direction. In the above equation the sign of both components is taken into account so that the angle is placed in the correct quadrant. Therefore, the arctangent has a range of $(-180^\circ,180^\circ]$. However, at each pixel on a square grid there are only eight distinct directions, which form four axes. In order to relate the gradient direction as calculated by Equation (\[gang\]) to one that we can trace on the grid, we approximate the value of $\theta_G(x,y)$ at each pixel to lie along one of the eight grid directions. We do this by simply replacing the value of $\theta_G(x,y)$ with the angle corresponding to the closest grid direction. For example, if the gradient direction takes any of the values $-22.5^\circ\leq\theta_{G}(x,y)<22.5^\circ$ it would be replaced by $\theta_{G}(x,y)=0^\circ$. In the Canny algorithm, part of the definition of a pixel that is considered to be on an edge is that it must be a local maximum in the gradient magnitude. By local maximum we mean that the gradient magnitude at a given pixel is larger than that of both pixels which neighbour it along the axis defined by the gradient direction at that same pixel. Using the gradient magnitude and direction maps, it is straightforward to check the local maximum condition pixel by pixel and determine which pixels could be part of an edge and which could not. Since we are only interested in constructing a final map of edges, if a pixel does not satisfy the local maximum condition we immediately discard that pixel. Therefore, this process is referred to as *non-maximum suppression*. Figure \[canny\] shows a gradient magnitude map after non-maximum suppression has been performed. Many of the original pixels have been discarded, as expected, and we are left with a rough map of edges. Although curves corresponding to certain edges in the original temperature anisotropy component can be seen, there are many other pixels marked as local maxima corresponding to extremely weak edges, making the signal from stronger edges difficult to detect. Thresholding with Hysteresis {#hysteresis} ---------------------------- When performing non-maximum suppression we only compared a single pixel with two of its neighbours to determine if it could be part of an edge. Pixels with a small gradient magnitude may still have been marked as local maxima if the gradient magnitudes of their neighbours were also small. As mentioned above, Figure \[canny\] shows that this is indeed the case. The magnitude at such pixels can in fact be so small that we do not want to consider them as edge pixels, since they can dilute the more significant signal coming from stronger edges. In addition, we want to detect edges which appear due to cosmic strings via the KS-effect. Therefore, we expect the gradient direction to be consistent across the length of the string induced edge. This directionality needs to be taken into account to determine which local maxima pixels belong to the same string edge. Taking these two points into consideration, we must further expand our definition of exactly what constitutes an edge pixel. 
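Before turning to the thresholds, the gradient computation and non-maximum suppression described above can be sketched as follows. This is a rough illustration with hypothetical helper names, not the implementation used in this work, and the Gaussian filter width is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_maxima(t_map, sigma=2.0):
    """Gradient magnitude/direction and non-maximum suppression (sketch only)."""
    # first derivatives of a Gaussian along the two grid axes (rows, columns)
    Gy = gaussian_filter(t_map, sigma, order=(1, 0))
    Gx = gaussian_filter(t_map, sigma, order=(0, 1))
    G = np.hypot(Gx, Gy)                               # gradient magnitude (edge strength)
    ang = np.arctan2(Gy, Gx)                           # gradient direction in (-pi, pi]
    ang_q = np.round(ang / (np.pi / 4)) * (np.pi / 4)  # snap to the eight grid directions
    keep = np.zeros_like(G, dtype=bool)
    ny, nx = G.shape
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            dy = int(np.round(np.sin(ang_q[y, x])))
            dx = int(np.round(np.cos(ang_q[y, x])))
            # a pixel survives only if it beats both neighbours along its gradient axis
            if G[y, x] > G[y + dy, x + dx] and G[y, x] > G[y - dy, x - dx]:
                keep[y, x] = True
    return G, ang_q, keep
```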
The Canny algorithm outlines a process of applying multiple thresholds to define the edges in an image, known as *thresholding with hysteresis*. First, we choose an upper gradient threshold, $t_u<1$, such that we can then define a pixel which is definitely part of an edge, which we name a *true-edge pixel*, as one which is not only a local maximum but also satisfies $$G(x,y) \, \geq \, t_uG_m\,.$$ Here $G_m$ is the mean maximum gradient magnitude computed from simulated temperature maps which contain only strings. The value of $G_m$ depends on the parameters of the simulation being performed, most notably the string tension, and must be computed separately for each parameter set using a selected number of simulated string maps. One can think of $G_m$ as representing the strongest possible edge that could be formed by cosmic strings alone. Therefore, with this threshold, we are simply stating that if the gradient magnitude at a given pixel is at least some chosen fraction of the maximum possible, then it must be a true-edge pixel. It is not sufficient, however, to define the edges using only one threshold because the gradient magnitude can fluctuate at each pixel along the length of an edge. This variation can be caused by both instrumental noise and the random nature of the Gaussian anisotropies. If we applied only an upper threshold, we would reject pixels at which the gradient magnitude fluctuates below that threshold but which should in fact still be considered part of a given edge. This would lead to edges being cut into smaller segments, making them look like dashed lines, rather than continuous curves on the map. To avoid this, we also choose a lower gradient threshold, $t_l<t_u$, and define a pixel which is possibly part of an edge, which we name a *semi-edge pixel*, as a local maximum pixel satisfying $$t_lG_m \, \leq \, G(x,y) \, < \, t_uG_m\,.$$ If a local maximum pixel still falls below the lower threshold then it is immediately rejected. This rejection enforces the requirement that an edge pixel have some minimum strength, and cures the problem of local maxima with extremely small gradient magnitudes being included in the final edge map. Since we are interested in edges appearing due to the presence of cosmic strings, we also apply a “cutoff” threshold such that we reject all pixels for which $$G(x,y) \, > \, t_c G_m\,,$$ where $t_c\geq1$. We apply this third threshold because the Gaussian temperature fluctuations in the CMB map dominate those coming from the cosmic strings. As such, they lead to edges with much stronger gradient magnitudes, that is, greater than $G_m$. If we only applied the upper bound $t_u$, these edges would overwhelm the edge detection algorithm, washing out the cosmic string signal. By setting a cutoff threshold, we can discard the pixels with a gradient magnitude which we consider to be too strong to have been caused by cosmic strings, and keep only those representing the cosmic string signature. We choose $t_c\geq1$ because we also consider the slight enhancement of weak edges corresponding to Gaussian fluctuations, as a result of the underlying cosmic string edges, to be part of the cosmic string signal. After applying the thresholds as described above, we then further assert that any semi-edge pixel which is in contact with a true-edge pixel and has the appropriate gradient directionality is also a true-edge pixel sharing the same edge. 
By in contact, we mean that the semi-edge pixel is one of the six neighbouring pixels of the true-edge pixel which do not lie along the gradient direction calculated at the position of the true-edge pixel. This definition stems from the fact that the two directions perpendicular to the gradient axis represent the edge axis, while the remaining four directions represent the two axes which are next to parallel to the edge axis. Essentially, we are stating that in order to be considered part of the same edge the semi-edge pixel must lie along (or almost along) the edge axis, and that it would be inconsistent for a pixel sharing the same edge to lie along the gradient direction. By appropriate gradient directionality, we mean that the semi-edge pixel also has a gradient direction which is parallel or next to parallel to the gradient direction calculated at the position of the true-edge pixel. The comparison of the gradient directions represents our demand that the temperature gradient be consistent along an entire edge. We scan the remaining pixels in the map to check which semi-edge pixels satisfy the above conditions. The ones which do are immediately changed to true-edge pixels. This allows us to fill the gaps which occur between true-edge pixels due to both types of noise, and avoid the incorrect breaking up of edges. Once a semi-edge pixel has been changed to a true-edge pixel it may then have another semi-edge pixel neighbouring it which needs to be changed, and so on. The scanning technique takes this into account and any connected series of semi-edge pixels will all be correctly changed to true-edge pixels, ensuring that the entire edge is correctly identified. After scanning the map we consider all of the edges in the map to have been traced. At this point, if a pixel is still marked as a semi-edge pixel, we assume that it is not in contact with a true-edge pixel in any way, and it is rejected. The edge detection process is then finished, and the end result is the final map of true-edge pixels corresponding to the original temperature anisotropy map. Figure \[canny\] shows a final edge map after thresholding with hysteresis has been performed. Many of the pixels appearing in the map of local maxima have now been rejected, especially those with very small gradient magnitudes, and the stronger edges are now much better defined. This is a direct result of applying the thresholds and directionality conditions. Comparing the original temperature anisotropy map to the final edge map, it is clear that not only is the Canny algorithm good at locating the edges which are clearly visible, but that it is also sensitive to the faint edges which are not easily detectable by eye. Edge Length Counting and Statistical Analysis {#seccount} ============================================= To facilitate a comparison with edge maps generated from different input temperature anisotropy maps, we need a way to quantify each individual edge map. Since we are considering cosmic strings as a source of edges in CMB temperature anisotropy maps, one might intuitively expect that in the presence of strings one would observe a larger number of edges of all lengths, or at least a larger number in some finite range of lengths. With this in mind, we employ a simple method of quantifying the edge maps, which is to record the length of each edge appearing in the edge map. 
We define a single edge as a chain of true-edge pixels where each subsequent pixel is in contact with the previous pixel and has a similar gradient direction (both in the same sense as described in the previous section). When scanning the final edge map, we count the number of pixels appearing in each separate edge. These values are exactly the lengths of the edges in units of pixels. With this data we can then construct a histogram of the total number of edges of each possible length, which corresponds to the original temperature anisotropy map. We do not consider a single pixel to represent an edge; therefore, the minimum edge length that we include in our histograms is two pixels. If there are any single pixels marked as edges then we simply ignore them. After generating histograms for different input maps, we need to develop a way to compare them and look for differences. Specifically, we are looking for a change in the distribution of the total number of edges between an edge map corresponding to a simulation without cosmic strings and an edge map corresponding to a simulation with cosmic strings. However, both the Gaussian and string components in the simulated CMB temperature anisotropy maps are generated using random processes. If we were to compare two histograms generated from only one simulated temperature anisotropy map each, we would not be able to draw a very meaningful conclusion. Therefore, to make our comparison more robust, we simulate many temperature anisotropy maps with the same input parameters and perform the edge detection and length counting on each one separately. This provides a set of histograms from which we can then compute the mean number of edges of each length occurring over all the runs. We also compute the standard deviation from each mean value. In the end this provides us with a new *averaged histogram* of edge lengths that has statistical error bars. Comparing two of these averaged histograms then allows us to assign a statistical significance to the difference in the distributions. From this point on, whenever we mention a histogram we mean an averaged histogram computed using many simulations. When comparing two histograms, we compare the mean value for each specific length separately, rather than perform a single general test based on the overall shapes of the distributions. We prefer to treat each bin separately because each has a separate standard deviation associated with it. Furthermore, we assume that the underlying values used to compute each mean are normally distributed. That way we can use Student’s t-statistic to determine the significance of the difference at each length. For two samples of equal size $n$, Student’s t-statistic is defined as $$\label{t} t \, = \, \left(\overline{N}_1-\overline{N}_2\right)\sqrt{\frac{n}{(\sigma_{1})^{2}+(\sigma_{2})^2}}\,,$$ where $\overline{N}_1$ and $\sigma_{1}$ are the mean and sample standard deviation of the first sample and $\overline{N}_2$ and $\sigma_{2}$ are the mean and sample standard deviation of the second sample. Given two histograms, we compute $t$ for each length occurring in the two histograms for which $\overline{N}_i\geq3\sigma_i$, where $i=1,2$. This constraint on the lengths we consider stems from our assumption that the underlying distribution of each mean value is normal. 
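As a concrete, purely illustrative sketch of this per-length comparison, including the $3\sigma$ restriction discussed here and anticipating the Fisher combination introduced in the next paragraph, one might write something like the following. The two-sided p-value is an assumption, since the text does not specify which tail is used, and the helper name is hypothetical.

```python
import numpy as np
from scipy import stats

def compare_histograms(counts_a, counts_b):
    """Compare two sets of edge-length histograms (sketch only).

    counts_a, counts_b : arrays of shape (n_runs, n_lengths), the number of
    edges of each length found in each map of the two simulated sets.
    """
    n = counts_a.shape[0]                        # equal sample sizes assumed
    m1, m2 = counts_a.mean(axis=0), counts_b.mean(axis=0)
    s1, s2 = counts_a.std(axis=0, ddof=1), counts_b.std(axis=0, ddof=1)
    # keep only lengths whose mean count exceeds 3 sigma in both sets
    ok = (m1 >= 3 * s1) & (m2 >= 3 * s2)
    t = (m1[ok] - m2[ok]) * np.sqrt(n / (s1[ok] ** 2 + s2[ok] ** 2))
    # per-length p-value from a t-distribution with 2n-2 degrees of freedom
    # (two-sided here, as an assumption)
    p = 2.0 * stats.t.sf(np.abs(t), df=2 * n - 2)
    # Fisher's combined probability test; the degrees of freedom equal twice the
    # number of combined p-values, i.e. 2*L_m - 2 when every length contributes
    chi2 = -2.0 * np.sum(np.log(p))
    return stats.chi2.sf(chi2, df=2 * len(p))
```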
Since it would be inconsistent to consider negative values for the total number of edges at any given length, we choose only lengths for which the mean number of edges is positive at the $3\sigma$ level. The p-value corresponding to each $t$ is then computed from a t-distribution with $2n-2$ degrees of freedom. We then combine the probabilities calculated for each length, denoted by $p_L$, into a single statistic which characterizes the difference between the two histograms. Using Fisher’s combined probability test, we can define the new statistic $\chi^2$ as $$\chi^{2} \, = \, -2\sum^{L_{m}}_{L=2}\ln(p_{L})\,,$$ where $L_{m}$ is the maximum length at which a p-value was computed. The final p-value corresponding to the statistic $\chi^2$ is then determined from a chi-square distribution with $2L_{m}-2$ degrees of freedom. The final step is to compare this single p-value to a significance level $\epsilon$ to conclude whether or not the difference in the two histograms is significant. We choose to work with the customary significance level $\epsilon=0.0027$ corresponding to $3\sigma$ of a normal distribution. If our p-value is less than $\epsilon$ we state that the difference in the two edge maps is statistically significant. Results {#secresults} ======= We present the results of running the Canny edge detection algorithm on simulated CMB anisotropy maps in two parts. First, we report the results for simulations which are designed to mimic the expected output from the SPT. Using these results, we determine what kind of bound on the cosmic string tension one could hope to achieve using the edge detection method on data from that survey. Second, we present the results for simulations corresponding to a hypothetical survey that has different specifications than those of the SPT. We use these results to investigate how the potential constraint on the tension changes with respect to the design of the survey. The SPT is capable of producing a 4,000 square degree survey of the anisotropies in the CMB [@Ruhl:2004kv]. To replicate the same amount of sky coverage, we simulate 40 separate $10^\circ\times10^\circ$ maps, where the angular resolution of each of these maps is 1$^\prime$ per pixel, again matching that specified for the SPT. To test the edge detection method, we simulate two separate sets of 40 maps, the first set including the effect of cosmic strings, and the second set excluding the effect of cosmic strings. Each set of maps gives rise to a histogram of edge lengths via the edge detection and edge length counting algorithms. We then compare these two histograms using the statistical analysis described in Section \[seccount\] to determine if the difference in the distributions is significant. We repeat this process for many different values of the cosmic string tension, until we can no longer identify a statistically significant difference in the two histograms. Figure \[compare\] shows a side by side comparison of a simulated CMB map without a cosmic string component and a simulated CMB map which does include a cosmic string component. The effect of the cosmic strings in the final temperature anisotropy map is not apparent and any difference in the typical structure between the two maps is unnoticeable by eye. Figure \[comparehist\], on the other hand, shows a histogram corresponding to a set of maps without a cosmic string component and a histogram corresponding to a set of maps with a cosmic string component. 
The two histograms show that the edge detection method is in fact able to detect a difference which is not evident by eye, with maps including strings having slightly higher mean values for certain lengths. Although the difference in histograms may not seem large, this particular example would generate a significant result. ![\[comparehist\] Comparison of histograms for maps with and without a component of cosmic string induced fluctuations. Each histogram corresponds to a set of 40 simulated CMB maps. The angular size of each map was $10^\circ\times10^\circ$ and the angular resolution of each was 1$^\prime$ per pixel (360,000 pixels). In the maps including a cosmic string component, the free string parameters were taken to be $G\mu=6\times10^{-8}$ and $M=10$, while the scaling factor in the map component addition was $\alpha=0.987$. In the edge detection algorithm the gradient filter length was 5 pixels and the thresholds were $t_u=0.25$, $t_l=0.10$ and $t_c=3.5$. The value of $G_m$ was calculated using the same cosmic string tension given above. The height of each bar corresponds to the mean number of edges at that edge length. The error bars represent a spread of $3\sigma$ from the mean value, where $\sigma$ is the standard deviation of the mean. Shown here are only the lengths for which the mean is greater than $3\sigma$ in both histograms.](comp_hist.eps){width="0.75\linewidth"} Although the angular size and resolution of the simulation are determined by the specifications of the survey in question, the values of the other free parameters in each step of the process must also be fixed. We take the number of cosmic strings per Hubble volume in all of the string component simulations to be $M=10$ [@Moessner:1993za], regardless of the cosmic string tension. In every run of the edge detection algorithm, we choose the gradient filter length to be 5 pixels, the value of the upper threshold to be $t_u=0.25$ and the value of the lower threshold to be $t_l=0.10$. These values for the thresholds may appear small, but as one can see from the scale in Figure \[canny\], the gradient magnitude in the string component can take a large range of values. Therefore, $G_m$ can be quite a bit larger than the average gradient magnitude on a string induced edge, so we must choose low values for the thresholds in order not to throw away the entire string signal. We have not mentioned the value of the scaling factor in the map addition, $\alpha$, nor the value of the cutoff threshold, $t_c$. The reason is that we do not fix the values of these two parameters across all of the runs. In the case of the scaling factor, its value must change for each given cosmic string tension, as described by Equation . The value of the cutoff threshold, on the other hand, is chosen deliberately based on the value of the tension, such that we get the best results from our edge detection method. We note the values of both of these parameters when presenting our findings. For the SPT specific simulations, the capability of the edge detection method to make a significant detection of the cosmic string signal for different choices of the cosmic string tension is summarized in Table \[SPT\]. We find that our edge detection method can distinguish a signal arising from cosmic strings down to a tension of $G\mu=5\times10^{-8}$. 
Therefore, if the edge detection method was used on ideal data from the SPT, but was unable to distinguish a difference from a theoretical data set without the effect of cosmic strings, we could then impose a constraint on the cosmic string tension of $G\mu<5\times10^{-8}$.

Table \[SPT\]: Capability of the edge detection method to detect the cosmic string signal for the SPT-specification simulations, without (upper block) and with (lower block) instrumental noise.

| String Tension ($G\mu$) | Scaling Factor ($\alpha$) | Cutoff Threshold ($t_c$) | p-value |
|---|---|---|---|
| **Without instrumental noise** | | | |
| $6.0\times10^{-8}$ | 0.987 | 3.5 | $7.19\times10^{-12}$ |
| $5.5\times10^{-8}$ | 0.989 | 4.2 | $6.99\times10^{-4}$ |
| $5.0\times10^{-8}$ | 0.991 | 5.5 | $2.39\times10^{-3}$ |
| $4.5\times10^{-8}$ | 0.993 | 6.0 | $9.95\times10^{-3}$ |
| **With instrumental noise** | | | |
| $6.0\times10^{-8}$ | 0.987 | 3.5 | $2.92\times10^{-10}$ |
| $5.5\times10^{-8}$ | 0.989 | 4.2 | $1.45\times10^{-3}$ |
| $5.0\times10^{-8}$ | 0.991 | 5.5 | $1.36\times10^{-2}$ |
| $4.5\times10^{-8}$ | 0.993 | 6.0 | $1.96\times10^{-2}$ |

The above-mentioned results were determined from simulated maps which did not contain a component of instrumental noise. To examine the effect that detector noise will have on the ability of the edge detection method to constrain the cosmic string tension, we repeat the same process described above, with the same choices for all of the parameters, but this time with instrumental noise included in the simulation of the CMB maps. As mentioned earlier, we simulate a component of white noise with a given maximum temperature change. Here, we choose the maximum temperature change caused by the instrumental noise to be $\delta T_{N,max}=10\:\mu\mbox{K}$, roughly corresponding to that planned for the SPT [@Ruhl:2004kv]. Figure \[compare\] shows a side by side comparison of a simulated CMB map which includes instrumental noise and one which does not. The effect that the noise has on the map is clear, making it appear pixelated and non-Gaussian, yet the overall structure of the image is still visible since the temperature fluctuations caused by the noise are sub-dominant compared to the Gaussian fluctuations. For the SPT specific simulations including instrumental noise, the results of using the edge detection method to detect a cosmic string signal are also presented in Table \[SPT\]. We find that detector noise does not have a substantial effect, and it weakens the possible constraint that the edge detection method could place on the cosmic string tension only slightly, to $G\mu<5.5\times10^{-8}$.

Table \[bigSPT\]: The same comparison for the hypothetical survey covering five times the sky area of the SPT, without (upper block) and with (lower block) instrumental noise.

| String Tension ($G\mu$) | Scaling Factor ($\alpha$) | Cutoff Threshold ($t_c$) | p-value |
|---|---|---|---|
| **Without instrumental noise** | | | |
| $3.5\times10^{-8}$ | 0.995 | 8.0 | $1.95\times10^{-5}$ |
| $3.0\times10^{-8}$ | 0.997 | 8.8 | $8.16\times10^{-4}$ |
| $2.5\times10^{-8}$ | 0.998 | 9.6 | $7.87\times10^{-3}$ |
| **With instrumental noise** | | | |
| $3.5\times10^{-8}$ | 0.995 | 8.0 | $2.37\times10^{-5}$ |
| $3.0\times10^{-8}$ | 0.997 | 8.8 | $1.46\times10^{-3}$ |
| $2.5\times10^{-8}$ | 0.998 | 9.6 | $2.80\times10^{-1}$ |

Along with the results specific to the SPT, we explore how the constraint which could be applied by the edge detection method changes based on the specifications of the survey. For this purpose, we imagine a theoretical observatory which has the same specifications as the SPT but could map five times the amount of sky with the same resolution, that is, produce a 20,000 square degree survey of the anisotropies in the CMB. To replicate the output of a survey with this design, we instead simulate 200 separate $10^\circ\times10^\circ$ maps at $1^{\prime}$ resolution. In this hypothetical case we again choose the maximum temperature change caused by the instrumental noise to be $\delta T_{N,max}=10\:\mu\mbox{K}$. 
The analysis follows the same procedure as outlined above, and we keep the same values for all of the free parameters. For the larger survey size, the results of using the edge detection method to detect a cosmic string signal are summarized in Table \[bigSPT\]. By increasing the survey size from that of the SPT by a factor of five, while keeping all other specifications the same, one could potentially improve the constraint on the cosmic string tension from the ideal output of such an observatory to $G\mu<3.0\times10^{-8}$. When instrumental noise is included, we find that the effect on the edge detection method is in this case negligible and the possible bound remains the same as that found using simulations without instrumental noise. Discussion {#secdiscuss} ========== We have developed a method of searching for linear discontinuities in the microwave background temperature caused by the presence of cosmic strings along our line of sight to the surface of last scattering. The method which we have developed involves applying an edge detection algorithm to CMB temperature anisotropy maps in order to identify the effect of cosmic strings. We have applied our edge detection method to simulated CMB maps both including and excluding cosmic strings, to test its ability to discriminate between the two. This then translates directly into a possible constraint on the cosmic string tension. In particular, we have focused on two different sets of simulations, one which mimics the future output coming from the SPT and one which corresponds to a theoretical survey which covers five times as much sky as the SPT with the same angular resolution. We find that the edge detection method could potentially place a bound on the cosmic string tension of $G\mu<5\times10^{-8}$ for a perfect CMB observation from the SPT and that this could be lowered to $G\mu<3\times10^{-8}$ for the larger survey[^1]. For more realistic simulations which include instrumental noise, we find that the potential bound corresponding to the SPT weakens by only a small amount to $G\mu<5.5\times10^{-8}$ while the possible bound corresponding to the theoretical survey does not change at all, and is still $G\mu<3\times10^{-8}$. We consider the constraint corresponding to the SPT specific simulations which include a component of instrumental noise to be the main conclusion of this work. This possible bound is approximately an order of magnitude better than those arising from other methods which use CMB observations and approximately two orders of magnitude better than those arising from other methods which search for the KS-effect. Therefore, we believe that using the output from the SPT along with the edge detection method has the potential to greatly improve the constraint on the cosmic string tension. This bound is not tighter than the constraints arising from current pulsar timing data, although it is competitive, falling directly within the range of values reported by different observations. Nevertheless, as mentioned in the Overview, we consider our method of constraining the tension to be more robust since we make fewer assumptions about some of the unknown parameters which describe the cosmic string network and its evolution. Therefore, we believe that the possible bound on $G\mu$ given above would in fact represent a stronger constraint. We conclude that instrumental noise does not have a major effect on the ability of the edge detection method to identify the cosmic string signal. 
We believe that this is an indication that the thresholding with hysteresis performs as it should, since noisy pixels could destroy the edge signal by causing large fluctuations in the gradient magnitude. Furthermore, as one can see from Figure \[comparehist\], the largest difference between histograms occurs at short lengths rather than at longer lengths. The instrumental noise leaves this difference in the short edge signal between maps with and without strings relatively unchanged, since the probability of a particularly noisy pixel falling on a short edge, resulting in it being incorrectly detected by the Canny algorithm, is small compared to that for longer edges. While on the topic of instrumental noise, we reiterate that we have included only a simplified white noise component in our simulations. A more complex investigation of instrumental noise would include a low frequency piece which results in stripes appearing in the final map of the CMB. Based on the method described in this paper, it is clear that striping would be a crucial issue, since it would result in maps with more edges than predicted by the cosmological theory, and this could be confused with the effect of cosmic strings. One redeeming feature of this type of low frequency noise is that the stripes which are introduced would lie along the scanning direction; thus, when dealing with actual SPT data, it may be possible to subtract this effect out of the final map or to simply ignore edges lying along the known scanning direction in the edge detection algorithm itself. No other systematic effects due to the instrumental scanning strategy have been included in the current analysis, nor have errors due to foregrounds in the microwave sky. In future work it would be useful to investigate all types of noise as well as the removal strategies in more detail to determine if they could change the behaviour of the edge detection method. We also found that increasing the simulated survey size increases the statistical significance of the deviations between the histograms for similar values of the cosmic string tension. This behaviour is expected though, since more edge maps were used to compute the mean values in each of the histograms and one can see from Equation (\[t\]) that the value of $t$ scales as $\sqrt{n}$. While the p-values are smaller for similar tensions, the final constraint which can be levied by the larger survey is not drastically different from that corresponding to the SPT specific simulations. Increasing the survey size by 5 times only lowered the possible constraint by a factor of roughly $\sqrt{5}$. Based on this result, we conclude that the survey size does not have a major influence on the ability of the edge detection method to constrain the string tension. When generating the simulated CMB maps, we employed a toy model of the cosmic string network which includes only straight strings and no cosmic string loops. More detailed models of the network and its evolution have been developed in other works [@Albrecht:1989mk; @PhysRevLett.60.257; @Allen:1990tv; @Fraisse:2007nu] and can be implemented numerically. Therefore, one obvious way to improve the testing method we have outlined here would be to implement one of these more complex models which would in turn produce a more realistic map of the temperature anisotropies induced via the KS-effect. On the other hand, we stress that a change of this nature would come at a large computational expense. 
On a similar note, it may also be useful to develop a more robust method of combining the string induced temperature anisotropies with those coming from Gaussian fluctuations, to make sure that the final simulated map agrees with other observations. Furthermore, after applying the Canny edge detection algorithm to the CMB temperature anisotropy maps, we quantify the corresponding edge map by recording the length of every edge appearing in it. As mentioned in Section \[seccount\], this is one of the simplest ways of describing the edge map, and it may be beneficial to investigate an alternative method of image comparison which provides a more powerful way of discriminating between the two edge maps. For now, we leave these improvements as the goal of future work. While we have chosen to focus on the SPT in this work, the edge detection method is quite versatile and could be used with virtually any high resolution CMB survey. We conclude that this method presents a powerful and unique way of constraining the cosmic string tension which has the potential to perform better than current methods, or, at the very least, to provide a complementary technique to those already in use. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Gil Holder and Matt Dobbs for useful discussions concerning various parts of this work. Thank you to Joshua Berger and, especially, Stephen Amsel for making their code available and for answering many questions regarding the edge detection method. We also wish to extend a big thank you to Eric Thewalt for debugging some parts of the code and making many helpful suggestions. R.B. wishes to thank Rebecca Danos for useful discussions. R.B. is supported by an NSERC Discovery Grant and by funds from the Canada Research Chairs Program. [^1]: Note that the dependence of the limit as a function of angular resolution was studied in [@Amsel:2007ki]
--- abstract: 'Learning low-dimensional representations of networks has proved effective in a variety of tasks such as node classification, link prediction and network visualization. Existing methods can effectively encode different structural properties into the representations, such as neighborhood connectivity patterns, global structural role similarities and other high-order proximities. However, apart from the objectives used to capture network structural properties, most of them lack additional constraints for enhancing the robustness of the representations. In this paper, we aim to exploit the strengths of generative adversarial networks in capturing latent features, and investigate their contribution to learning stable and robust graph representations. Specifically, we propose an Adversarial Network Embedding (ANE) framework, which leverages the adversarial learning principle to regularize the representation learning. It consists of two components, i.e., a structure preserving component and an adversarial learning component. The former component aims to capture network structural properties, while the latter contributes to learning robust representations by matching the posterior distribution of the latent representations to given priors. As shown by the empirical results, our method is competitive with or superior to state-of-the-art approaches on benchmark network embedding tasks.' author: - | Quanyu Dai$^{1}$, Qiang Li$^{1,2}$, Jian Tang$^{3, 4}$, Dan Wang$^{1}$\ $^{1}$Department of Computing, The Hong Kong Polytechnic University, Hong Kong\ $^{2}$School of Software, FEIT, The University of Technology Sydney, Australia\ $^{3}$HEC Montreal, Canada\ $^{4}$Montreal Institute for Learning Algorithms, Canada\ [email protected], [email protected], [email protected], [email protected] bibliography: - 'reference.bib' title: Adversarial Network Embedding --- Introduction ============ A graph is a natural way of organizing data objects with complicated relationships, and it encodes rich information about the nodes in the graph. For example, paper citation networks capture the information of innovation flow, and can reflect topic relatedness between papers. An efficient and effective way to analyze graphs is to learn low-dimensional representations for their nodes, i.e., node embedding [@KDD-14-Bryan; @WWW-15-Jian; @CIKM-15-SsCao]. The learned representations should encode meaningful semantic, relational and structural information, so that they can be used as features for downstream tasks such as network visualization, link prediction and node classification. Network embedding is a challenging research problem because of the high-dimensionality, sparsity and non-linearity of the graph data. In recent years, many methods for network embedding have been proposed, such as DeepWalk [@KDD-14-Bryan], LINE [@WWW-15-Jian] and node2vec [@KDD-16-Grover]. They aim to capture various connectivity patterns in the network during representation learning. These patterns include relations of local neighborhood connectivity, first and second order proximities, global structural role similarities (i.e. structural equivalence), and other high-order proximities. As demonstrated in the literature, network embedding methods have been shown to be more effective in many network analysis tasks than some classical approaches, such as Common Neighbors [@JASIS-Liben-NowellK07] and Spectral Clustering [@DMKD-11-TangL]. 
Though existing methods are effective in structure preserving with different carefully designed objectives, they suffer from a lack of additional constraints for enhancing the robustness of the learned representations. When processing noisy network data, which is very common in real-world applications, these unsupervised network embedding techniques can easily result in poor representations. Thus, it is critical to consider some amount of uncertainty in the process of representation learning. One famous technique for robust representation learning in an unsupervised manner is the denoising autoencoder [@JMLR-VincentLLBM10]. It obtains stable and robust representations by recovering the clean input from the corrupted one, that is, the denoising criterion. In [@AAAI-16-SsCao], the authors applied this criterion to network embedding. Recently, many generative adversarial models [@ICLR-16-RadfordMC; @ICLR-16-MakhzaniSJG; @ICLR-17-Donahue; @ICLR-17-Vincent] have also been proposed for learning robust and reusable representations. They have been shown to be effective in learning representations for image [@ICLR-16-RadfordMC] and text data [@NIPS-workshop-16-Glover]. However, none of these models has been specifically designed for dealing with graph data. In this paper, we propose a novel approach called Adversarial Network Embedding (ANE) for learning robust network representations by leveraging the principle of adversarial learning [@NIPS-14-GoodfellowPMXWOCB]. In addition to optimizing the objective for preserving structure, a process of adversarial learning is introduced for modeling the data uncertainty. Figure \[karate\] presents an illustrative example of the effect of adversarial learning on the well-known Zachary’s Karate network. By comparing the representations of the two schemes, without and with adversarial learning regularization, it is easy to see that the latter obtains more meaningful and robust representations. More specifically, ANE naturally combines a structure preserving component and an adversarial learning component in a unified framework. The former component can help capture network structural properties, while the latter contributes to the learning of more robust representations through adversarial training with samples from some prior distribution. For structure preserving, we propose an inductive variant of DeepWalk that is suitable for our ANE framework. It maintains random walks for exploring node neighborhoods and optimizes a similar objective function, but employs a parameterized function to generate the embedding vectors. Besides, the adversarial learning component consists of two parts, i.e., a generator and a discriminator. It acts as a regularizer for learning a stable and robust feature extractor, which is achieved by imposing a prior distribution on the embedding vectors through adversarial training. To the best of our knowledge, this is the first work to design a network embedding model with the adversarial learning principle. We empirically evaluate the proposed ANE approach through network visualization and node classification on benchmark datasets. The qualitative and quantitative results demonstrate the effectiveness of our method. Related Work ============ Network Embedding Methods ------------------------- In recent years, many unsupervised network embedding methods have been proposed, which can be divided into three groups according to the techniques they use, i.e., probabilistic methods, matrix factorization based methods and autoencoder based methods. 
The probabilistic methods include DeepWalk [@KDD-14-Bryan], LINE [@WWW-15-Jian], node2vec [@KDD-16-Grover] and so on. DeepWalk firstly obtains node sequences from the original graph through random walk, and then learns the latent representations using Skip-gram model [@NIPS-13-Tomas] by regarding node sequences as word sentences. LINE tries to preserve first-order and second-order proximities in two separate objective functions, and then directly concatenates the representations. In [@KDD-16-Grover], the authors proposed to use biased random walk to determine neighboring structure, which can strike a balance between homophily and structural equivalence. It is actually a variant of DeepWalk. Matrix factorization based methods first preprocess the adjacency matrix to capture different kinds of high-order proximities and then decompose the processed matrix to obtain graph embeddings. For example, GraRep [@CIKM-15-SsCao] employs positive pointwise mutual information (PPMI) matrix as the preprocessing based on a proof of the equivalence between a $k$-step random walk in DeepWalk and a $k$-step probability transition matrix. HOPE [@KDD-16-Mingdong] preprocesses the adjacency matrix of the directed graph with high-order proximity measurements, such as Katz Index [@Katz-Index-1953], which can help capture asymmetric transitivity property. M-NMF [@AAAI-17-XiaoW] learns embeddings that can well capture community structure by building upon the modularity based community detection model [@PhysRevE-06-Newman]. Autoencoder is a widely used model for learning compact representations of high-dimensional data, which aims to preserve as much information in the latent space as possible for the reconstruction of the original data [@Sci-06-Hinton]. DNGR [@AAAI-16-SsCao] firstly calculates the PPMI matrix, and then learns the representations through stacked denosing autoencoder. SDNE [@KDD-16-DxW] is a variant of stacked autoencoder which adds a constraint in the loss function to force the connected nodes to have similar embedding vectors. In [@NIPS-16-Kipf], the authors proposed a variational graph autoencoder (VGAE) by using a graph convolutional network [@ICLR-16-KipfW] encoder for capturing network structural properties. Compared to variational autoencoder (VAE) [@ICLR-13-KingmaW], our ANE approach explicitly regularizes the posterior distribution of the latent space while VAE only assumes a prior distribution. Generative Adversarial Networks ------------------------------- Generative Adversarial Networks (GANs) [@NIPS-14-GoodfellowPMXWOCB] are deep generative models, of which the framework consists of two components, i.e., a generator and a discriminator. GANs can be formulated as a minimax adversarial game, where the generator aims to map data samples from some prior distribution to data space, while the discriminator tries to tell fake samples from real data. This framework is not directly suitable for unsupervised representation learning, due to the lack of explicit structure for inference. There are three possible solutions for this problem as demonstrated by existing works. Firstly, some works managed to integrate some structures into the framework to do inference, i.e., projecting sample in data space back into the space of latent features, such as BiGAN [@ICLR-17-Donahue], ALI [@ICLR-17-Vincent] and EBGAN [@ICLR-17-ZhaoML]. These methods can learn robust representations in many applications, such as image classification [@ICLR-17-Donahue] and document retrieval [@NIPS-workshop-16-Glover]. 
The second approach is to generate representations from the hidden layer of the discriminator, like DCGANs [@ICLR-16-RadfordMC]. By employing fractionally-strided convolutional layers, DCGANs can learn expressive image representations from both the generator and discriminator networks for supervised tasks. The third idea is to use adversarial learning process to regularize the representations. One successful practice is the Adversarial Autoencoders [@ICLR-16-MakhzaniSJG], which can learn powerful representations from unlabeled data without any supervision. Adversarial Network Embedding ============================= In this section, we will first introduce the problem definition and notations to be used. Then, we will present an overview of the proposed adversarial network embedding framework, followed by detailed descriptions of each component. Problem Definition and Notations -------------------------------- Network embedding is aimed at learning meaningful representations for nodes in information network. An information network can be denoted as $\mathcal{G}=(V, E, A)$, where $V$ is the node set, $E$ is a set of edges with each representing the relationship between a pair of nodes, and $A$ is a weighted adjacency matrix with its entries quantifying the strength of the corresponding relations. Particularly, the value of each entry in $A$ is either 0 or 1 in an unweighted graph specifying whether an edge exists between two nodes. Given an information network $\mathcal{G}$, network embedding is doing a mapping from nodes $v_i \in V$ to low-dimensional vectors $\boldsymbol{u_i} \in R^d$ with the formal format as follows: $f: V \mapsto U$, where $\boldsymbol{u_i}^T$ is the $i$th row of $U$ ($U \in R^{N \times d}$, $N=|V|$) and $d$ is the dimension of representations. We call $U$ representation matrix. These representations should encode structural information of networks. An Overview of the Framework ---------------------------- In this work, we leverage adversarial learning principle to help learn stable and robust representations. Figure \[ANE-Framework\] shows the proposed framework of *Adversarial Network Embedding* (ANE), which mainly consists of two components, i.e., a structure preserving component and an adversarial learning component. Specifically, the structure preserving component is dedicated to encoding network structural information into the representations. These information include the local neighborhood connectivity patterns, global structural role similarities, and other high-order proximities. There are many possible alternatives for the implementation of this component. Actually, existing methods [@KDD-14-Bryan; @WWW-15-Jian; @CIKM-15-SsCao] can be considered as structure preserving models, but without any constraints to help enhance the robustness of the representations. In this paper, we propose an inductive DeepWalk for structure preserving. It maintains random walk for exploring neighborhoods of nodes and optimizes similar objective function, but employs parameterized function $G(\cdot)$ to generate embedding vectors. In the training process, parameters of $G(\cdot)$ are directly updated instead of the embedding vectors. Besides, the adversarial learning component consists of two parts, i.e., a generator $G(\cdot)$ and a discriminator $D(\cdot)$. It is acting as a regularizer for learning stable and robust feature extractor, which is achieved by imposing a prior distribution on the embedding vectors through adversarial training. 
It needs to emphasize that the parameterized function $G(\cdot)$ is shared by both the structure preserving component and the adversarial learning component. These two components will update the parameters of $G(\cdot)$ alternatively in the training process. ![Adversarial Network Embedding Framework[]{data-label="ANE-Framework"}](figures/ANE-Framework.pdf){width="0.95\columnwidth"} Graph Preprocessing ------------------- In real-world applications, information network is usually extremely sparse, which may result in serious over-fitting problem when training deep models. To help alleviate the sparsity problem, one commonly used method is to preprocess the adjacency matrix with high-order proximities [@WWW-15-Jian; @CIKM-15-SsCao]. In this paper, we employ the shifted PPMI matrix $X$ [@NIPS-14-LevyG] as input features for the generator[^1], which is defined as $$X_{ij} = \max\{\log(\frac{M_{ij}}{\sum_{k}M_{kj}})-\log(\beta), 0\},$$ where $M=\hat A+\hat A^2+\cdots +\hat A^t$ can capture different high-order proximities, $\hat{A}$ is the $1$-step probability transition matrix obtained from the weighted adjacency matrix $A$ after a row-wise normalization, and $\beta$ is set to $\frac{1}{N}$ in this paper. Row vector $\boldsymbol{x_i}^T$ in $X$ is the feature vector characterizing the context information of node $v_i$ in the graph $\mathcal{G}$, but with high-dimension. Structure Preserving Model -------------------------- Ideally, existing unsupervised network embedding methods can be utilized as structure preserving component in our framework for encoding node dependencies into representations. However, many of them are transductive methods with an embedding lookup as embedding generator such as DeepWalk and LINE, which are not directly suitable for the generator of the adversarial learning component since we utilize parameterized generator as standard GANs. With parameterized generator, our framework can well deal with networks with node attributes and explore nonlinear properties of network with deep learning models. In this work, we design an inductive variant of DeepWalk that is applicable for both weighted and unweighted graphs. Theoretically, it can also generalize to unseen nodes for networks with node attributes as some inductive methods do [@ICML-16-YangCS; @NIPS-17-HamiltonYL], but we do not explore it in this paper. Besides, we also investigate to use denoising autoencoder [@JMLR-VincentLLBM10] as the structure preserving component. ### Inductive DeepWalk (IDW) The IDW model uses random walk to sample node sequences as that in DeepWalk. Starting from each node $v_i$, $\eta$ sequences are randomly sampled with the length as $l$. In every step, a new node is randomly selected from the neighbors of the current node with the probability proportional to the corresponding weight in matrix $A$. To improve efficiency, the alias table method [@KDD-LiARS14] is employed to sample node from the candidate node set in every sampling step. It only takes $O(1)$ time in a single sampling step. Then, positive node pairs can be constructed from node sequences. For every node sequence $\mathcal{W}$, we determine the positive target-context pairs as the set $\{(w_i,w_j): |i-j|<s\}$, where $w_i$ is the $i$th node in sequence $\mathcal{W}$ and $s$ denotes the context size. 
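As a minimal illustration of the preprocessing and sampling steps just described, the following NumPy sketch builds the shifted PPMI features and collects positive target-context pairs from weighted random walks. It is a simplified sketch rather than the reference implementation: it assumes a dense adjacency matrix, the function names are ours, the default parameters mirror the settings reported later ($\eta=10$, $l=80$, $s=10$), and the alias-table sampler of [@KDD-LiARS14] is replaced by `numpy.random.choice` for brevity.

```python
import numpy as np

def shifted_ppmi(A, t=4):
    """Shifted PPMI features X: M = A_hat + ... + A_hat^t, beta = 1/N (formula above)."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    A_hat = A / A.sum(axis=1, keepdims=True)          # row-wise normalized 1-step transition matrix
    M, P = np.zeros_like(A_hat), np.eye(N)
    for _ in range(t):                                 # accumulate high-order proximities
        P = P @ A_hat
        M += P
    with np.errstate(divide='ignore', invalid='ignore'):
        X = np.log(M / M.sum(axis=0, keepdims=True)) - np.log(1.0 / N)
    X[~np.isfinite(X)] = 0.0                           # log(0) entries vanish after the max
    return np.maximum(X, 0.0)

def positive_pairs(A, walks_per_node=10, walk_length=80, context_size=10, rng=None):
    """Random-walk sampling of positive target-context pairs {(w_i, w_j): |i - j| < s}."""
    rng = rng or np.random.default_rng()
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    pairs = []
    for start in range(N):
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_length - 1):
                w = A[node]
                node = rng.choice(N, p=w / w.sum())    # neighbour chosen proportionally to its weight
                walk.append(node)
            for i, wi in enumerate(walk):              # pairs inside the context window
                for j in range(max(0, i - context_size + 1), min(len(walk), i + context_size)):
                    if j != i:
                        pairs.append((wi, walk[j]))
    return pairs
```

The pairs produced this way are the input to the objective defined below.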
Similar to Skip-gram [@NIPS-13-Tomas], a node $v_i$ has two different representations, i.e., a target representation $\boldsymbol{u_i}$ and a context representation $\boldsymbol{u^{\prime}_i}$, which are generated by the target generator $G(\cdot; \boldsymbol{\theta_1})$ and context generator $F(\cdot;\boldsymbol{\theta^{\prime}_1})$, respectively. The generators are parameterized functions which are implemented with neural networks in this work. Given row vector $\boldsymbol{x_i}^T$ in $X$ corresponding to node $v_i$, we have $\boldsymbol{u_i} = G(\boldsymbol{x_i};\boldsymbol{\theta_1})$ and $\boldsymbol{u^{\prime}_i} = F(\boldsymbol{x_i};\boldsymbol{\theta^{\prime}_1})$. To capture network structural properties, we define the following objective function for each positive target-context pair $(v_i, v_j)$ with negative sampling approach: $$\label{IDW-Loss} \begin{array}{ll} \mathcal{O}_{IDW}(\boldsymbol{\theta_1};\boldsymbol{\theta^{\prime}_1}) = \log \sigma(F(\boldsymbol{x_j};\boldsymbol{\theta^{\prime}_1})^{T} G(\boldsymbol{x_i};\boldsymbol{\theta_1})) + \\ \sum_{n=1}^{K}\mathbb{E}_{v_n\sim P_n(v)}[\log \sigma(-F(\boldsymbol{x_n};\boldsymbol{\theta^{\prime}_1})^{T} G(\boldsymbol{x_i};\boldsymbol{\theta_1}))], \end{array}$$ where $\sigma(x)=1/(1+exp(-x))$ is the sigmoid function, $K$ is the number of negative samples for each positive pair, $P_n(v)$ is the noise distribution for sampling negative context nodes, $\boldsymbol{\theta_1}$ and $\boldsymbol{\theta^{\prime}_1}$ are parameters to be learnt. As suggested in [@NIPS-13-Tomas], $P_n(v) = d_v^{3/4}/\sum_{v_i\in V}d_{v_i}^{3/4}$ can achieve quite good performance in practice, where $d_v$ is the degree of node $v$. Adversarial Learning -------------------- The adversarial learning component is employed to regularize the representations. It consists of a generator $G(\cdot;\boldsymbol{\theta_1})$ and a discriminator $D(\cdot;\boldsymbol{\theta_2})$. Specifically, $G(\cdot;\boldsymbol{\theta_1})$ represents a non-linear transformation of input high-dimensional features to embedding vectors. $D(\cdot;\boldsymbol{\theta_2})$ represents the probability of a sample coming from real data. The generator function is shared with the structure preserving component. Different from GANs [@NIPS-14-GoodfellowPMXWOCB], in our framework, a prior distribution $p(\boldsymbol{z})$ is selected as the data distribution for generating real data, while the embedding vectors are regarded as fake samples. In the training process, the discriminator is trained to tell apart the prior samples from the embedding vectors, while the generator is aimed to fit embedding vectors to the prior distribution. This process can be considered as a two-player minimax game with the generator and discriminator playing against each other. The utility function of the discriminator is: $$\begin{aligned} \label{ANE-Discriminator} \begin{split} \mathcal{O}_D(\boldsymbol{\theta_2})=\;&\mathbb{E}_{\boldsymbol{z}\sim p(\boldsymbol{z})}[\log D(\boldsymbol{z};\boldsymbol{\theta_2})] + \\ &\mathbb{E}_{\boldsymbol{x}}[\log(1-D(G(\boldsymbol{x};\boldsymbol{\theta_1});\boldsymbol{\theta_2}))]. 
\end{split} \end{aligned}$$ In order to camouflage its output as prior samples, the generator is trained to improve the following payoff: $$\label{ANE-Generator} \mathcal{O}_G(\boldsymbol{\theta_1})=\mathbb{E}_{\boldsymbol{x}}[\log(D(G(\boldsymbol{x};\boldsymbol{\theta_1});\boldsymbol{\theta_2}))].$$ We argue that the adversarial learning component can help improve the learned representations in terms of robustness and structural meanings. We instantiate our framework with two structure preserving models, i.e., inductive DeepWalk (IDW) and denoising autoencoder (DAE). We call the ANE framework with IDW as Adversarial Inductive DeepWalk (AIDW) for easy illustration. Actually, with DAE as the structure preserving component, the ANE framework will become an adversarial autoencoder [@ICLR-16-MakhzaniSJG], and we represent it as ADAE to highlight the importance of the denoising criterion in learning representations. During adversarial learning, it is also important to choose a proper prior distribution. Like many practices in GANs research [@ICLR-16-RadfordMC; @ICLR-16-MakhzaniSJG; @ICLR-17-Donahue], the prior distribution is usually defined as Uniform or Gaussian noise which enables GANs to learn meaningful and robust representations against uncertainty. In our experiments, we also considered the ANE framework with both kinds of prior distributions but find no significant difference. One possible reason is both kinds of uncertainty can help the ANE framework to achieve a certain level of robustness against noise. It is likely that a careful choice of prior distribution, possibly guided by prior domain knowledge, may further improve application-specific performance. Algorithm --------- To implement the ANE approach, we consider a joint training procedure with two phases, including a structure preserving phase and an adversarial learning phase. In the structure preserving phase, we optimize objective function (\[IDW-Loss\]) for AIDW. In the adversarial learning phase, a prior distribution is imposed on representations through a minimax optimization problem. Firstly, the discriminator is trained to distinguish between prior samples and embedding vectors. Then, the parameters of the generator are updated to fit the embedding vectors to prior space to fool the discriminator. Besides, some tricks proposed in [@ICML-ArjovskyCB17] can be employed to help improve the stability of learning and avoid the mode collapse problem in traditional GANs training. Experiments =========== Experiment Settings ------------------- ### Datasets We conduct experiments on four real-world datasets with the statistics presented in Table \[tab-dataset\], where $\mathcal{C}$ denotes the label set. Cora and Citeseer are paper citation networks constructed by [@Retr-00-McCallumNRS]. Wiki [@AI-M-08-Sen] is a network with nodes as web pages and edges as the hyperlinks between web pages. We regard these three networks as undirected networks, and do some preprocessing on the original datasets by deleting self-loops and nodes with zero degree. Cit-DBLP is a paper citation network extracted from DBLP dataset [@KDD-08-TangZYLZS]. 
---------- -------------- -------------- ----------------------- Dataset $\mid V\mid$ $\mid E\mid$ $\mid\mathcal{C}\mid$ Cora 2,708 5,278 7 Citeseer 3,264 4,551 6 Wiki 2,363 11,596 17 Cit-DBLP 5,318 28,085 3 ---------- -------------- -------------- ----------------------- : Statistics of datasets \[tab-dataset\] ### Baselines We compare our model with several baseline methods, including DeepWalk, LINE, GraRep and node2vec. There are many other network embedding methods, but we do not consider them here, because their performances are inferior to these baseline models as shown in corresponding papers. The descriptions of the baselines are as follows. - **DeepWalk** [@KDD-14-Bryan]: DeepWalk first transforms the network into node sequences by truncated random walk, and then uses it as input to the Skip-gram model to learn representations. - **LINE** [@WWW-15-Jian]: LINE can preserve both first-order and second-order proximities for undirected graph through modeling node co-occurrence probability and node conditional probability. - **GraRep** [@CIKM-15-SsCao]: GraRep preserves node proximities by constructing different $k$-step probability transition matrices. - **node2vec** [@KDD-16-Grover]: node2vec develops a biased random walk procedure to explore neighborhood of a node, which can strike a balance between local properties and global properties of a network. Besides, we consider inductive DeepWalk and denoising autoencoder as another two baseline methods. Note that both of them employ shifted PPMI matrix as preprocessing. ### Parameter Settings For LINE, we follow the settings of parameters in [@WWW-15-Jian]. The embedding vectors are normalized by L2-norm. Besides, we specially preprocess the original sparse networks by adding two-hop neighbors to low degree nodes. For both DeepWalk and node2vec, the window size $s$, the walk length $l$ and the number of walks $\eta$ per node are set to 10, 80 and 10, respectively, for fair comparison. For GraRep, the maximum matrix transition step is set to 4, and the settings of other parameters follow those in [@CIKM-15-SsCao]. Note that the dimension of representations for all methods are set to 128 for fair comparison. For our methods, we only use the most simple structure for the generator. Specifically, the generator is a single-layer network with leaky ReLU activations (with a leak of 0.2) and batch normalization [@ICML-IoffeS15] on the output. The shifted PPMI matrix $X$ is obtained by setting $t$ as 4 for Cora and Citeseer, and 3 for Wiki. For inductive DeepWalk, the number of negative samples $K$ is set to 5, and other parameters are set the same as DeepWalk. For denoising autoencoder, it has only one hidden layer with dimension as 128. For the discriminator of the framework, it is a three-layer neural networks, with the layer structure as 512-512-1. For the first two layers, we use leaky ReLU activations (with leak of 0.2) and batch normalization. For the output layer, we use sigmoid activation. For AIDW and ADAE, the settings of the structure preserving component are the same as those of IDW and DAE, respectively. The prior distribution of the adversarial learning component is set to $z_i\sim U[-1,1]$. We use RMSProp optimizer with learning rate as 0.001. 
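Putting the above settings together, a compact PyTorch sketch of AIDW could look as follows. This is an illustrative reconstruction rather than the authors' code: the names (`G`, `Fctx`, `train_step`, ...) and the mini-batch conventions are ours, the losses are the negatives of the objectives in Eqs. (\[IDW-Loss\]), (\[ANE-Discriminator\]) and (\[ANE-Generator\]) so that minimizing them maximizes the stated payoffs, and the ordering of batch normalization and activation inside each layer is a guess.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, input_dim = 128, 2708          # embedding size; input_dim = N (rows of the shifted PPMI matrix X), e.g. Cora

# Target generator G and context generator Fctx (the paper's F): a single layer with
# leaky ReLU (leak 0.2) and batch normalization, as described above.
def make_generator():
    return nn.Sequential(nn.Linear(input_dim, d), nn.BatchNorm1d(d), nn.LeakyReLU(0.2))

G, Fctx = make_generator(), make_generator()

# Discriminator D: 512-512-1, leaky ReLU and batch norm on the first two layers, sigmoid output.
D = nn.Sequential(
    nn.Linear(d, 512), nn.BatchNorm1d(512), nn.LeakyReLU(0.2),
    nn.Linear(512, 512), nn.BatchNorm1d(512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_G = torch.optim.RMSprop(list(G.parameters()) + list(Fctx.parameters()), lr=1e-3)
opt_D = torch.optim.RMSprop(D.parameters(), lr=1e-3)

def idw_loss(x_i, x_j, x_neg):
    """Negative of Eq. (1) for a batch.
    x_i, x_j: (B, input_dim) target / positive-context features; x_neg: (B, K, input_dim)."""
    u_i = G(x_i)                                              # target embeddings
    pos = F.logsigmoid((Fctx(x_j) * u_i).sum(-1))             # log sigma(F(x_j)^T G(x_i))
    B, K, _ = x_neg.shape
    u_neg = Fctx(x_neg.reshape(B * K, -1)).view(B, K, d)      # context embeddings of negatives
    neg = F.logsigmoid(-(u_neg * u_i.unsqueeze(1)).sum(-1)).sum(-1)
    return -(pos + neg).mean()

def train_step(x_i, x_j, x_neg, x_batch):
    # structure preserving phase: update G (and Fctx) with the IDW objective
    opt_G.zero_grad(); idw_loss(x_i, x_j, x_neg).backward(); opt_G.step()
    # adversarial phase, discriminator: prior samples are "real", embeddings are "fake"
    z = 2.0 * torch.rand(x_batch.size(0), d) - 1.0            # prior z ~ U[-1, 1]^d
    opt_D.zero_grad()
    d_loss = -(torch.log(D(z) + 1e-8).mean()
               + torch.log(1.0 - D(G(x_batch).detach()) + 1e-8).mean())   # -O_D, Eq. (2)
    d_loss.backward(); opt_D.step()
    # adversarial phase, generator: make the embeddings look like prior samples
    opt_G.zero_grad()
    (-torch.log(D(G(x_batch)) + 1e-8).mean()).backward()                  # -O_G, Eq. (3)
    opt_G.step()
```

Within one epoch, the structure preserving phase would loop over the positive pairs produced by the random-walk sampler, and the adversarial phase over mini-batches of rows of the shifted PPMI matrix $X$.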
Network Visualization --------------------- ---------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- %Labeled Nodes 10% 20% 30% 40% 50% 60% 70% 80% 90% DeepWalk 71.43 73.83 75.61 76.92 77.79 77.78 78.47 79.17 79.04 LINE 71.26 74.50 76.04 76.81 77.68 77.99 78.46 79.00 79.11 GraRep 74.78 76.78 78.56 78.99 79.39 79.85 79.96 80.94 81.29 node2vec 75.06 78.49 80.06 80.94 81.52 82.07 82.39 83.28 83.17 DAE 75.21 78.07 79.39 80.51 80.91 81.41 82.36 83.23 83.32 ADAE 75.01 77.45 79.65 80.96 81.64 82.11 82.62 83.52 84.10 IDW 66.32 72.21 75.23 76.65 77.66 78.32 79.20 79.93 80.63 AIDW **76.93** **79.50** **81.31** **82.01** **82.28** **83.03** **83.23** **84.46** **84.21** ---------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- \[tab-cora\] ---------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- %Labeled Nodes 10% 20% 30% 40% 50% 60% 70% 80% 90% DeepWalk 49.45 52.68 54.60 55.71 56.44 57.04 57.42 58.04 59.11 LINE 47.70 51.04 52.95 54.19 55.00 55.82 56.02 57.08 57.52 GraRep 51.62 53.29 53.59 53.83 54.55 54.62 54.97 54.90 56.27 node2vec 52.50 55.47 56.66 57.70 58.81 59.26 60.10 60.34 60.58 DAE 51.08 54.54 55.84 56.50 57.47 58.30 58.50 59.19 60.15 ADAE 52.40 55.58 56.64 57.32 58.34 59.60 60.27 60.63 61.31 IDW 45.45 50.47 52.32 53.38 54.75 55.18 55.98 56.23 57.19 AIDW **53.25** **56.76** **57.95** **59.06** **59.45** **59.95** **60.28** **60.87** **62.26** ---------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- \[tab-citeseer\] ---------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- %Labeled Nodes 10% 20% 30% 40% 50% 60% 70% 80% 90% DeepWalk 57.23 61.22 63.53 64.68 65.96 66.24 67.17 68.63 68.69 LINE 56.29 61.63 63.98 65.43 66.25 67.04 67.94 68.86 68.61 GraRep **57.68** 61.14 62.73 63.89 64.86 65.55 66.01 67.55 68.02 node2vec 57.61 61.52 63.47 64.83 65.54 66.16 67.17 68.44 68.69 DAE 57.08 61.63 63.71 65.20 66.84 67.41 67.91 69.03 69.45 ADAE 57.24 61.67 63.85 65.34 66.67 67.11 67.79 69.68 70.59 IDW 56.01 60.77 63.08 64.37 65.66 66.47 67.15 67.86 68.52 AIDW 57.43 **62.14** **64.18** **65.53** **67.07** **68.00** **69.44** **71.63** **72.03** ---------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- \[tab-wiki\] Network visualization is an indispensable way to analyze high-dimensional graph data, which can help reveal intrinsic structure of the data intuitively [@WWW-TangLZM16]. In this section, we visualize the representations of nodes generated by several different models using *t-SNE* [@JMLR-08-Maaten]. We construct a paper citation network, namely Cit-DBLP, from DBLP with papers from three different publication divisions, including Information Sciences, ACM Transactions on Graphics and Human-Computer Interaction. Some statistics of this dataset have been presented in Table \[tab-dataset\]. These papers are naturally classified into three categories based on the research fields they belong to. Figure \[visualization\] shows the visualization of embedding vectors obtained from different models using *t-SNE* tookit under the same parameter configuration. For both DeepWalk and LINE, papers from different categories are mixed with each other in the center of the figure. 
For LINE, there are 6 clusters with each category corresponding to two separate clusters, which is in conflict with the true structure of the network. Besides, the boundaries between different clusters are not clear. The visualizations of node2vec and IDW form three main clusters, which are better than those of DeepWalk and LINE. However, the boundary between blue cluster and green cluster for node2vec is not clear, while that of red cluster and green cluster is a little messy for IDW. AIDW performs better compared with baseline methods. We can observe that the visualization of AIDW has three clusters with quite large margin between each other. Furthermore, each cluster is linearly separable with another cluster in the figure, which can not be achieved by other baselines as showed by the figures. Intuitively, this experiment demonstrates that adversarial learning regularization can help learn more meaningful and robust representations. Node Classification ------------------- The label information can indicate interests, beliefs or other characteristics of nodes, which can help facilitate many applications, such as friend recommendation in online social networks and targeted advertising. However, in many real-world contexts, only a subset of nodes are labeled. Thus, node classification can be conducted to dig out information of unlabeled nodes. In this section, we conduct multi-class classification on three benchmark datasets, i.e., Cora, Citeseer and Wiki. We range the training ratio from 10% to 90% for comprehensive evaluation. All experiments are carried out with support vector classifier in Liblinear package[^2] [@JRML-08-FanCHWL]. **Results and discussion.** To ensure the reliability, we obtain the experimental results by taking an average of that of 10 runs, which are shown in Tables \[tab-cora\], \[tab-citeseer\] and \[tab-wiki\]. We have the following observations: - IDW produces similar results on Cora and Wiki with DeepWalk, and slightly inferior performance on Citeseer. The proposed model AIDW is built upon IDW with additional adversarial learning component. It consistently outperforms both IDW and DeepWalk on three datasets across all training ratios. For example, on Cora, AIDW gives more than 4% gain in accuracy over DeepWalk under all training ratio settings. It demonstrates that adversarial learning regularization can significantly improve the robustness and discrimination of the learned representations. The quantitative results also verify our previous qualitative findings in network visualization analysis. - ADAE achieves about 1% gain in accuracy over DAE on Citeseer when varying the training ratio from 10% to 90%, slightly better results on Cora, and comparable performance on Wiki. It shows that ANE framework can also guide the learning of more robust embeddings when building upon DAE. However, we notice that ADAE does not achieve obvious improvements over the corresponding structure preserving model as AIDW does. One reason is that denoising criterion already contributes to learning stable and robust representations [@JMLR-VincentLLBM10]. - Overall, the proposed method AIDW consistently outperforms all the baselines. As shown in Tables \[tab-cora\], \[tab-citeseer\] and \[tab-wiki\], node2vec produces better results than DeepWalk, LINE and GraRep on average. Our method can further achieve improvements over node2vec. 
More specifically, AIDW achieves the best classification accuracy on all three benchmark datasets across different training ratios, with only one exception on Wiki with training ratio as 10%. Model Sensitivity ----------------- In this section, we investigate the performance of AIDW w.r.t parameters and the type of prior on Cora dataset. Specifically, for parameter sensitivity analysis, we examine how the representation dimension $d$, walk-length $l$ and context-size $s$ affect the performance of node classification with the training ratio as 50%. Note that except for the parameter being tested, all other parameters are set to default values. We also compare the performance of AIDW with two different priors, i.e., a Gaussian distribution ($\mathcal{N}(0,1)$) and a Uniform distribution ($U[-1, 1]$). ![Multi-class classification on Cora with two different priors, i.e., Uniform and Gaussian, on AIDW model.[]{data-label="fig:prior"}](figures/prior-Uniform-Gaussian.pdf){width="0.85\columnwidth"} Figure \[parameter-sensitivity\](a) displays the results on the test of dimension $d$. When the dimension increases from 8 to 512, the accuracy shows apparent increase at first, and then tends to saturate once the dimension reaches around 128. Besides, the performance of AIDW is not sensitive on walk-length and context-size. As shown in Figure \[parameter-sensitivity\](b), the accuracy slightly increases first, and then becomes stable when the walk-length varies from 40 to 100. With the increase of context-size, the performance keeps stable first, and then slightly degrades after the context-size is over 6, as shown in Figure \[parameter-sensitivity\](c). The degradation might be caused by the noisy neighborhood information brought in by the large context-size, since the average node degree of Cora is just about 1.95. Figure \[fig:prior\] shows the results of multi-class classification on Cora with training ratio ranging from 10% to 90%. The accuracy curve of AIDW with uniform prior is almost coincided with that of AIDW with gaussian prior. It demonstrates that both types of prior can contribute to learning robust representations with no significant difference. Conclusion ========== An adversarial network embedding framework has been proposed for learning robust graph representations. This framework consists of a structure preserving component and an adversarial learning component. For structure preserving, we proposed inductive DeepWalk to capture network structural properties. For adversarial learning, we formulated a minimax optimization problem to impose a prior distribution on representations to enhance the robustness. Empirical evaluations in network visualization and node classification confirmed the effectiveness of the proposed method. Acknowledgments =============== The authors would like to thank Dr. Liang Zhang of Data Science Lab at JD.com and Prof. Xiaoming Wu of The Hong Kong Polytechnic University for their valuable discussion. Dan Wang’s work is supported in part by HK PolyU G-YBAG. [^1]: Note that we can use other ways to preprocess raw graph data to obtain the input feature X with lower dimension for large graphs. One simple way is to directly use existing scalable methods, e.g. DeepWalk and LINE, to obtain initial embeddings X as input. [^2]: https://www.csie.ntu.edu.tw/ cjlin/liblinear/
--- abstract: 'We show that the sign of magnetic anisotropy energy in quantum Hall ferromagnets is determined by a competition between electrostatic and exchange energies. Easy-axis ferromagnets tend to occur when Landau levels whose states have similar spatial profiles cross. We report measurements of integer QHE evolution with magnetic-field tilt. Reentrant behavior observed for the $\nu = 4$ QHE at high tilt angles is attributed to easy-axis anisotropy. This interpretation is supported by a detailed calculation of the magnetic anisotropy energy.' address: - '$^{1}$Department of Physics, Indiana University, Bloomington IN 47405' - '$^{2}$Institute of Physics ASCR, Cukrovarnická 10, 162 00 Praha 6, Czech Republic' - '$^{3}$Department of Electrical Engineering, Princeton University, Princeton NJ 08544' author: - 'T. Jungwirth$^{1,2}$, S.P. Shukla$^{3}$, L. Smrčka$^2$, M. Shayegan$^{3}$, and A.H. MacDonald$^{1}$' title: Magnetic Anisotropy in Quantum Hall Ferromagnets --- In the quantum Hall effect (QHE) regime, two-dimensional electron systems (2DES) can have ferromagnetic ground states in which electronic spins are completely aligned by an arbitrarily weak Zeeman coupling[@dassarmabook]. However, spin-independence of the Coulombic electron-electron interaction leads to isotropic Heisenberg ferromagnetism, and therefore to loss of ferromagnetic order at any finite temperature[@2dreview]. Richer physics occurs when the two Landau levels that are nearly degenerate differ by more than a spin index. For example, double-layer QHE systems can be regarded as easy-plane (XY) two-dimensional ferromagnets[@ahmgregphil] and exhibit a variety of effects which have received considerable experimental[@dlexpt] and theoretical[@dltheory] attention in recent years. Idealized single-layer QHE systems have a phase transition[@giulianiquinn] in tilted magnetic fields between unpolarized and spin-polarized states, and as we show below, can be regarded as easy-axis (Ising) ferromagnets. In this Letter we report experimental data for a 43 nm wide unbalanced GaAs quantum well in which a loss of the QHE at $\nu = 4$ is observed over a finite range of magnetic-field tilt-angles. We derive a general expression for the magnetic anisotropy energy and propose that its sign is responsible for this observation. We show that in realistic quantum wells either easy-axis or easy-plane anisotropy can occur, depending on spatial profiles of the orbitals of the crossing Landau levels. We discuss the anisotropy energy first from a general point of view, then specialize to two illustrative idealized examples before presenting realistic results for the quantum well of interest. In a strong magnetic field, the single-particle states of a 2DES are grouped into Landau levels with orbital degeneracy $N_{\phi} = A B / \Phi_0$, where $A$ is the system area, $B$ is the field strength, and $\Phi_0$ is the magnetic flux quantum. We consider the case where the Landau level filling factor $\nu \equiv N/N_{\phi}$ is an integer[@fraction] and two different groups of $N_{\phi}$ orbitals are close to degeneracy. We assume that other Landau levels are far enough from the Fermi energy to justify their neglect[@zheng]. 
Using a [*pseudospin*]{} language[@ahmgregphil] to represent the Landau level index degree of freedom, the class of Hamiltonians we consider can be expressed, up to an irrelevant constant, in the form $$\begin{aligned} &H& = - b \sigma(\vec q=0) + \frac{1}{2A} \sum_{\vec q} \left\{V_{\rho,\rho}(\vec q) \rho(-\vec q) \rho(\vec q) + V_{\sigma,\sigma}(\vec q)\right.\times \nonumber \\ &{ }& \sigma(-\vec q) \sigma(\vec q)+ \left.V_{\rho,\sigma}(\vec q) \left[\rho(-\vec q) \sigma(\vec q) + \sigma(-\vec q) \rho(\vec q)\right] \right\}. \label{hamiltonian}\end{aligned}$$ In Eq. (\[hamiltonian\]), $b$ is half the energy separation between the nearly degenerate Landau levels, and $\rho(\vec q)$ and $\sigma(\vec q)$ are respectively the sum and difference of the density operators[@leshouches] projected onto the up and down pseudospin Landau levels. Note that $b$ is half the [*single-particle*]{} energy difference and does not include mean-field contributions from Coulomb or exchange interactions with electrons in the Landau levels of interest. For simplicity, we have limited the present discussion to cases for which the total number of electrons with each pseudospin index is conserved. The effective interactions that appear in Eq. (\[hamiltonian\]) are related to the effective interactions between pseudospins by the following relations: $V_{\rho,\rho} = (V_{\uparrow,\uparrow} + V_{\downarrow,\downarrow} + 2 V_{\uparrow,\downarrow})/4$, $V_{\sigma,\sigma} = (V_{\uparrow,\uparrow} + V_{\downarrow,\downarrow} - 2 V_{\uparrow,\downarrow})/4$, and $V_{\rho,\sigma} = (V_{\uparrow,\uparrow} - V_{\downarrow,\downarrow})/4$. Our calculation of the pseudospin anisotropy energy is based on the following single Slater determinant wavefunction: $$|\Psi[\hat n] \rangle = \prod_{m=1}^{N_{\phi}} c^{\dagger}_{m,\hat n} |0\rangle \; . \label{wavefunction}$$ Here $m$ labels the orbital states within a Landau level and $\hat n$ denotes the pseudospinor aligned in the $\hat n = [\sin(\theta)\cos(\phi),\sin(\theta)\sin(\phi),\cos(\theta)]$ direction. This many-particle state is fully pseudospin polarized[@caveat1; @future]. A straightforward calculation yields the following result for the dependence of energy on pseudospin orientation: $$\frac{\langle \Psi[\hat n] | H | \Psi[\hat n] \rangle }{N} = - b^* \cos(\theta) + \frac{U_{\sigma,\sigma}}{2} \cos^2(\theta)\; . \label{anisoeng}$$ Here $b^* = b - U_{\rho,\sigma}$ and for all indices $$U_{s,s'} = \int \frac{d \vec q}{(2 \pi)^2} [V_{s,s'}(\vec q=0) - V_{s,s'}(\vec q)] \exp ( - q^2 \ell^2/2)\; , \label{anisointegral}$$ where $\ell=\sqrt{\hbar c/eB}$ is the magnetic length. In Eq. (\[anisoeng\]) we have dropped terms in the energy that are independent of pseudospin orientation. The right hand side of this equation is independent of $\phi$ because the $\hat z$ component of total pseudospin is a good quantum number. For each effective field strength $b^*$, the pseudospin orientation is determined by minimizing the total energy. For $U_{\sigma,\sigma} > 0 $, easy-plane anisotropy, $ \cos(\theta) =0$ at $b^*=0$ and the pseudospin evolves continuously with effective field as illustrated in Fig. \[anis\](a), reaching alignment for $|b^*| > U_{\sigma,\sigma}$. For $U_{\sigma,\sigma} < 0$, easy-axis anisotropy, local minima occur at both $\cos(\theta) = 1$ and $\cos(\theta) = -1$ for $|b^*|<|U_{\sigma,\sigma}|$. 
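These two regimes follow directly from the stationarity condition of Eq. (\[anisoeng\]), $$\frac{\partial}{\partial \cos(\theta)} \left[ - b^* \cos(\theta) + \frac{U_{\sigma,\sigma}}{2} \cos^2(\theta) \right] = -b^* + U_{\sigma,\sigma} \cos(\theta) = 0 \quad \Rightarrow \quad \cos(\theta) = \frac{b^*}{U_{\sigma,\sigma}}\; ,$$ together with the curvature $\partial^2 \langle \Psi[\hat n]|H|\Psi[\hat n]\rangle / \partial \cos^2(\theta) = N U_{\sigma,\sigma}$: the interior solution is a minimum only in the easy-plane case $U_{\sigma,\sigma}>0$, while for $U_{\sigma,\sigma}<0$ it is a maximum and the energy is minimized at the endpoints $\cos(\theta) = \pm 1$.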
If only global pseudospin rotation processes were possible, macroscopic energy barriers would separate these two locally stable states, resulting in hysteretic behavior (see Fig. \[anis\](a)). The sign of $U_{\sigma,\sigma}$ is determined by competition between the two terms in square brackets on the right hand side of Eq. (\[anisointegral\]). The $V_{\sigma,\sigma}(\vec q=0)$ term is an electrostatic energy which is present when the two pseudospin states have different charge density profiles perpendicular to the electron layers. This term favors easy-plane anisotropy.. The $V_{\sigma,\sigma}(\vec q)$ term is the exchange energy which favors easy-axis anisotropy which will always occur when $V_{\sigma,\sigma}(\vec q)$ is an increasing function of wavevector. Transport measurements in the QHE regime are extremely sensitive to the energy gap for charged excitations. Generally, large energy gaps give rise to well developed Hall plateaus and deep minima in the dissipative resistivity. In the Hartree-Fock approximation, the quasiparticle energy gap of anisotropic QHE ferromagnets can be written quite generally as[@tomasthesis] $$\label{gap} \Delta_{HF}=I_{\uparrow\downarrow} -2 U_{\sigma\sigma} + \frac{2 b^*}{\cos(\theta)}\; ,$$ where $I_{\uparrow\downarrow}=\int\frac{dq^2}{(2\pi)^2}\, \exp\left(-q^2\ell^2/2\right)\left(V_{\rho\rho}-V_{\sigma\sigma}\right)$. For the easy-plane case, $\Delta_{HF}$ is a continuous function of the effective field $b^*$, decreasing linearly for $b^*/U_{\sigma\sigma} < -1$, constant for $|b^*|/U_{\sigma\sigma} < 1$ and increasing linearly for $b^*/U_{\sigma\sigma} > 1$. In contrast, if the system has easy-axis anisotropy, $\Delta_{HF}$ decreases to $I_{\uparrow\downarrow}$ at the extremes of the hysteresis loop ($b^*/U_{\sigma\sigma} = \pm 1$) before jumping to $I_{\uparrow\downarrow} + 4|U_{\sigma\sigma}|$ when the pseudospin magnetization reverses. In Fig. \[anis\](b) we summarize the above results by plotting the renormalized Hartree-Fock gap $\Delta^{*}_{HF}=(\Delta_{HF}- I_{\uparrow\downarrow})/2|U_{\sigma\sigma}|$ as a function of $b^*/|U_{\sigma\sigma}|$. In the Hartree-Fock approximation this quantity depends only on the sign of the anisotropy energy. For concreteness, we mention two idealized models which we regard as paradigms for the easy-plane and easy-axis anisotropy cases. For two arbitrarily narrow quantum wells separated by a distance $d$ with full polarization of the true electron spin, we let pseudospin represent the layer index[@fullpolarized]. The “pseudospin” Zeeman field $b$ is then proportional to the bias electric field, $E_g$, created by a gate external to the electron system: $ b = e E_g d/2 $. On the other hand, for a single arbitrarily narrow quantum well with $\nu = 2 m$ in which the real-spin Zeeman coupling has been increased[@giulianiquinn] so as to bring the up-spin $n=m$ Landau level close to degeneracy with the down-spin $n=m-1$ Landau level, we let the pseudospin represent the spin-index of the Landau level close to the Fermi energy. The pseudospin Zeeman coupling for this model is $ b = (g^{*} \mu_B B - \hbar \omega_c + I_0)/2$. Here the first term is the real-spin Zeeman coupling, the second term is the cyclotron energy and the last term is the contribution to $b$ from exchange interactions with frozen Landau levels lying well below the Fermi energy ($I_0/(\sqrt{\pi/2} \, e^2/\epsilon\ell)$ = 1/2, 5/16, and 31/128 for $m$=0, 1, and 2 respectively[@giulianiquinn; @mcdojiliu]). 
The effective Coulomb interaction energies for the two models are summarized in Tab. \[tab\]. For the ideal double-layer model, the electrostatic term $V_{\sigma,\sigma}(q=0)$ dominates, $V_{\sigma,\sigma}(q)$ is a monotonically decreasing function of $q$ and $U_{\sigma,\sigma}$ is positive. On the other hand for the ideal tilted-field model, the pseudospin wavefunctions differ only in the plane of the 2DES, the electrostatic term is consequently absent, and the exchange term produces easy-axis anisotropy ($U_{\sigma,\sigma}/(\sqrt{\pi/2} \, e^2/\epsilon\ell)$ = -3/16, -33/256, and -107/1024 for $m$=0, 1, and 2 respectively). Now we turn to the discussion of the measured QHE evolution with tilted field, shown in Fig. \[rxx\][@fractions]. In finite width quantum wells, the large tilt angles necessary to bring the up and down pseudospin Landau levels close to degeneracy result in substantial coupling of the in-plane component of the magnetic field to orbital degrees of freedom[@tomas]. These orbital effects can be incorporated [@future] by adjusting the effective interactions appropriately. In particular, for real finite-width quantum wells, the perpendicular charge density profiles of the two pseudospin Landau levels differ, and the electrostatic contribution to $U_{\sigma,\sigma}$ is no longer zero. The sign of the anisotropy energy depends in detail on the quantum well geometry, the tilt angle and the filling factor. The insets in Fig. \[rxx\](b) show charge-density profiles in the studied quantum well for the relevant orbitals at high tilt angles obtained from self-consistent LDA calculations[@tomas]: $n = 0, \downarrow$ and $n = 1, \uparrow$ at $\nu = 2$; $n = 1, \downarrow$ and $n = 2, \uparrow$ at $\nu = 4$. From these orbitals we obtain[@future] that, for $\nu = 2$, $U_{\sigma,\sigma}$ increases substantially with tilt angle, is only marginally negative for $b^*=0$ which occurs at $\theta=72^o$ and becomes positive at larger $\theta$ (see Fig. \[rxx\](b). This results demonstrates that easy-plane anisotropy can occur in realistic single quantum wells. If so, referring to the quasiparticle gap predictions summarized in Fig. \[anis\], a strong QHE may be expected throughout the region of tilt angles where the relevant Landau levels are close to degeneracy. Consistent with this expectation, the experimental data of Fig. \[rxx\] show a strong minimum at $\nu=2$ at all angles near $\theta=72^o$ and no clear evidence for the disappearance of the QHE is observed up to the highest accessible tilt-angles for $\nu=2$. Like the weak dependence of quasiparticle gap on bias potential, noted in experimental studies of double-quantum-well systems[@ezawa], this robustness of the QHE is a general property of easy-plane QHE ferromagnets. At $\nu = 4$, our calculations predict that $b^*=0$ occurs at $\theta=79^o$, and that the density profiles of the two pseudospin states are similar even at high tilt angles, as illustrated in Fig. \[rxx\]. Hence, $U_{\sigma,\sigma}$ is only weakly angle dependent and is still [*negative*]{} around $\theta=79^o$. We attribute the clear degradation of the measured QHE at $\nu =4$ to easy-axis anisotropy. The tilt angle $\theta=80^o$ where the $\nu =4$ QHE disappears is in a good quantitative agreement with the theoretically predicted angle $\theta=79^o$ at which the pseudospin Zeeman field $b^*$ vanishes. We expect transport properties inside the hysteresis loop in the easy-axis case, to have a complicated disorder dependence. 
Spatially random potentials couple differently to different Landau levels and will produce a random pseudospin magnetic field. This is expected[@imry] to lead to the formation of large domains with particular pseudospin orientations. The dynamics of pseudospin reorientation is likely to be controlled by barriers to domain wall motion. If these are comparable to $k_B T$, the pseudospin will achieve alignment with the effective field on laboratory time scales, $\cos(\theta)$ will change from $-1$ to $1$ at $b^* = 0$, and the energy gap will have a cusp. This scenario appears to apply for recent experiments which study analogous Landau level crossings in the valence band of GaAs[@pepper] and to some tilted field driven transitions at fractional Landau level filling factors[@fractions]. On the other hand, when some domain wall motion barriers are much larger than $k_B T$, we expect that all physical properties will exhibit hysteretic behavior, and that the electronic state will have domain structure for $b^*$ close to zero. Dissipation due to mobile charges created in domain walls[@future] can then lead to a breakdown of the QHE. We expect that dissipative and Hall resistances will then depend on measuring current and sample history, as well as on temperature. In the disorder free limit, easy-axis anisotropy in two-dimensions leads to a finite temperature continuous phase transition in the Ising universality class and stronger thermodynamic anomalies than for the Kosterlitz-Thouless phase transition of easy-plane systems. The transition temperature can be estimated[@future] by balancing energy and entropy terms in the free-energy of long domain walls: $$k_B T_c \sim U_{\sigma,\sigma} (w R / \ell^2)\; , \label{tceq}$$ where $w$ is the domain wall width and $R$ is the domain wall orientation correlation length. The domain wall physics of these easy-axis ferromagnets is unconventional because the spin-stiffness is negative[@future]. Preliminary results from work presently in progress suggest that $w R/ \ell^2$ is substantially larger than one and that the critical temperature should typically exceed $\sim 1$ Kelvin. [*Note added*]{}: A recent experimental study[@woowon] we learned of after this work was completed finds hysteretic behavior in a narrow (25 nm) GaAs quantum well in vicinity of $\nu=2/5$ and 4/9 fractional QHE’s which correspond to integer QHE’s at composite fermion filling factors $\nu=2$ and 4 respectively. In these experiments, Zeeman coupling strength was controlled both by applying hydrostatic pressure and by tilting the field. We believe that the theory developed in this paper explains the origin of the hysteresis found in Ref. [@woowon] at very low temperatures ($T \stackrel{{\protect\textstyle <}}{\sim}$ 200mK). We have not observed similar hysteresis in our data (Fig. \[rxx\]); this may be because of our higher available base temperature (300mK). This work was supported by the National Science Foundation under grants DMR-9623511, DMR-9714055 and INT-9602140, by the Ministry of Education of the Czech Republic under grant ME-104 and by the Grant Agency of the Czech Republic under grant 202/98/0085. For a review of quantum Hall ferromagnets, see S.M. Girvin and A.H. MacDonald in [*Perspectives on Quantum Hall Effects*]{} (Wiley, New York, 1997). For a review of magnetism in two-dimensions see V.L. Pokrovsky and G.V. Uimin, in [*Magnetic Properties of Layered Transition Metal Compounds*]{} (Kluwer, Dordrecht, 1990). A.H. MacDonald, P.M. Platzman, G.S. Boebinger, Phys. Rev. Lett. 
[**65**]{}, 775 (1990). S.Q. Murphy [*et al.*]{}, Phys. Rev. Lett. [**72**]{}, 728 (1994); Y.W. Suen [*et al.*]{}, Phys. Rev. B [**44**]{}, 5947 (1991); T.S. Lay [*et al.*]{}, [*ibid.*]{} [**50**]{}, 17 725 (1994). X.G. Wen and A. Zee, Phys. Rev. B [**47**]{}, 2265 (1993); Z.F. Ezawa and A. Iwazaki, Int. J. Mod. Phys. B [**19**]{}, 3205 (1992); L. Brey, Phys. Rev. Lett. [**65**]{}, 903 (1990); H.A. Fertig, Phys. Rev. B [**40**]{}, 1087 (1989); O. Narikiyo and D. Yoshioka, J. Phys. Soc. Jap. [**62**]{}, 1612 (1993); R. C$\hat{\rm o}$té, L. Brey, and A.H. MacDonald, Phys. Rev. B [**46**]{}, 10239 (1992); X.M. Chen and J.J. Quinn, [*ibid.*]{} [**45**]{}, 11 054 (1992); K. Moon [*et al.*]{}, Phys. Rev. B [**51**]{}, 5138 (1995); K. Yang [*et al.*]{}, [*ibid.*]{} [**54**]{}, 11 644 (1996). G.F. Giuliani and J.J. Quinn, Phys. Rev. B [**31**]{}, 6228 (1985). The integer filling factor will be odd (even) if an even (odd) number of frozen full Landau levels lie well below the Fermi energy. For the idealized tilted field case, the $n=0$ majority spin Landau level is frozen and $\nu = 2$. Composite-fermion Chern-Simons mean-field-theory suggests that the physics of the transition between polarized and unpolarized states at $\nu = 2/5$ should be similar to the $\nu =2$ physics addressed here. See R.R. Du, A.S. Yeh, H.L. Stormer, D.C. Tsui, L.N. Pfeiffer, and K.W. West, Phys. Rev. Lett. [**75**]{}, 3926 (1995) and work cited therein. Our discussion can be generalized to circumstances where more than two Landau levels are close to degeneracy, as occurs, for example, in double-layer systems with small spin-splittings. For a discussion of ordered states in this case see S. Das Sarma, S. Sachdev, and L. Zheng, preprint \[cond-mat/9709315\] (1997). A.H. MacDonald in [*Les Houches Session LXI: Mesoscopic Quantum Physics*]{}, edited by E. Akkermans, G. Montambeaux, and J.-L. Pichard, (Elsevier, Amsterdam, 1995); L. Świerkowski and A.H. MacDonald, Phys. Rev. B [**55**]{}, R16017 (1997). For the case of $\theta=0$ or $\theta=\pi$, as well as for the case where the pseudospin anisotropy vanishes, this is an exact eigenstate of the many-particle Hamiltonian neglecting only mixing with remote Landau levels; A.H. MacDonald, H.A. Fertig, and Luis Brey, Phys. Rev. Lett. [**76**]{}, 2153 (1996). In general however, anisotropy energy estimates based on these variational wavefunctions will err on the side of easy-axis anisotropy. Approaches which can be used to refine the present estimates will be discussed elsewhere. T. Jungwirth and A.H. MacDonald, to be submitted to Phys. Rev. B. For the calculation of $\Delta_{HF}$ in the double-layer system see, e.g., T. Jungwirth and A.H. MacDonald, Phys. Rev. B [**53**]{}, 9943 (1996). In this idealized model we assume that the true spin-degree of freedom has been frozen by the external magnetic field. A.H. MacDonald, H.C.A. Oji, and K.L. Liu, Phys. Rev. B [**34**]{}, 2681 (1986). See for example, J.P. Eisenstein [*et al.*]{}, Phys. Rev. Lett. [**62**]{}, 1540 (1989); R.G. Clark [*et al.*]{}, Phys. Rev. Lett. [**62**]{}, 1536 (1989); W. Kang [*et al.*]{}, Phys. Rev. B [**56**]{}, R12776 (1997). T.S. Lay [*et al.*]{}, Phys. Rev. B [**56**]{}, R16 017 (1997). A. Sawada, [*et al.*]{} Solid State Commun. [**103**]{}, 447 (1997); J.P. Eisenstein, private communication (1997). Y. Imry and S.K. Ma, Phys. Rev. Lett. [**35**]{}, 1399 (1975). A.J. Daneshvar [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 4449 (1997). H. Cho [*et al.*]{}, to be submitted to Phys. Rev. Lett. 
Model $V_{\rho\rho}$ $V_{\sigma\sigma}$ $V_{\rho\sigma}$ -------------- ------------------------------------------------------- ------------------------------------------------------- ---------------------------------------------------------------------- double-layer $(1+e^{-qd})/2q$ $(1-e^{-qd})/2q$ 0 tilted-field $\frac{\left(L_m(q^2/2)+L_{m-1}(q^2/2)\right)^2}{4q}$ $\frac{\left(L_m(q^2/2)-L_{m-1}(q^2/2)\right)^2}{4q}$ $\frac{\left(L_m(q^2/2)\right)^2-\left(L_{m-1}(q^2/2)\right)^2}{4q}$ : Effective Coulomb interactions in units of $2 \pi e^2\ell/\epsilon$ as a function of wavevector $q$ in units of $\ell^{-1}$ for ideal double-layer and tilted-field models. $L_n(x)$ is the Laguerre polynomial.[]{data-label="tab"}
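The quoted tilted-field exchange energies can be checked by direct numerical integration of Eq. (\[anisointegral\]) with the interaction of Table \[tab\]. The short Python sketch below is an illustrative cross-check rather than part of the original calculation; with the crossing levels taken as the Laguerre pairs $(1,0)$, $(2,1)$ and $(3,2)$ it reproduces the values $-3/16$, $-33/256$ and $-107/1024$ in units of $\sqrt{\pi/2}\, e^2/\epsilon\ell$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def u_sigma_sigma(n):
    """Exchange-only U_{sigma,sigma} of Eq. (4) for the ideal tilted-field model,
    crossing Landau levels n and n-1; q in units of 1/ell, energy in e^2/(eps*ell).
    V_{sigma,sigma}(q=0) vanishes for this model, so only the exchange term survives."""
    def integrand(q):
        x = 0.5 * q * q
        dL = eval_laguerre(n, x) - eval_laguerre(n - 1, x)
        # equals -q V_ss(q) e^{-q^2 l^2/2} with V_ss in units of 2*pi*e^2*l/eps (Table I)
        return -(dL * dL / 4.0) * np.exp(-x)
    return quad(integrand, 0.0, np.inf)[0]

for n, quoted in [(1, -3 / 16), (2, -33 / 256), (3, -107 / 1024)]:
    print(n, u_sigma_sigma(n) / np.sqrt(np.pi / 2), quoted)
```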
--- abstract: 'The process $e^{+}e^{-} \to q\bar{q}$ plays an important role in electroweak precision measurements. We are studying this process with ILD full simulation. The key to the reconstruction of the quark pair final states is quark charge identification (ID). We report the progress of the charge ID study in detail. In particular, we investigate the performance of the charge ID for each decay mode of the heavy hadrons to explore possibilities for improving the charge ID.' author: - | [Y. Uesugi$^1$[^1], H. Yamashiro$^1$, T. Suehara$^1$, T. Yoshioka$^2$, K. Kawagoe$^1$]{}\ \ $^1$Department of Physics, Faculty of Science, Kyushu University\ $^2$Research Center for Advanced Particle Physics, Kyushu University\ 744 Motooka, Nishi-ku, Fukuoka, 819-0395 Japan title: '**Quark charge identification for $e^{+}e^{-}$ to $q\bar{q}$ study**' --- Introduction ============ Two-quark final states in high-energy $e^+e^-$ collisions are important for precise measurements of the electroweak interaction. These simple processes have low background, and the QED calculation introduces only a small uncertainty. We are studying the $e^+e^- \to q\bar{q}$ final states at the International Linear Collider (ILC) as a probe of new physics. The angular distribution with respect to the beam axis is used to calculate the sensitivity to new physics beyond the Standard Model[@Yamashiro-proc]. Identification of the quark charge is necessary to separate the angular distributions of positive and negative quarks. The efficiency of quark charge ID in the previous study[@Yamashiro] was about 60%, and the misidentification of the quark charge causes significant performance degradation. To improve the performance, we investigate the charge ID performance of the current software for each decay mode of $b$ hadrons. Simulation condition ==================== We utilized ILCSoft[@ILCSoft] version v01-16 for this study. The event samples of $e^{+}e^{-} \to b\bar{b}$ were generated at a center-of-mass energy of 250 GeV by WHIZARD 1.95[@Whizard]. The full Monte Carlo simulation (MC) was done with Mokka, based on the Geant4 framework, with the reference geometry of the International Large Detector (ILD) concept used in the studies for the Detailed Baseline Design report[@DBD], the ILD\_v1\_o5 model. The model includes silicon pixel and strip detectors, a time projection chamber, precisely segmented electromagnetic and hadron calorimeters (ECAL and HCAL) and a 3.5 Tesla solenoid magnet. Event reconstruction was done with Marlin processors, including tracking and particle flow reconstruction by the PandoraPFA algorithm[@pfa] to obtain track-cluster matching. The reconstructed particles were clustered into two jets with the Durham[@Durham] algorithm. Vertex finder and quark charge identification ============================================= The key ingredient for reconstructing the quark charge is the vertex finder. We used LCFIPlus[@LCFIPlus] to reconstruct vertices. In LCFIPlus, all tracks are first processed with a primary vertex finder based on a tear-down technique with the beam constraint, which removes most of the tracks consistent with coming from the interaction point. The remaining tracks are then processed with a secondary vertex finder based on a build-up technique. It does not restrict the number of vertices reconstructed per jet, but after the jet clustering it combines the vertices, refitting the vertex positions, if there are more than two vertices in the jet.
In addition, tracks that are consistent with crossing the line connecting the primary vertex and a secondary vertex are used to form additional pseudo-vertices, which aims to recover vertices having only one track. With the current implementation, the overall fraction of $b$-quark jets with two vertices is around 40%, and the fraction with one or more vertices is around 80%. In the following discussion, if two vertices are found in a jet we treat them separately, and if only one vertex is found we treat it as the combination of the $b$ and $c$ decay vertices. To separate $b$ and $\bar{b}$, the charges of the second and third vertices are calculated as the sum of the charges of the tracks associated with each vertex. There are four ways of obtaining the quark charge, as follows:

- jet charge (charge sum of all tracks in the jet) (${\rm\Sigma_{all}^{jet}}$)
- vertex charge
  - charge sum of tracks associated to the second vertex ($\rm\Sigma_{vtx}^{2nd}$)
  - charge sum of tracks associated to the third vertex (if found) ($\rm\Sigma_{vtx}^{3rd}$)
  - charge sum of tracks associated to the second and third vertex ($\rm\Sigma_{vtx}^{2nd 3rd}$)

For jets with two vertices found, we can use $\rm\Sigma_{vtx}^{2nd}$ and $\rm\Sigma_{vtx}^{3rd}$ to separate $b$ and $\bar{b}$, but for jets with only one vertex found, we can only use $\rm\Sigma_{vtx}^{2nd 3rd}$. For jets with no vertex found, we have to use ${\rm\Sigma_{all}^{jet}}$, but this case is not discussed in this study.

Decay modes of B mesons
=======================

The purpose of the charge ID is to distinguish jets from $b$ quarks and jets from $\bar{b}$ quarks. Each $b$ or $\bar{b}$ quark forms a $b$ hadron after fragmentation, which is usually in the core of the jet. Table \[tab:Bhadrons\] shows the $b$ hadrons obtained from $b$ and $\bar{b}$ quarks separately. The charge of the $b$ quark is closely related to the quark constituents of the final-state $b$ hadron: for example, $B^-$ and $\bar{B}^0$ come only from a $b$ quark and not from a $\bar{b}$ quark, and $B^+$ and ${B^0}$ come only from a $\bar{b}$ quark and not from a $b$ quark, if we ignore quark-antiquark oscillation. If we can separate $B^0$ and $\bar{B}^0$, this can enhance the charge ID performance significantly, compared to identifying only the charge of the $b$ hadron. This can be realized by observing the decays of the $B$ mesons. Here we focus only on $B^+$, $B^0$ and their antiparticles; however, a similar discussion can be applied to the other $b$ hadrons as well.

  from $b$ quark                                         from $\bar{b}$ quark (production ratio)
  ------------------------------------------------------ --------------------------------------------------------------------------------------------------
  $B^{-} (\bar{u}b)$, $\bar{B}^{0} (\bar{d}b)$            $B^{0} (d\bar{b})$: 42.2%, $B^{+} (u\bar{b})$: 41.9%
  $B^{-}_{c} (\bar{c}b)$, $\bar{B}^{0}_{s} (\bar{s}b)$    $B^{0}_{s} (s\bar{b})$: 8.0%, $B^{+}_{c} (c\bar{b})$: (see caption)
  $\Xi^{-}_{b} (dsb)$, $\Lambda^{0}_{b} (udb)$            $\bar{\Lambda}^{0}_{b} (\bar{u}\bar{d}\bar{b})$: 6.4%, $\Xi^{+}_{b} (\bar{d}\bar{s}\bar{b})$: 0.61%
  $\Omega^{-}_{b} (ssb)$, $\Xi^{0}_{b} (usb)$             $\bar{\Xi}^{0}_{b} (\bar{u}\bar{s}\bar{b})$: 0.59%, $\Omega^{+}_{b} (\bar{s}\bar{s}\bar{b})$: 0.010%
  ---------------------------------------------------------------------------------------------------------------------------------------------------------

  : List of semistable $b$ hadrons produced from $b$ and $\bar{b}$ quarks. The production ratio, obtained from MC information, is also shown for the hadrons from $\bar{b}$ quarks; it ignores $b$-$\bar{b}$ oscillation. The production of charmed $b$ hadrons ($B_c$), which usually decay first to lighter $b$ hadrons, is included in the fractions of the lighter $b$ hadrons.[]{data-label="tab:Bhadrons"}

Table \[tab:br\] shows the dominant decay modes of the $B^+$ and $B^0$ mesons. As shown in the tables, there is some discrepancy between the PDG branching ratios and those obtained from the MC samples, which may be due to the $b$-$\bar{b}$ oscillation. For the $B^+$ decay, the dominant mode is $B^+ \to \bar{D}^0X$, which gives a positive vertex from the $B^+$ decay and a neutral vertex from the subsequent $\bar{D}^0$ decay. In the case of $B^0$ decay, there are two dominant decay modes, $B^0 \to \bar{D}^0X$ and $B^0 \to D^-X$. For the latter, the $B^0$ decay vertex should be positive and the subsequent $D^-$ decay vertex negative, which should give separation power from $\bar{B}^0$. For this separation, resolving the second and the third vertices is critical.

  Decay modes                BR in PDG   BR in MC
  -------------------------- ----------- ----------
  $B^{+} \to \bar{D}^{0}X$   79%         70.51%
  $B^{+} \to D^{-}X$         9.9%        9.81%
  $B^{+} \to D^{0}X$         8.6%        4.21%
  $B^{+} \to D^{+}_{s}X$     7.9%        4.80%
  $B^{+} \to D^{+}X$         2.5%        1.65%

  : Branching ratios (BR) of $B^+$ (left) and $B^0$ (right) mesons. BR in PDG is from [@PDG], and BR in MC is from the event sample.[]{data-label="tab:br"}

  Decay modes                BR in PDG   BR in MC
  -------------------------- ----------- ----------
  $B^{0} \to \bar{D}^{0}X$   47.4%       40.24%
  $B^{0} \to D^{-}X$         36.9%       27.59%
  $B^{0} \to D^{+}_{s}X$     10.3%       4.21%
  $B^{0} \to D^{0}X$         8.1%        11.55%
  $B^{0} \to D^{+}X$         $<$3.9%     6.91%

  : Branching ratios (BR) of $B^+$ (left) and $B^0$ (right) mesons. BR in PDG is from [@PDG], and BR in MC is from the event sample.[]{data-label="tab:br"}

Current performance of charge ID
================================

The performance of the charge ID with LCFIPlus is checked with the $B^+$ and $B^0$ samples. After separating the decay modes using MC information, $b$ and $\bar{b}$ quarks are assigned to jets using MC-track matching. The reconstructed vertices are examined, and $\rm\Sigma_{vtx}^{2nd}$, $\rm\Sigma_{vtx}^{3rd}$ and $\rm\Sigma_{vtx}^{2nd 3rd}$ are calculated for each jet. Table \[tab:3\] shows these observables for $B^+$ decays, categorized by the number of reconstructed vertices and by the $B^+$ decay mode (only the first and second dominant decays are shown). For the events with one vertex found, positive $\rm\Sigma_{vtx}^{2nd 3rd}$ is much more frequent than negative, which shows that charge ID is possible. However, there is a significant fraction of "neutral" vertices, which limits the charge ID performance. For the events with two vertices found, positive $\rm\Sigma_{vtx}^{2nd}$ dominates even more than in the one-vertex case, thus giving a better charge ID performance. For the special case of $B^{+} \to D^{-}X$, $\rm\Sigma_{vtx}^{2nd}$ should be $+2$ and $\rm\Sigma_{vtx}^{3rd}$ should be $-1$, which is much easier to identify.
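Before turning to the numbers in Table \[tab:3\], the short sketch below spells out how the charge observables defined above are formed from track charges. The jet and vertex containers (a list of track charges plus index lists for the second and third vertices) are simplified stand-ins invented for this illustration; they are not the LCFIPlus or Marlin data model.

```python
# Minimal sketch of the charge observables; the containers below are
# simplified stand-ins, not the actual LCFIPlus/Marlin data structures.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Jet:
    track_charges: List[int]                               # charges of all tracks in the jet
    vtx2_tracks: List[int] = field(default_factory=list)   # track indices in the 2nd vertex
    vtx3_tracks: List[int] = field(default_factory=list)   # track indices in the 3rd vertex (may be empty)

def jet_charge(jet: Jet) -> int:                           # Sigma_all^jet
    return sum(jet.track_charges)

def vertex_charge(jet: Jet, indices: List[int]) -> Optional[int]:
    return sum(jet.track_charges[i] for i in indices) if indices else None

def charge_observables(jet: Jet):
    q2 = vertex_charge(jet, jet.vtx2_tracks)                           # Sigma_vtx^2nd
    q3 = vertex_charge(jet, jet.vtx3_tracks)                           # Sigma_vtx^3rd
    q23 = vertex_charge(jet, jet.vtx2_tracks + jet.vtx3_tracks)        # Sigma_vtx^{2nd 3rd}
    return {"jet": jet_charge(jet), "vtx2": q2, "vtx3": q3, "vtx23": q23}

# Example mimicking the B+ -> D- X topology discussed above:
# +2 at the B vertex and -1 at the subsequent D vertex.
jet = Jet(track_charges=[+1, -1, +1, +1, +1, -1, -1],
          vtx2_tracks=[2, 3], vtx3_tracks=[5])
print(charge_observables(jet))   # {'jet': 1, 'vtx2': 2, 'vtx3': -1, 'vtx23': 1}
```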
  decay mode                  symbol                          charge $<0$   charge $=0$   charge $>0$
  --------------------------- ------------------------------- ------------- ------------- -------------
  all decay modes             $\rm\Sigma_{vtx}^{2nd 3rd}$     8.67%         35.5%         55.7%
  all decay modes             $\rm\Sigma_{vtx}^{2nd}$         8.06%         18.5%         73.3%
  all decay modes             $\rm\Sigma_{vtx}^{3rd}$         22.8%         53.0%         24.0%
  $B^{+} \to \bar{D}^{0}X$    $\rm\Sigma_{vtx}^{2nd 3rd}$     7.52%         38.6%         53.8%
  $B^{+} \to \bar{D}^{0}X$    $\rm\Sigma_{vtx}^{2nd}$         7.21%         18.1%         74.6%
  $B^{+} \to \bar{D}^{0}X$    $\rm\Sigma_{vtx}^{3rd}$         16.6%         6.07%         22.6%
  $B^{+} \to D^{-}X$          $\rm\Sigma_{vtx}^{2nd 3rd}$     17.2%         20.0%         62.7%
  $B^{+} \to D^{-}X$          $\rm\Sigma_{vtx}^{2nd}$         7.85%         8.75%         83.3%
  $B^{+} \to D^{-}X$          $\rm\Sigma_{vtx}^{3rd}$         67.6%         20.0%         12.3%

  : Reconstructed charge of the vertices in $B^+$ decays, shown for all decay modes and separately for the two dominant decay modes.[]{data-label="tab:3"}

Separation of $B^{0}$ and $\bar{B}^0$ is clearly more difficult, since the total charge of the $B^0$ vertex and the subsequent charm-hadron vertex is zero. There are two dominant decay modes of $B^0$: $B^{0} \to \bar{D}^{0}X$ and $B^{0} \to D^-X$. The former is quite difficult to distinguish from $\bar{B}^0 \to D^0X$, since both the second and the third vertices are neutral. It may be possible to identify kaons and use their charge, but this is beyond the scope of the current study. In the $B^{0} \to D^-X$ case we have a chance to separate it from $\bar{B}^0$, since the $B^0$ vertex should be positive and the subsequent charm vertex negative, as shown in Table \[tab:7\]. Here the separation is nearly impossible when only one vertex is found, but with two vertices there is a significant difference between the positive and negative fractions of $\rm\Sigma_{vtx}^{2nd}$ and $\rm\Sigma_{vtx}^{3rd}$, which gives separation power between $B^0$ and $\bar{B}^0$.

  symbol                         charge $<0$   charge $=0$   charge $>0$
  ------------------------------ ------------- ------------- -------------
  $\rm\Sigma_{vtx}^{2nd 3rd}$    29.5%         33.2%         37.1%
  $\rm\Sigma_{vtx}^{2nd}$        11.8%         17.6%         70.5%
  $\rm\Sigma_{vtx}^{3rd}$        72.3%         17.5%         10.0%

  : Reconstructed charge of the vertices in $B^{0} \to D^{-}X$ decay.[]{data-label="tab:7"}

Summary and prospects
=====================

We investigated the performance of the quark charge ID to be used in the $e^{+}e^{-} \to q\bar{q}$ study. By separating the decay modes, we have a chance to distinguish $b$ from $\bar{b}$ by using the tracks from the second and third vertices independently. However, misidentification of the vertex charge still needs to be reduced. The main cause is expected to be tracks that fail to be clustered into the vertex. We will investigate recovering those tracks by examining the behaviour of the vertex finder in more detail. An earlier attempt at such a vertex recovery [@Sviatoslav] should be revisited. It is also important to increase the fraction of events in which two vertices can be found, since having two vertices improves the performance significantly. We can also consider using zero-vertex events by looking for secondary particles with large impact parameters or for leptons in the jet. Investigation of charm jets is also planned as future work.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank the ILD software group for their support and for producing the event samples. This work was supported by JSPS KAKENHI Grant Number 16H02176.

[99]{} H. Yamashiro et al., Study of fermion pair productions at the ILC with center-of-mass energy of 250 GeV, Proc. LCWS2017, arXiv:1801.04671.
H. Yamashiro, Master's thesis, Kyushu University (2018),\ <http://epp.phys.kyushu-u.ac.jp/thesis/2018MasterYamashiro.pdf> (in Japanese). <http://ilcsoft.desy.de/portal/> W. Kilian, T. Ohl, J. Reuter, Eur. Phys. J. [**C71**]{} (2011) 1742. T. Behnke [*et al.*]{}, The International Linear Collider Technical Design Report - Volume 4: Detectors, 2013, arXiv:1306.6329. M. A. Thomson, Nucl. Instrum. Meth. [**A611**]{} (2009) 25-40. S. Catani [*et al.*]{}, Phys. Lett. [**B269**]{} (1991) 432-438. T. Suehara, T. Tanabe, Nucl. Instrum. Meth. [**A808**]{} (2016) 109-116. M. Tanabashi [*et al.*]{} (Particle Data Group), [Phys. Rev. [**D98**]{} (2018) 030001](https://journals.aps.org/prd/abstract/10.1103/PhysRevD.98.030001). S. Bilokin, R. Pöschl, F. Richard, Measurement of $b$ quark EW couplings at ILC, arXiv:1709.04289. [^1]: Presenter. Talk presented at the International Workshop on Future Linear Colliders (LCWS2018), Arlington, Texas, 22-26 October 2018.
--- abstract: 'We study the superfluid properties of two-dimensional spin-population-imbalanced Fermi gases to explore the interplay between the Berezinskii-Kosterlitz-Thouless (BKT) phase transition and the possible instability towards the Fulde-Ferrell (FF) state. By the mean-field approximation together with quantum fluctuations, we obtain phase diagrams as functions of temperature, chemical potential imbalance and binding energy. We find that the fluctuations change the mean-field phase diagram significantly. We also address possible effects of the phase separation and/or the anisotropic FF phase on the BKT mechanism. The superfluid density tensor of the FF state is obtained, and its transverse component is found to always vanish. This causes divergent fluctuations and possibly precludes the existence of the FF state at any non-zero temperature.' address: - 'COMP Centre of Excellence, Department of Applied Physics, Aalto University, FI-00076 Aalto, Finland' - 'COMP Centre of Excellence, Department of Applied Physics, Aalto University, FI-00076 Aalto, Finland' - 'COMP Centre of Excellence, Department of Applied Physics, Aalto University, FI-00076 Aalto, Finland' - 'Kavli Institute for Theoretical Physics, University of California, Santa Barbara, California 93106-4030, USA' author: - Shaoyu Yin - 'J.-P. Martikainen' - 'P. Törmä[^1]' title: 'Fulde-Ferrell states and Berezinskii-Kosterlitz-Thouless phase transition in two-dimensional imbalanced Fermi gases' ---

introduction
============

Systems at low temperature can exhibit diverse quantum mechanical phenomena, such as superfluidity, superconductivity, Bose-Einstein condensation (BEC), Mott insulators, and various magnetic states. Such phenomena become possible because of the interplay between interactions and low temperature. Fast progress in ultracold-gas experiments (see e.g. [@Bloch:2008] and references therein) has made these highly controllable systems attractive for the study of correlated quantum states. While remarkable experimental achievements have been reached on different quantum states in various settings, one state of special interest, namely inhomogeneous superfluidity with a non-constant order parameter, remains a challenge. Such a possibility was predicted for spin-population-imbalanced Fermi systems several decades ago [@FF; @LO]. In such a state Cooper pairs can have non-zero total momenta. The simplest case of inhomogeneous superfluidity is the Fulde-Ferrell (FF) state [@FF], where the order parameter is a single plane wave, $\Delta_0e^{i2\mathbf{Q}\cdot\mathbf{x}}$, with $\Delta_0$ being the magnitude of the order parameter and $2\mathbf{Q}$ the momentum of the pair (sometimes referred to as the FF(LO) vector). One may also consider the Larkin-Ovchinnikov (LO) state [@LO] with $\Delta_0\cos(2\mathbf{Q}\cdot\mathbf{x})$, which can be taken as the superposition of two equal FF modes with opposite momenta [@Radzihovsky:2011]. More generally, the nonuniform order parameter can be expressed as a superposition of many possible FFLO vectors, $\sum_\mathbf{Q}\Delta_{0\mathbf{Q}}e^{i2\mathbf{Q}\cdot\mathbf{x}}$. All these states are usually categorized as FFLO states and have been extensively studied.
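As a quick numerical check of the relation just stated between the LO and FF order parameters (nothing beyond the two definitions above is assumed; the numerical values are arbitrary illustrative choices), the LO profile is reproduced exactly by summing two FF plane waves of half the amplitude with opposite momenta:

```python
# Check: Delta_0*cos(2*Q*x) equals (Delta_0/2)*exp(+2iQx) + (Delta_0/2)*exp(-2iQx),
# i.e. the LO state is a superposition of two equal FF modes with opposite momenta.
import numpy as np

delta0, Q = 1.3, 0.7                     # arbitrary illustrative values
x = np.linspace(-5.0, 5.0, 1001)
lo = delta0 * np.cos(2 * Q * x)
ff_pair = 0.5 * delta0 * (np.exp(2j * Q * x) + np.exp(-2j * Q * x))
print(np.allclose(lo, ff_pair.real), np.allclose(ff_pair.imag, 0.0))  # True True
```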
Although undisputed experimental evidence is still missing, there have already been several experiments on heavy-fermion superconductors [@Radovan:2003; @Bianchi:2003; @Won:2004; @Watanabe:2004; @Capan:2004; @Martin:2005; @Kakuyanagi:2005; @Kumagai:2006; @Correa:2007] and organic superconductors [@Lortz:2007; @Coniglio:2010] which report signatures consistent with the predicted FFLO states. The recent realization of imbalanced Fermi gases with ultra-cold atoms [@Zwierlein:2006; @Partridge:2006] has triggered more interest in the FFLO states. An experiment with a one-dimensional (1D) ultracold Fermi gas showed results consistent with the FFLO state [@Liao:2010], but direct observation, especially in higher dimensions, remains a goal. Besides physical parameters such as temperature $T$, particle density, and interaction strength, dimensionality may also affect the properties of quantum systems significantly. It is well known that thermal fluctuations become increasingly strong as the dimensionality is lowered. The Mermin-Wagner-Hohenberg theorem states that there cannot be any long-range order in uniform 1D or two-dimensional (2D) systems at $T\neq0$ [@MWH]. However, the 2D case turns out to be marginal, and quasi-long-range order can survive at low temperatures in the presence of interactions or a trapping potential. This suggests that 2D systems can display very rich phenomena [@Esslinger:2006]. One peculiar possibility in 2D systems is the Berezinskii-Kosterlitz-Thouless (BKT) phase transition [@Berezinskii:1971; @KT]. It describes a mechanism by which the quasi-long-range order of a 2D system is destroyed by the proliferation of free vortices and antivortices when the temperature is higher than a critical value $T_\mathrm{BKT}$. Below $T_\mathrm{BKT}$ the quasi-long-range order is sufficient for the existence of superfluidity. Furthermore, it has been theoretically shown that a 2D quantum gas can also form a BEC in the presence of a trapping potential [@Bagnato:1991]. Since the properties of 2D Fermi gases can be related to other important (quasi-)2D systems, such as graphene [@Beenakker:2008] and the 2D CuO$_2$ layers which play a significant role in high-$T_c$ superconductors [@Dagotto:1994], their scientific importance extends beyond the field of ultra-cold gases. There have been some theoretical studies of various properties of 2D imbalanced Fermi gases [@Tempere:2007; @He:2008; @Tempere:2009; @Klimin:2011; @Du:2012; @Klimin:2012]. Here we study the possibility and properties of the FFLO phase in a 2D imbalanced Fermi gas, especially the interplay between FFLO states, phase separation, and the BKT phase transition. A similar question was posed briefly in a letter by H. Shimahara [@Shimahara:1998] in the context of a 2D superconductor, based on the Ginzburg-Landau (GL) theory, but the anisotropic superfluid density (stiffness) was not taken into account. Recently, for imbalanced Fermi gases, the GL theory has been applied to the study of the LO state in various dimensions [@Radzihovsky:2009]. In the present paper we discuss this topic by using mean-field (MF) theory with fluctuations. Our discussion is not limited to a small order parameter and goes beyond the GL theory. A fair amount of relevant theoretical work on FFLO states has been published for different conditions and various dimensions.
For three-dimensional (3D) homogeneous imbalanced Fermi gases, the FF state is expected to exist in a narrow sliver in the phase diagram [@Sheehy:dual; @Parish:2007]. In isotropic traps, FFLO features are predicted to appear only as a boundary layer [@Kinnunen:2006; @Jensen:2007], although highly anisotropic traps yield much larger FFLO phase areas [@Machida:2006; @Kim:2011]. Interestingly, in optical lattices the FFLO state has been suggested to be stabilized due to nesting of the Fermi surfaces [@Koponen:2007PRL; @Koponen:2007NJP; @Loh:2010]. For the case of (quasi-) 1D system, where no long-range order exists due to extremely strong fluctuations, the possibility of FFLO state was first discussed in the context of superconductors by using the bosonization of electron gases [@Yang:2001], and later for atomic gases many numerical simulations show the existence of the FFLO state [@Feiguin:2007; @Tezuka:2008; @Batrouni:2008; @Rizzi:2008], which is also supported by a few solvable models [@Machida:1984; @Machida:2005; @Orso:2007; @Hu:2007; @Zhao:2008], and several methods have been proposed for the detection of such 1D FFLO states [@Bakhtiari:2008; @Korolyuk:2010; @Kajala:2011; @Chen:2012; @Lu:2012]. However, the (quasi-) 2D imbalanced case with quasi-long-range order is less explored because of its complexity, especially the marginally strong fluctuations. Some lattice simulations show that the FF state exists with medium filling factor, but it is unclear what happens in the zero-filling-factor limit, i.e. the continuum limit [@Koponen:2007NJP]. Because of the recent progress in ultra-cold atoms, especially the realization of degenerate quasi-2D atomic gases both for bosons [@Hadzibabic:2006] and fermions [@Martiyanov:2010] by using 1D optical lattices with lattice depths $V_0$ in the range of $V_0/h\approx10\cdots100$ kHz (here $h$ is the Planck constant), many important properties of 2D systems have been observed. For Fermi systems, these include studies of pseudogap physics [@Feld:2011] and polarons in imbalanced gases [@Koschorreck:2012]. These ground-breaking experiments provide a strong motivation to address the issue of polarized 2D Fermi gases with the possibility of the FF state. Although recent experiments usually study the quasi-2D gases, for the sake of simplicity, we will focus only on the perfect 2D case which corresponds to the limit of an infinitely deep trapping in the third dimension. Therefore, we will not discuss some interesting phenomena such as the FFLO states in a dimensional crossover [@Kim:2012; @Sun:2013; @Heikkinen:2013]. It is also worth mentioning that, in the opposite limit, i.e. with a very loose trap in the third direction, a 3D gas with 1D periodic potential not only stabilizes the possible FFLO states but also enables the FFLO wavevector to lie skewed with respect to the potential [@Devreese:2011]. This paper is organized as follows. We start, in Sec. \[Sec-action\], with a MF approximation by calculating the saddle-point action of the system. Since fluctuations are not negligible in a 2D system, the fluctuation contribution is included in Sec. \[Sec-fluctuation\]. Based on these results we can proceed, in Sec. \[Sec-Omega\], to minimize the total thermodynamic potential and examine the phase diagram and possible phase transitions in Sec. \[Sec-phasediagram\]. For the sake of simplicity, in the present paper we focus on the FF state. 
Since it is commonly accepted that the LO state is usually more stable and energetically favorable than the FF state, stability of FF indicates stability of LO as well. We summarize the structure of the paper in the flowchart of Fig. \[flowchart\]. Throughout this paper we use the natural units with $\hbar=k_B=1$. Some notations are defined in the beginning of Appendix \[transformation\]. ![(Color online) Framework of the paper, where the yellow oblate indicates a use of an ansatz, the green rectangles indicate approximations, while orange diamonds imply different applications of the theory. The related sections and important equations and figures are indicated by underlined boldface font.[]{data-label="flowchart"}](flowchart.eps){width="0.78\columnwidth"} Theoretical Framework ===================== Saddle-Point Action {#Sec-action} ------------------- We assume a system of fermions with two species, namely, spin up ($\sigma=\uparrow$) and spin down ($\sigma=\downarrow$). Hamiltonian density in terms of the creation $\hat\psi^\dagger_\sigma(x)$ and annihilation operators $\hat\psi_\sigma(x)$ reads $$\hat H(x)=\sum_\sigma\hat\psi^\dagger_\sigma(x)(\hat\varepsilon-\mu_\sigma)\hat\psi_\sigma(x)-g\hat\psi^\dagger_\uparrow(x)\hat\psi^\dagger_\downarrow(x)\hat\psi_\downarrow(x)\hat\psi_\uparrow(x).$$ Here $\hat\varepsilon$ is the kinetic energy operator and $\mu_\sigma$ the chemical potential for spin $\sigma$ (from which we define $\mu=(\mu_\uparrow+\mu_\downarrow)/2$ and $h=(\mu_\uparrow-\mu_\downarrow)/2$ for later convenience), $g>0$ is the strength of the attractive contact interaction. By using the standard Hubbard-Stratonovich transformation (cf. Appendix \[transformation\]) with the auxiliary field operator $\hat\Delta$ coupled to $\hat\psi^\dagger_\uparrow\hat\psi^\dagger_\downarrow$, we can obtain the effective action $$\begin{aligned} \label{effaction} S_\mathrm{eff}=\mathcal{V}\sum_{iq_n,\mathbf{q}}\frac{|\hat\Delta(q)|^2}{g}-\mathrm{Tr}\ln[\beta\mathbf{G}^{-1}(k,k')],\end{aligned}$$ where $\mathcal{V}=V\beta$ with $V$ as the volume and $\beta$ as the inverse of temperature $T$, $k$ (as well as $q$) includes both the Matsubara frequency $ik_n$ and the vector space momentum $\mathbf{k}$, and the inverse of the Nambu propagator $\mathbf{G}^{-1}(k,k')$ is a $2\times2$ matrix in the Nambu space given by $$\left(\begin{array}{cc} (ik'_n-\epsilon_\mathbf{k'}+\mu_\uparrow)\delta_{k,k'} & \hat\Delta(k-k')\\ \hat\Delta^*(-k+k') & (ik'_n+\epsilon_\mathbf{k'}-\mu_\downarrow)\delta_{k,k'} \end{array}\right).$$ Here $\epsilon_\mathbf{k}$ is the kinetic energy of a particle with momentum $\mathbf{k}$, and $\mathrm{Tr}$ means the trace over the Nambu space, the momentum space, and the Matsubara frequencies. In the MF approximation, the field operator $\hat\Delta$ is replaced by its saddle-point value, namely the order parameter $\Delta_\mathrm{s}$, which satisfies $\partial S_\mathrm{eff}/\partial\Delta^*_\mathrm{s}=0$. In the case of balanced Fermi gases, the momenta of the paired fermions are equal in magnitude but with opposite directions, such that $\Delta_\mathrm{s}$ is a constant. However with imbalance, the pairs might have non-zero momenta, which results in the FF(LO) states. Here we examine the FF state with $\Delta_\mathrm{s}=\Delta_0e^{2i\mathbf{Q}\cdot\mathbf{x}}$. Its phase part can be absorbed by a momentum shift in the corresponding Fermi fields $\hat\psi_\sigma$ (cf. 
Appendix \[transformation\]), yielding $\tilde\Delta_\mathrm{s}=\Delta_0$ and a new Nambu propagator $\tilde{\mathbf{G}}_\mathrm{s}^{-1}(k,k')=\tilde{\mathbf{G}}_\mathrm{s}^{-1}(k)\delta_{k,k'}$ which is diagonal in momentum space, and $$\tilde{\mathbf{G}}_\mathrm{s}^{-1}(k)=\left(\begin{array}{cc} ik_n-\epsilon_\mathbf{Q+k}+\mu_\uparrow & \Delta_0\\ \Delta_0 & ik_n+\epsilon_\mathbf{Q-k}-\mu_\downarrow \end{array}\right),\label{shiftpropinv}$$ which can be straightforwardly inverted as $$\begin{aligned} \tilde{\mathbf{G}}_\mathrm{s}(k)=&\frac{1}{(ik_n-\epsilon_\mathbf{Q+k}+\mu_\uparrow)(ik_n+\epsilon_\mathbf{Q-k}-\mu_\downarrow)-\Delta_0^2}\nonumber\\ &\times\left(\begin{array}{cc} ik_n+\epsilon_\mathbf{Q-k}-\mu_\downarrow & -\Delta_0\\ -\Delta_0 & ik_n-\epsilon_\mathbf{Q+k}+\mu_\uparrow\end{array}\right).\end{aligned}$$ It is useful to note that the denominator of $\tilde{\mathbf{G}}_\mathrm{s}$ is simply $\mathrm{det}(\tilde{\mathbf{G}}_\mathrm{s}^{-1})=1/\mathrm{det}(\tilde{\mathbf{G}}_\mathrm{s})$. Substituting $\tilde\Delta_\mathrm{s}=\Delta_0$ and $\tilde{\mathbf{G}}_\mathrm{s}^{-1}$ into Eq. (\[saddle\]), we get the saddle-point action which reads, after Matsubara summation, $$\label{saddleaction} S_\mathrm{s}=\frac{\mathcal{V}\Delta_0^2}{g}-\sum_\mathbf{k}\{\ln[2\cosh(\beta E_\mathbf{Qk})+2\cosh(\beta h_\mathbf{Qk})]-\beta\xi_\mathbf{Qk}\},$$ where $\xi_\mathbf{Qk}=\frac{\mathbf{Q}^2+\mathbf{k}^2}{2m}-\mu$, $E_\mathbf{Qk}=\sqrt{\xi_\mathbf{Qk}^2+\Delta_0^2}$, and $h_\mathbf{Qk}=h-\frac{\mathbf{Q}\cdot\mathbf{k}}{m}$. Here a quadratic dispersion is assumed for concreteness. Fluctuations {#Sec-fluctuation} ------------ In order to go beyond the MF approximation, we introduce fluctuations to the order parameter. Conventionally, for the study of the 2D BKT phase transition, it is convenient to work with a phase fluctuation via $\Delta\rightarrow\Delta_0e^{i\theta(x)}$. More generally we could have $\Delta\rightarrow(\Delta_0+\eta(x))e^{i\theta(x)}$, where two real fields $\eta(x)$ and $\theta(x)$ represent the amplitude and the phase fluctuations, respectively. For the FF ansatz, we use $(\Delta_0+\eta(x))e^{2i\mathbf{Q}\cdot\mathbf{x}+i\theta(x)}$, such that $\theta(x)$ fluctuates around the phase of the FF saddle-point ansatz. Notice that while $\theta(x)$ is not necessarily small, its derivatives can be taken as small perturbative parameters since we can expect a smooth phase change of the order parameter in the space-time when $T$ is not very high and the fluctuation picture is valid. For this reason, it is more convenient to start the derivation in the coordinate space rather than in the momentum space. Also, in order to separate the perturbative part in $\mathbf{G}^{-1}$ more easily, we first apply a phase rotation to the Nambu basis to absorb the phase of $\Delta$ [@Diener:2008] by the transformation $$\label{gaugetransform} \hat\Psi(x)\rightarrow\tilde{\hat\Psi}(x)=U(x)\hat\Psi(x),$$ with $$U(x)=\left(\begin{array}{cc} e^{-i\mathbf{Q}\cdot\mathbf{x}-i\theta(x)/2} & 0\\ 0 & e^{i\mathbf{Q}\cdot\mathbf{x}+i\theta(x)/2} \end{array}\right).$$ This is a generalization of the momentum shift we used in Sec. \[Sec-action\] to get $\tilde{\mathbf{G}}_\mathrm{s}^{-1}$. Note that there is no mixing between the two fields of different species since $U$ is diagonal. 
Correspondingly $$\begin{aligned} \tilde{\mathbf{G}}^{-1}(x,x')&=U(x)\mathbf{G}^{-1}(x,x')U^\dagger(x')=\left(\begin{array}{cc} -\frac{i}{2}\partial_\tau\theta-\partial_\tau-\hat\varepsilon_{\mathbf{Q}+\frac{\nabla\theta}{2}}+\mu_\uparrow & \Delta_0+\eta(x)\\ \Delta_0+\eta(x) & \frac{i}{2}\partial_\tau\theta-\partial_\tau+\hat\varepsilon_{-\mathbf{Q}-\frac{\nabla\theta}{2}}-\mu_\downarrow \end{array}\right)\delta(x-x'),\end{aligned}$$ where $\hat\varepsilon_{\pm(\mathbf{Q}+\frac{\nabla\theta}{2})}$ means the momentum of the energy operator is shifted by $\pm(\mathbf{Q}+\frac{\nabla\theta}{2})$, e.g. $\hat\varepsilon_{\pm(\mathbf{Q}+\frac{\nabla\theta}{2})}f(\mathbf{k})=f(\mathbf{k})\epsilon_{\mathbf{k}\pm(\mathbf{Q}+\frac{\nabla\theta}{2})}$. Meanwhile, the order parameter becomes $\tilde\Delta(x)=\Delta_0+\eta(x)$ with the Fourier transform $$\label{deltaq} \tilde\Delta(q)=\Delta_0\delta_{q,0}+\eta(q).$$ Now we can separate out a perturbative matrix $\tilde{\mathbf{K}}$ from $\tilde{\mathbf{G}}^{-1}=\tilde{\mathbf{G}}^{-1}_\mathrm{s}+\tilde{\mathbf{K}}$ with $\eta$ and $\nabla\theta$ as small variables, where $$\tilde{\mathbf{G}}^{-1}_\mathrm{s}=\left(\begin{array}{cc} -\partial_\tau-\hat\varepsilon_\mathbf{Q}+\mu_\uparrow & \Delta_0\\ \Delta_0 & -\partial_\tau+\hat\varepsilon_{-\mathbf{Q}}-\mu_\downarrow \end{array}\right)\delta(x-x')$$ is the Fourier transform of Eq. (\[shiftpropinv\]), while $$\begin{aligned} \tilde{\mathbf{K}}(x,x')&=\left(\begin{array}{cc} -\frac{i}{2}\partial_\tau\theta-\hat\varepsilon_{\mathbf{Q}+\frac{\nabla\theta}{2}}+\hat\varepsilon_\mathbf{Q} & \eta(x)\\ \eta(x) & \frac{i}{2}\partial_\tau\theta+\hat\varepsilon_{-\mathbf{Q}-\frac{\nabla\theta}{2}}-\hat\varepsilon_{-\mathbf{Q}} \end{array}\right)\delta(x-x')\\ &=\left(\begin{array}{cc} -\frac{i}{2}\partial_\tau\theta+\frac{i}{2m}(\nabla\theta\cdot\nabla_\mathbf{Q}+\frac{1}{2}\nabla_\mathbf{Q}\cdot\nabla\theta)-\frac{(\nabla\theta)^2}{8m} & \eta(x)\\ \eta(x) & \frac{i}{2}\partial_\tau\theta+\frac{i}{2m}(\nabla\theta\cdot\nabla_\mathbf{-Q}+\frac{1}{2}\nabla_\mathbf{-Q}\cdot\nabla\theta)+\frac{(\nabla\theta)^2}{8m} \end{array}\right)\delta(x-x').\nonumber\end{aligned}$$ Here in the last line we separated the perturbative $\nabla\theta$ from the non-relativistic dispersion $\hat\varepsilon_{\pm(\mathbf{Q}+\frac{\nabla\theta}{2})}\equiv-\frac{\nabla_{\pm(\mathbf{Q}+\frac{\nabla\theta}{2})}^2}{2m}$. Note that our derivation was quite general until this point and most of it is equally valid, for example, in optical lattices with a different dispersion. From here on our formulae apply only in homogeneous space because of the specific quadratic dispersions. The Fourier transform of $\tilde{\mathbf{K}}(x,x')$ is $$\label{perturbationK} \tilde{\mathbf{K}}(k,k')=\sum_q\left[\eta(q)\sigma_1-\frac{q_n\theta(q)}{2}\sigma_3-\frac{i\theta(q)}{4m}(\mathbf{k}^2-\mathbf{k'}^2+3\mathbf{q}\cdot\mathbf{Q}\sigma_3)\right]\delta_{k-k',q}+\sum_{q,q'}\frac{\theta(q)\theta(q')\mathbf{q}\cdot\mathbf{q}'}{8m}\sigma_3\delta_{k-k',q+q'}\equiv\tilde{\mathbf{K}}_1+\tilde{\mathbf{K}}_2,$$ where the Pauli matrices $\sigma_1=\left(\begin{matrix}0&\ 1\\1&\ 0\end{matrix}\right)$ and $\sigma_3=\left(\begin{matrix}1&\ 0\\0&\ -1\end{matrix}\right)$ operating in the Nambu space were introduced to make expressions more compact. Besides, as $\mathbf{q}\theta(q)$ corresponds to $\nabla\theta$, in the Fourier transformation sense, and $q_n\theta(q)$ to $\partial_\tau\theta$, we take them as the small parameters of the same order as $\eta(q)$. Therefore in Eq. 
(\[perturbationK\]) the double-sum term labelled as $\tilde{\mathbf{K}}_2$ corresponds to the second-order perturbation, while the remaining part $\tilde{\mathbf{K}}_1$ is the first order perturbation. Now we can obtain the effective action by using $\tilde{\mathbf{G}}^{-1}(k,k')=\tilde{\mathbf{G}}^{-1}_\mathrm{s}(k)\delta_{k,k'}+\tilde{\mathbf{K}}(k,k')$, with $\tilde{\mathbf{G}}^{-1}_\mathrm{s}(k)$ from Eq. (\[shiftpropinv\]) and $\tilde{\mathbf{K}}(k,k')$ from Eq. (\[perturbationK\]), inserted into Eq. (\[effaction\]) together with $\tilde\Delta(q)$ from Eq. (\[deltaq\]). Subtracting the saddle-point action $S_\mathrm{s}=S_\mathrm{eff}(\tilde\Delta_\mathrm{s})=S_\mathrm{eff}(\Delta_0\delta_{q,0})$, we find the fluctuation action $$\begin{aligned} \label{sfl} S_\mathrm{fl}&=S_\mathrm{eff}(\Delta)-S_\mathrm{s}\\ &=\mathcal{V}\sum_q\frac{\Delta_0\delta_{q,0}\eta^*(q)+\Delta_0\delta_{q,0}\eta(q)+|\eta(q)|^2}{g}\nonumber\\ &\qquad\quad-\mathrm{Tr}\ln[1+\tilde{\mathbf{G}}_\mathrm{s}\tilde{\mathbf{K}}]\nonumber\\ &=\frac{2\mathcal{V}\Delta_0\eta(0)}{g}+\frac{\mathcal{V}\sum_q|\eta(q)|^2}{g}-\sum_{k}\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\tilde{\mathbf{K}}(k,k)\nonumber\\ &\quad\quad+\frac{1}{2}\sum_{k,k'}\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\tilde{\mathbf{K}}_1(k,k')\tilde{\mathbf{G}}_\mathrm{s}(k')\tilde{\mathbf{K}}_1(k',k)+\cdots,\nonumber\end{aligned}$$ where only terms up to the second order are kept. Note that $\tilde{\mathbf{K}}(k,k)=\eta(0)\sigma_1-\sum_q\frac{\theta(q)\theta(-q)\mathbf{q}^2}{8m}\sigma_3=\eta(0)\sigma_1-\sum_q\frac{|\theta(q)|^2\mathbf{q}^2}{8m}\sigma_3$, where the term linear in the perturbative fields is simply $\eta(0)\sigma_1$. With two perturbative fields $\eta$ and $\theta$, the saddle-point condition $(\partial S/\partial\Delta)_{\Delta=\Delta_\mathrm{s}}=0$ requires $\left(\frac{\partial S}{\partial\eta}\right)_{\theta=0}=0$ and $\left(\frac{\partial S}{\partial\theta}\right)_{\eta=0}=0$, where the total action $S=S_\mathrm{s}+S_\mathrm{fl}$. These ensure the vanishing of terms linear in $\eta$ and $\theta$ in the expansion of $S$. Since the linear term of $S_\mathrm{fl}$ is independent of $\theta$, one can obtain only one equation from $\eta$, i.e. constraint on the amplitude of the order parameter. By collecting the terms linear in $\eta$ from Eq. (\[sfl\]), we get $$\begin{aligned} &\frac{2\mathcal{V}\Delta_0\eta(0)}{g}-\sum_k\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\eta(0)\sigma_1\nonumber\\ &\qquad\qquad\qquad\qquad=2\eta(0)\Delta_0\left[\frac{\mathcal{V}}{g}+\sum_k\mathrm{det}\tilde{\mathbf{G}}_\mathrm{s}(k)\right],\nonumber\end{aligned}$$ so the saddle-point condition becomes $$\label{gapequation} \frac{\mathcal{V}}{g}+\sum_k\mathrm{det}\tilde{\mathbf{G}}_\mathrm{s}(k)=0.$$ This result is equivalent to the gap equation which we get by taking the partial derivative of the MF action $S_\mathrm{s}$ with respect to $\Delta_0$. On the other hand, the absence of $\theta$ in the linear expansion of the action means that the saddle-point condition is not enough to determine the phase of the order parameter. We attribute this to the special form of the FF ansatz. As both the FF vector and the phase fluctuation appear in the phase of the order parameter, $i[2\mathbf{Q\cdot x}+\theta(x)]$, it is always possible to redefine $Q$ by separating an arbitrary term linear in $\mathbf{x}$ from $\theta(x)$. This will cause some ambiguity when we determine $Q$, which is to be discussed in detail in Sec. \[Sec-Omega\]. 
After removing the linear terms according to Eq. (\[gapequation\]), we can rewrite Eq. (\[sfl\]) in the Gaussian form, $$\label{Gauss-fl-action} S_\mathrm{fl}=\frac{1}{2}\sum_q(\eta^*(q),\theta^*(q))\mathbf{D}\left(\begin{array}{c}\eta(q)\\\theta(q)\end{array}\right),$$ where $$\begin{aligned} \label{Dij} \mathbf{D}_{11}&=\frac{2\mathcal{V}}{g}+\sum_k\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\sigma_1\tilde{\mathbf{G}}_\mathrm{s}(k+q)\sigma_1,\nonumber\\ \mathbf{D}_{12}&=-\mathbf{D}_{21}=i\sum_k\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\mathbf{J}\tilde{\mathbf{G}}_\mathrm{s}(k+q)\sigma_1,\nonumber\\ \mathbf{D}_{22}&=\sum_k\left[\frac{\mathbf{q}^2}{4m}\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\sigma_3+\mathrm{tr}\tilde{\mathbf{G}}_\mathrm{s}(k)\mathbf{J}\tilde{\mathbf{G}}_\mathrm{s}(k+q)\mathbf{J}\right],\end{aligned}$$ and $$\mathbf{J}\equiv\frac{iq_n\sigma_3}{2}-\frac{\mathbf{(k+q)}^2-\mathbf{k}^2+3\mathbf{q}\cdot\mathbf{Q}\sigma_3}{4m}.$$ Eqs. (\[Gauss-fl-action\]) and (\[Dij\]) are generalizations of the results of Eq. (54) in Ref. [@Diener:2008] (we believe the results there were accidentally divided by two twice) to include the possibility of the FF state. Phase Fluctuation and Superfluid Density {#Sec-phasefl} ---------------------------------------- To study the BKT phase transition, it is customary to include only the phase fluctuation and therefore set $\eta=0$. As a result, we now focus only on $\mathbf{D}_{22}$ (cf. the form of Eq. (\[Gauss-fl-action\])). Its Matsubara summation is complicated, however, when the phase fluctuation is smooth enough the momentum $q$ can be taken as a small parameter. Since $\mathbf{D}_{22}$ vanishes at the low-frequency and long-wavelength limit, i.e. $iq_n\rightarrow0$ and $\mathbf{q}\rightarrow0$, we expand the fluctuation action Eq. (\[Gauss-fl-action\]) with only $\mathbf{D}_{22}\neq0$ and keep the leading (quadratic) order of $q$, and get an approximation for $S_\mathrm{fl}$ as $$\label{flaction} S_\mathrm{w}=\frac{\mathcal{V}}{2}\sum_q(\kappa q_n^2+\tilde\rho_{ij}q_iq_j)|\theta(q)|^2.$$ The expressions for $\kappa$ and $\tilde\rho_{ij}$ are (for an equivalent derivation based on the direct expansion of the saddle-point action, cf. Appendix. \[action-fl\]) $$\begin{aligned} \kappa=&\frac{1}{V}\sum_\mathbf{k}\frac{\Delta_0^2X_\mathbf{k}+\beta E_\mathbf{Qk}\xi_\mathbf{Qk}^2Y_\mathbf{k}}{4E_\mathbf{Qk}^3},\label{kappa}\\ \tilde\rho_{ij}=&\frac{1}{V}\sum_\mathbf{k}\left[\frac{\delta_{ij}}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k_i^2\delta_{ij}}{4m^2}\right.\nonumber\\&\left.\qquad\qquad-3Z_\mathbf{k}k_zQ\delta_{iz}\delta_{jz}\vphantom{\frac{1}{2}}\right]-\frac{9\kappa Q^2\delta_{iz}\delta_{jz}}{4m^2},\label{rho}\end{aligned}$$ where the direction of $\mathbf{Q}$ is chosen as the z-axis, and $$\begin{aligned} X_\mathbf{k}&\equiv\frac{\sinh(\beta E_\mathbf{Qk})}{\cosh(\beta E_\mathbf{Qk})+\cosh(\beta h_\mathbf{Qk})},\nonumber\\ Y_\mathbf{k}&\equiv\frac{1+\cosh(\beta E_\mathbf{Qk})\cosh(\beta h_\mathbf{Qk})}{[\cosh(\beta E_\mathbf{Qk})+\cosh(\beta h_\mathbf{Qk})]^2},\nonumber\end{aligned}$$ and $$Z_\mathbf{k}\equiv\frac{\beta\xi_\mathbf{Qk}}{4E_\mathbf{Qk}m^2}\frac{\sinh(\beta E_\mathbf{Qk})\sinh(\beta h_\mathbf{Qk})}{[\cosh(\beta E_\mathbf{Qk})+\cosh(\beta h_\mathbf{Qk})]^2}.$$ Definitions of $\xi_\mathbf{Qk}$, $E_\mathbf{Qk}$ and $h_\mathbf{Qk}$ were given after Eq. (\[saddleaction\]). 
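Since $\kappa$ and the stiffness tensor defined above are what enter the BKT criterion used below, a short numerical sketch may be useful. The following (plain Python/NumPy) evaluates $\kappa$, $\tilde\rho_{xx}$ and $\tilde\rho_{zz}$ from Eqs. (\[kappa\])-(\[rho\]) by direct 2D momentum integration, in the units adopted later in the paper ($\hbar=k_B=1$, $m=1/2$). The parameter values at the end are illustrative inputs only, not self-consistent solutions of the gap and number equations, and the grid sizes are chosen for speed rather than precision.

```python
# Numerical sketch of Eqs. (kappa) and (rho): kappa and the diagonal components
# rho_xx, rho_zz of the anisotropic superfluid-density tensor of the FF state.
# Units: hbar = k_B = 1, m = 1/2. The inputs below are illustrative only.
import numpy as np

m = 0.5

def kappa_rho(delta0, Q, mu, h, T, kmax=12.0, nk=800, nth=240):
    beta = 1.0 / T
    dk, dth = kmax / nk, np.pi / nth
    k = (np.arange(nk) + 0.5) * dk              # |k| midpoints
    th = (np.arange(nth) + 0.5) * dth           # angle between k and Q (z axis)
    K, TH = k[:, None], th[None, :]
    kz, kx = K * np.cos(TH), K * np.sin(TH)

    xi = (Q**2 + K**2) / (2 * m) - mu           # xi_Qk
    E = np.sqrt(xi**2 + delta0**2)              # E_Qk
    hQ = h - Q * kz / m                         # h_Qk

    # X_k, Y_k, Z_k evaluated with an overall exp(-M) rescaling so that the
    # cosh/sinh factors never overflow at low T or large |k|.
    a, b = beta * E, beta * hQ
    M = np.maximum(a, np.abs(b))
    chE = 0.5 * (np.exp(a - M) + np.exp(-a - M))   # cosh(beta E_Qk) e^{-M}
    chH = 0.5 * (np.exp(b - M) + np.exp(-b - M))   # cosh(beta h_Qk) e^{-M}
    shE = 0.5 * (np.exp(a - M) - np.exp(-a - M))   # sinh(beta E_Qk) e^{-M}
    shH = 0.5 * (np.exp(b - M) - np.exp(-b - M))   # sinh(beta h_Qk) e^{-M}
    den = chE + chH
    X = shE / den
    Y = (np.exp(-2 * M) + chE * chH) / den**2
    Z = beta * xi / (4 * E * m**2) * shE * shH / den**2

    # (1/V) sum_k -> int d^2k/(2 pi)^2 = (1/2 pi^2) int k dk int_0^pi dtheta
    def integrate(f):
        return (f * K).sum() * dk * dth / (2 * np.pi**2)

    kap = integrate((delta0**2 * X + beta * E * xi**2 * Y) / (4 * E**3))
    common = (1.0 - xi / E * X) / (4 * m)
    rho_xx = integrate(common - beta * Y * kx**2 / (4 * m**2))
    rho_zz = (integrate(common - beta * Y * kz**2 / (4 * m**2) - 3 * Z * kz * Q)
              - 9 * kap * Q**2 / (4 * m**2))
    return kap, rho_xx, rho_zz

# Illustrative evaluation (not a solution of the gap/number equations):
print(kappa_rho(delta0=0.5, Q=0.3, mu=1.0, h=0.6, T=0.05))
```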
$S_\mathrm{w}$ describes a Bose gas of spin waves with an anisotropic superfluid density tensor $\tilde\rho_{ij}$. As we know from the isotropic case where $\tilde\rho_{ij}=\rho_0\delta_{ij}$, the spin-wave contribution to the action has a spectrum $\omega_\mathrm{w}(\mathbf{q})=v_\mathrm{w}|\mathbf{q}|$ with the wave speed $v_\mathrm{w}=\sqrt{\rho_0/\kappa}$, and the thermodynamic potential $\Omega_\mathrm{w}=\frac{1}{\mathcal{V}}\sum_\mathbf{q}\ln[1-e^{-\beta\omega_\mathrm{w}(\mathbf{q})}]$ [@Botelho:2006]. The BKT transition temperature is determined by [@KT; @Nelson:1977] $$\label{BKT} T_\mathrm{BKT}=\frac{\pi}{2}\rho_0(T_\mathrm{BKT}).$$ For the FF state with diagonal but anisotropic superfluid density, $\tilde\rho_{ij}=\tilde\rho_{ii}\delta_{ij}$, we have a similar $\Omega_\mathrm{w}$ but with $\tilde\omega_\mathrm{w}(\mathbf{q})=\sqrt{\sum_i\tilde\rho_{ii}q_i^2/\kappa}$. However the relation in Eq. (\[BKT\]) is not directly applicable for anisotropic $\tilde\rho_{ij}$. Since the BKT criterion is based on a thermodynamic argument of energy and entropy [@KT], and in the diagonal but anisotropic case the energy associated with the vortices is proportional to the geometric mean of the diagonal elements in the superfluid density tensor, i.e. $\sqrt{\Pi_i\tilde{\rho}_{ii}}$ in 2D [@Williams:1998], it is natural to expect correspondingly $T_\mathrm{BKT}=\frac{\pi}{2}\sqrt{\Pi_i\tilde{\rho}_{ii}(T_\mathrm{BKT})}$. The interplay between the FF state and the BKT phase transition is one of the main interests of this paper. Thermodynamic Potential and Equations {#Sec-Omega} ------------------------------------- The total thermodynamic potential $\Omega=\Omega_\mathrm{s}+\Omega_\mathrm{w}=(S_\mathrm{s}+S_\mathrm{w})/\mathcal{V}$ is given by $$\begin{aligned} \Omega=&-\frac{1}{\mathcal{V}}\sum_\mathbf{k}\{\ln[2\cosh(\beta E_\mathbf{QK})+2\cosh(\beta h_\mathbf{QK})]-\beta\xi_\mathbf{QK}\}\nonumber\\ &+\frac{\Delta_0^2}{g}+\frac{1}{\mathcal{V}}\sum_\mathbf{q}\ln\left[1-e^{-\beta\tilde\omega_\mathrm{w}(\mathbf{q})}\right].\end{aligned}$$ In this expression the assumption of smooth and slowly varying phase fluctuation does not take into account the presence of vortices and antivortices. In general the phase fluctuations can be separated into the sum of a static vortex part and a spin-wave part [@Botelho:2006], but the vortices can be assumed to be relatively few in number when $T$ is not high. Although the vortex part might be relatively more important at very low temperatures, where the spin-wave part is suppressed but a vortex lattice can be formed, the vortex contribution to the number equations can still be (typically) small. Therefore, we choose to focus on the spin-wave fluctuations only. However, we emphasize that the effect of vortices is indeed included in the present study since the BKT transition temperature given by Eq. (\[BKT\]) is based on the proliferation of free vortices. At $T>T_\mathrm{BKT}$, the vortex contribution will become large, which indicates the collapse of the spin-wave description. From $\Omega$, we can obtain several equations (Eqs. (\[gapeq\])-(\[numbereqs\])) to solve. The gap equation $(\partial\Omega_\mathrm{s}/\partial\Delta_0)_{\mu,\beta,h,Q}=0$, without fluctuation contribution according to the saddle-point condition, $$\label{gapeq} \frac{2}{g}-\frac{1}{V}\sum_\mathbf{k}\frac{X_\mathbf{k}}{E_\mathbf{QK}}=0.$$ When $\Delta_0=0$, there is no need to consider $\mathbf{Q}$ which is in the phase of $\Delta$. 
With $\Delta_0\neq0$, a non-zero $\mathbf{Q}$ means the FF state. However, as shown in Sec. \[Sec-fluctuation\], the term linear in $\theta$ in the perturbative expansion of $S_\mathrm{fl}$ vanishes intrinsically. Therefore, the equation for $Q$ does not come directly from the saddle-point condition. In order to determine $Q$, there are two possible approaches. First, by taking the FF vector as the phase part of the order parameter, which could be treated the same as the amplitude part $\Delta_0$, we can still determine $Q$ directly from the saddle-point action in the same way as $\Delta_0$ is determined from the gap equation, i.e. $(\partial\Omega_\mathrm{s}/\partial Q)_{\beta,\mu,h,\Delta_0}=0$, or explicitly $$\label{Qeqs} \frac{1}{V}\sum_\mathbf{k}\left[\frac{Q}{m}-\frac{\sinh(\beta E_\mathbf{QK})\frac{\xi_\mathbf{Qk}Q}{E_\mathbf{QK}m}-\sinh(\beta h_\mathbf{QK})\frac{\mathbf{k\cdot Q}}{mQ}}{\cosh(\beta E_\mathbf{QK})+\cosh(\beta h_\mathbf{QK})}\right]=0.$$ Note that, although it does not conflict with the fact that the term linear in $\theta$ vanishes in the expansion of the fluctuation action, Eq. (\[Qeqs\]) might turn out to be trivial, if its left-hand side vanishes intrinsically as a special property of the FF state. Alternatively, as we cannot obtain the constraint of $Q$ from the saddle-point condition, it is reasonable to use the minimum of $\Omega$ rather than $\Omega_\mathrm{s}$ as the criterion for $Q$. Therefore we have $$\label{Qeq} (\partial\Omega/\partial Q)_{\beta,\mu,h,\Delta_0}=0.$$ Usually the first approach is much simpler and will be used throughout this paper, but Eq. (\[Qeq\]) will be discussed when necessary. In addition to these, we also have the number equations $$\label{numbereqs} n=-(\partial\Omega/\partial\mu)_{\beta,h,\Delta_0,Q},\ \ \delta n=-(\partial\Omega/\partial h)_{\beta,\mu,\Delta_0,Q},$$ where $n=n_\uparrow+n_\downarrow$ and $\delta n=n_\uparrow-n_\downarrow$ are the total particle density and the density difference, respectively. Note that the fluctuations affect the number equations. The partial derivatives are results of the standard thermodynamic relations $n=-(\partial\Omega/\partial\mu)_{\beta,h}$ and $\delta n=-(\partial\Omega/\partial h)_{\beta,\mu}$ expanded by using the chain rule and noting that the partial derivatives of $\Omega$ with respect to $\Delta_0$ or $Q$ vanish according to the saddle-point conditions. (In the way we have phrased the problem, $\partial\Omega_\mathrm{w}/\partial\Delta_0$ is not included in accordance with the saddle-point condition for the order parameter. For the partial derivative with respect to $Q$, if the constraint in Eq. (\[Qeqs\]) is used, then the same argument for $\partial\Omega_\mathrm{w}/\partial\Delta_0$ also applies to $\partial\Omega_\mathrm{w}/\partial Q$.) On the other hand, Diener [*et al*]{}. found that including more partial derivatives by forcing the gap equation to include the fluctuation term (referred to as the “self-consistent feedback of Gaussian fluctuation on the saddle point") will either violate the Goldstone’s theorem in the Cartesian representation (with fluctuation as $\Delta=\Delta_0+\eta$) or result in ultraviolet divergence in the polar representation (with $\Delta=\Delta_0e^{i\theta}$) [@Diener:2008]. Phase diagram of 2D Fermi gases {#Sec-phasediagram} =============================== For our aim to examine the BKT phase transition of an imbalanced system with the FF ansatz, we have to specify some details more concretely. 
As the BKT phase transition appears in 2D systems, the 2D contact-interaction coupling constant is renormalized like (see, e.g. Ref. [@Botelho:2006]) $$\frac{1}{g}=\frac{1}{V}\sum_\mathbf{k}\frac{1}{2\epsilon_\mathbf{k}+E_b},$$ where $E_b$ is the 2D binding energy (taken as positive) of a two-particle bound state, which can be related to the 2D $s$-wave scattering length $a_s=\hbar/\sqrt{mE_b}$. The two spacial dimensions will be denoted as $x$ and $z$, then $$\Omega_\mathrm{w}=\frac{1}{\mathcal{V}}\sum_\mathbf{q}\ln\left(1-e^{-\beta\sqrt{\frac{\tilde\rho_{xx}}{\kappa}q_x^2+\frac{\tilde\rho_{zz}}{\kappa}q_z^2}}\right)$$ with the explicit expressions $$\begin{aligned} \tilde\rho_{xx}=&\frac{1}{V}\sum_\mathbf{k}\left[\frac{1}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k_x^2}{4m^2}\right],\nonumber\\ \tilde\rho_{zz}=&\frac{1}{V}\sum_\mathbf{k}\left[\frac{1}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k_z^2}{4m^2}-3Z_\mathbf{k}k_zQ\right]\nonumber\\&-\frac{9\kappa Q^2}{4m^2}.\nonumber\end{aligned}$$ It turns out that in the continuum limit the 2D integral in $\Omega_\mathrm{w}$ can be carried out explicitly, with a result $$\Omega_\mathrm{w}=\frac{-\zeta(3)\kappa}{2\pi\beta^3\sqrt{\tilde\rho_{xx}\tilde\rho_{zz}}},$$ where $\zeta$ is the Riemann zeta function. It is clear that a meaningful spin-wave-like phase fluctuation requires that both $\tilde\rho_{xx}$ and $\tilde\rho_{zz}$ are positive ($\kappa$ is positive definite according to Eq. (\[kappa\])). This is quite natural since with negative superfluid density in either direction, the fluctuation in the corresponding mode can proliferate to decrease the energy of the system, such that any negative superfluid density results in the dynamical instability. We can solve Eqs. (\[gapeq\])-(\[numbereqs\]) self-consistently with given $T$, $E_b$, and $\delta n$ as input parameters. However, it is easier to calculate with fixed $h$, as we then do not need to solve the equation for $\delta n$. In the end it is simple, if required, to map the $h$-dependent results to the $\delta n$-dependent ones. For the numerical calculations we choose the particle mass as $m=1/2$ and the total particle density $n=1/2\pi$ such that the 2D Fermi energy $E_F=2\pi n/2m=1$. Without the FF State -------------------- As is known the FF(LO) state, if it exists, often occupies only a very narrow region of the parameter space. Therefore, we start the calculation with $Q=0$. In this case the angle dependence in momentum integrations can be removed, which reduces the numerical complexity. Then also $\tilde\rho_{ij}=\tilde\rho_0\delta_{ij}$ is isotropic, with $$\tilde\rho_0=\int\frac{kdk}{2\pi}\left[\frac{1}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k^2}{8m^2}\right]_{Q=0},$$ and $\Omega_\mathrm{w}$ reduces to $\Omega_\mathrm{w}=-\zeta(3)\kappa/(2\pi\beta^3\tilde\rho_0)$. Before proceeding to numerical calculations, we first clarify the phase structure qualitatively. The phase diagram is determined by the minimum of the thermodynamic potential. At high $T$ and small $E_b$, pairing is not favored, and the minimum of $\Omega$ lies at $\Delta=0$, which we refer as $\Omega_\mathrm{N}$, and the system is in the normal phase (NP). With decreasing $T$ or increasing $E_b$, the minimum is at non-zero $\Delta$, and the pairing sets in. 
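Before examining the small-$\Delta_0$ expansion considered next, it may help to see how the gap equation is actually solved once the coupling has been eliminated in favour of $E_b$. The sketch below combines Eq. (\[gapeq\]) at $Q=0$ with the renormalization relation above and solves for $\Delta_0$ by bisection. It is a minimal illustration only: $\mu$, $h$, $T$ and $E_b$ are chosen by hand rather than obtained from the number equations. As a sanity check, in the balanced low-temperature limit with $\mu=E_F-E_b/2$ it reproduces the standard 2D mean-field result $\Delta_0=\sqrt{2E_bE_F}$.

```python
# Minimal sketch: solving the Q = 0 gap equation, Eq. (gapeq), with the coupling
# eliminated through the 2D renormalization 1/g = (1/V) sum_k 1/(2 eps_k + E_b).
# Units as in the paper (m = 1/2, E_F = 1); mu, h, T, E_b are illustrative inputs.
import numpy as np

m = 0.5

def gap_residual(delta0, mu, h, T, E_b, kmax=60.0, nk=20000):
    beta = 1.0 / T
    k = (np.arange(nk) + 0.5) * (kmax / nk)
    eps = k**2 / (2 * m)
    xi = eps - mu
    E = np.sqrt(xi**2 + delta0**2)
    a, b = beta * E, beta * h                        # overflow-safe X_k at Q = 0
    M = np.maximum(a, abs(b))
    X = (np.exp(a - M) - np.exp(-a - M)) / (np.exp(a - M) + np.exp(-a - M)
                                            + np.exp(b - M) + np.exp(-b - M))
    integrand = X / E - 2.0 / (2 * eps + E_b)        # UV-convergent combination
    return (integrand * k).sum() * (kmax / nk) / (2 * np.pi)

def solve_gap(mu, h, T, E_b, lo=1e-4, hi=5.0):
    f_lo, f_hi = gap_residual(lo, mu, h, T, E_b), gap_residual(hi, mu, h, T, E_b)
    if f_lo * f_hi > 0:
        return 0.0                                   # only the trivial solution found
    for _ in range(60):                              # bisection
        mid = 0.5 * (lo + hi)
        if gap_residual(mid, mu, h, T, E_b) * f_lo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Balanced, low-T check against the textbook 2D mean-field result
# Delta_0 = sqrt(2 E_b E_F), obtained with mu = E_F - E_b/2:
E_b = 0.5
print(solve_gap(mu=1 - E_b / 2, h=0.0, T=0.02, E_b=E_b), np.sqrt(2 * E_b))
```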
At the MF level, the phase diagram can be qualitatively understood by a small-$\Delta_0$ expansion of $\Omega$ around the phase transition, $$\label{Omegaexp} \Omega=\Omega_\mathrm{N}+a\Delta_0^2+b\Delta_0^4+\mathcal{O}(\Delta_0^6),$$ where $a$ and $b$ are functions of the system parameters obtained as $a=\frac{1}{2}\frac{\partial^2\Omega}{\partial\Delta_0^2}\big|_{\Delta_0=0}$ and $b=\frac{1}{24}\frac{\partial^4\Omega}{\partial\Delta_0^4}\big|_{\Delta_0=0}$. ### Mean-Field Results First we consider the easier MF case by neglecting the fluctuations. In the balanced case, $$b=\int d^2k\left\{\frac{\mathrm{sech}^2(\beta\xi_\mathbf{Qk}/2)[\sinh(\beta\xi_\mathbf{Qk})-\beta\xi_\mathbf{Qk}]}{16\xi_\mathbf{Qk}^3}\right\}_{Q=0}$$ is positive definite. On the other hand, $a$ changes from positive to negative continuously with decreasing $T$ or increasing $E_b$. When $a>0$, the minimum is at $\Delta_0=0$, i.e. the normal state; while for $a<0$, the minimum starts to deviate from the normal state so that $\Delta_0\approx\sqrt{-a/2b}$. Such a phase transition into paired states takes place at $a=0$ and is continuous. The imbalanced case is more complicated as $b$ can become negative at large $h$. In this case higher order coefficients are positive and guarantee that the minimum of $\Omega$ is at finite $\Delta_0$. With negative $b$, if $a\leq0$, the gap equation has only one non-trivial solution corresponding to the global minimum of $\Omega_\mathrm{s}$, and all the particles are paired with non-zero $\Delta_0$ as the BCS state. In order to conform to usual terminology we call it simply the superfluid (SF) state or phase, although strictly speaking superfluidity implies non-zero superfluid density and phase coherence rather than just non-vanishing gap parameter. But if $a>0$, the gap equation may have two non-trivial solutions, with the smaller one corresponding to a local maximum and the larger one to a local minimum. If $b$ is sufficiently negative, this local minimum can be lower than $\Omega_\mathrm{N}$ and becomes the global minimum. This phase transition taking place at non-zero $\Delta_0$ is of first-order. Such a possibility begins at the point where both $a$ and $b$ vanish, i.e. the tricritical point [@Parish:2007]. The MF phase diagrams are shown in Fig. \[phasediagramMF\]. ![(Color online) Mean field phase diagrams as functions of $E_b$ and $T$ without the FF ansatz at $h=0.5$ and $h=1$, respectively. The phase boundaries plotted as solid curves correspond to first-order phase transitions, while the dashed curves correspond to continuous phase transitions. The tricritical point is indicated by a brown dot where three different phases meet, i.e. the normal phase (NP) (white), the phase separation (PS) region (between the red and the blue curves), and the superfluid (SF) phase. The contours show the values of superfluid density (in the units of total density, $n=1/2\pi$) in the PS region and the SF phase, which is positive definite and approaches $n/2$ as $T\rightarrow0$ in the SF phase, since then all particles are fully paired. In the PS region it can be larger than $n/2$ because the superfluid only takes part of the spatial volume. We emphasize that the superfluid density shown here is a MF result and its non-zero value does not necessarily mean superfluidity. The phase boundaries agree with the results in Ref. 
[@Tempere:2009].[]{data-label="phasediagramMF"}](MF10.eps){width="0.8\columnwidth"} When $T$ is below the tricritical value, there is a region of phase-separation (PS) where no solution satisfying both the gap and the number equations can be found. In fact, the NP and the SF phase coexist there. The ratio of particles in these two phases is constrained by the total number density. This PS region has one boundary with the pure NP where all the particles stay unpaired, and another boundary with the pure SF phase where all the particles are paired. Between these two boundaries, the two minima of $\Omega$ remain the same, as required by the phase equilibrium condition of both phases having the same pressure. In Fig. \[Omega\] the curves of $\Omega(\Delta_0)$ demonstrate these cases explicitly. Here we plot the total thermodynamic potential instead of the saddle-point value. It turns out that at the temperature $T=0.1$ the effect of fluctuations is so small that the contribution to $\Omega$ is very small (cf. the boundaries shown in Fig. \[phasediagramFL\]). In this sense Fig. \[Omega\] is a useful reference for both the present and the next subsections since the way to determine the boundaries of phase-separation region is the same with and without fluctuations. We emphasize that, although our qualitative discussion about the phase transition used small-$\Delta_0$ expansion, all the numerical results presented here and hereafter are based on full calculations of the thermodynamic potential for each case. ![(Color online) The thermodynamic potential $\Omega$ (with $\Omega_\mathrm{w}$ included) as functions of $\Delta_0$ at $h=0.5$ and $T=0.1$, with $E_b=0.2$ (red) for the NP with only one minimum at $\Delta_0=0$; $E_b=0.24$ (orange) for the NP with an unstable local minimum at $\Delta_0\approx0.68$; $E_b\approx0.26$ (green) for the NP-PS boundary where two minima $\Omega_\mathrm{N}$ and $\Omega(\Delta_0\approx0.73)$ are equal; $E_b\approx0.29$ (blue) for the PS-SF boundary with two equal minima $\Omega_\mathrm{N}$ and $\Omega(\Delta_0\approx0.74)$; $E_b=0.35$ (purple) for the SF phase with $\Delta_0\approx0.82$ while $\Omega_\mathrm{N}$ becomes a local minimum, and $E_b=0.6$ (black) for the SF phase with $\Delta_0\approx1.1$ as the only one minimum. Note that when all the particles are in the NP, $\mu$ is completely determined by $T$ and $h$ (here $\mu\approx1.0$), consequently $\Omega_\mathrm{N}$ is constant.[]{data-label="Omega"}](Omega.eps){width="0.8\columnwidth"} ### Including Fluctuations The above arguments are qualitatively valid when contributions from the phase fluctuations are included. Obviously, the phase fluctuations in the order parameter should not change $\Omega_\mathrm{N}$ as $\rho$ and $\Omega_\mathrm{w}$ vanish in the NP. However for the NP-PS boundary, the inclusion of the phase fluctuations for the paired states may cause a history-dependent behavior: the boundary depends on from which phase the system approaches it. Because the existence of pairs is the premise of the phase fluctuation (in our model that focuses on phase, not amplitude fluctuations), if the system starts from the NP side, all particles are unpaired such that no contribution from fluctuation should be included, and the boundary condition is $\Omega_\mathrm{N}=\Omega_\mathrm{s}(\Delta_0)$, which is exactly the MF case. However, if the boundary is approached from the PS region, the fluctuation contribution to the SF state is present since the pairs already exist. 
Then the equilibrium requires $\Omega_\mathrm{N}=\Omega(\Delta_0)$. Whether or not we include $\Omega_\mathrm{w}$ gives rise to different NP-PS boundary. Because $\Omega_\mathrm{w}$ is negative definite, $\Omega(\Delta_0)<\Omega_\mathrm{s}(\Delta_0)$, the NP-PS boundary obtained by $\Omega_\mathrm{N}=\Omega(\Delta_0)$ lies at smaller $E_b$ or higher $T$ compared to the MF case, and the difference increases at larger $T$ as $\Omega_\mathrm{w}$ becomes more significant. As all the pairs break up across the boundary, the disappearance of fluctuation contribution results in a sudden increase from $\Omega(\Delta_0)$ to $\Omega_\mathrm{s}(\Delta_0)$. However, if the fluctuation contribution to the thermodynamic potential happens to be positive (distinct from the spin-wave-like fluctuation which is negative definite), there will not be such history dependence. In that case the fluctuation makes the paired state less favored and the boundary is always obtained by $\Omega_\mathrm{N}=\Omega(\Delta_0)$, which lies at larger $E_b$ or lower $T$. On the other hand, the PS-SF boundary is independent of how it is approached, unless there is some contributions to $\Omega_\mathrm{N}$ which changes discontinuously across this boundary. Theoretically such a sudden change of $\Omega$ across the boundary would be quite general even if we were to consider interaction effects more carefully. Because the order parameter changes discontinuously in the first-order phase transition, the change of fluctuation contributions in one phase is also discontinuous across the boundary. It would be unlikely that this discontinuity could be exactly compensated by contributions of the other phase which is continuous across the boundary (e.g. the normal state continuing from NP to PS). Strictly speaking, in the NP amplitude fluctuations might result in pairs which would be associated with the phase fluctuations. The possibility to create pairs due to the fluctuations increases dramatically as the boundary is approached because the difference between the two local minima of $\Omega$ decreases to zero. This effect is even stronger at high temperatures. Therefore, we expect the NP-PS boundary should be determined by $\Omega_\mathrm{N}=\Omega(\Delta_0)$. However, because here we only focus on the phase fluctuations for the study of the BKT mechanism, the amplitude fluctuations are beyond the scope of this paper. Furthermore, a better treatment of the normal states including the effects of interactions will certainly modify the NP-PS and the NP-SF boundaries as well. In strongly interacting systems, proper description of the normal state can be non-trivial and various Fermi-liquid, pseudogap, etc. approaches have been developed. Such a more elaborate description of the normal phase might remove the history-dependent behavior discussed above. The phase diagrams including the fluctuations are shown in Fig. \[phasediagramFL\]. As is clear the effect of fluctuations is significant compared to the MF results. At high $T$, a considerable region where paired states could exist in the MF case turns into pure NP due to the fact that the number equations could not be simultaneously satisfied. This region expands with increasing temperature as the fluctuations become large. Consequently, the SF phase sets in with non-zero $\Delta_0$, as can be seen from the color scales in Fig. \[phasediagramFL\], thus the NP-SF phase transition becomes of first order, but note that there is no phase-coexistence at this first-order phase transition. 
Most interestingly, the tricritical point does not exist any longer. Instead, the PS ends with a region where we could not find any solution satisfying the equilibrium condition. Furthermore, we found the NP-PS and the PS-SF boundaries can overlap if $h$ is small. This means that, with the same $T$ and $E_b$, there can be two sets of solutions to Eq. (\[gapeq\]) and the phase equilibrium condition. One solution corresponds to the number constraint Eq. (\[numbereqs\]) satisfied in the NP, while the other to the number equation satisfied in the SF phase. We attribute the disappearance of the tricritical point to different fluctuation contributions to the coexisting phases. This result is distinct from the 3D case [@Parish:2007], where the tricritical point would play an important role in the phase diagram even at non-zero temperatures. In addition to the dimensionality, the main difference is that the fluctuations used in Ref. [@Parish:2007] were of the Nozi[è]{}res-Schmitt-Rink (NSR) form, which considers the pair fluctuations on the second-order phase boundary where $\Delta_0$ is small. However on the first-order boundaries, where the order parameter changes discontinuously, the NSR fluctuation is not suitable. In general, the NSR form is applicable when fluctuations are small. In this respect, the 2D and the 3D systems are different. The NSR fluctuation is widely used in 3D cases where the fluctuation is relatively weak, but the phase fluctuations which affect the first-order phase transition become much more important for the 2D cases. ![(Color online) Phase diagrams including the fluctuations. The MF phase boundaries (thin curves) and the tricritical points are shown for comparison. The new PS-SF boundaries (thick blue) show the strong effect of fluctuations. The difference between thick and thin red curves shows the history-dependence of the NP-PS boundary. The PS region does not end with a tricritical point but with a region where no solution can be found to satisfy the equilibrium condition, as indicated by pink dotted lines. Besides the solid curves corresponding to first-order phase transitions and the dashed curves to continuous phase transitions, the black dot-dashed curves correspond to the topological BKT phase transition with $T_\mathrm{BKT}$ obtained by Eq. (\[BKT\]). The curves of $T_\mathrm{BKT}$ bend in the PS region as the corresponding superfluid density increases in the superfluid portion. The colored region shows the values of non-zero order parameter $\Delta_0$, but only the part below $T_\mathrm{BKT}$ can be taken as a superfluid, while the remaining part is the pseudogap phase where no phase coherence exists and the superfluid density vanishes in accordance with the BKT mechanism. The first-order NP-SF phase boundaries are not quite smooth because of the numerical difficulty to find the exact locations of this phase transition.[]{data-label="phasediagramFL"}](FL10D.eps){width="1\columnwidth"} With the FF State ----------------- Now we consider the FF state by turning on $Q$ as a free parameter. The previous case without including the FF state will be referred to as the non-FF case for the sake of simplicity. We can discuss the problem qualitatively as before by adding to Eq. (\[Omegaexp\]) the spatial variation of $\Delta$ as $$\label{OmegaexpFF} \Omega=\Omega_\mathrm{N}+a|\Delta|^2+b|\Delta|^4+c|\nabla\Delta|^2+d|\nabla\Delta|^4+\cdots,$$ where the expansion is up to quartic order, though even higher order expansion is possible [@Combescot:2002]. 
With the FF ansatz $\Delta_0e^{2i\mathbf{Q}\cdot\mathbf{x}}$, the new terms correspond to an expansion in $Q$. The quadratic term $c|\nabla\Delta|^2$ plays the role of the kinetic energy of the pairs. Similar to the non-FF case, the signs of $c$ and $d$ determine the minimum of $\Omega$ along the $Q$ axis. However, as now $\Omega$ depends on both $\Delta_0$ and $Q$, a simple discussion with only one parameter is not enough. Furthermore, we find numerically that the coefficient $c$ of the total thermodynamic potential is always positive in the low temperature range of interest, which means that, unlike a 3D mass-imbalanced system [@Gubbels:2009], in the present system there is no Lifshitz point. Consequently, it is impossible to have the FF state starting from $Q=0$ and a complete calculation with $Q$ as a free parameter is necessary. We will start with the simpler case at zero temperature and then continue to the finite temperature case. ### Zero Temperature Limit Zero temperature limit, although impossible to be realized experimentally, provides clear physical insight and useful limiting behavior at low temperatures, since many calculations can be carried out analytically. At $T=0$, $\Omega_\mathrm{w}$ vanishes and $\Omega$ reduces to $$\begin{aligned} \Omega^{T0}=&\frac{\Delta_0^2}{g}-\frac{1}{V}\sum_\mathbf{k}[\mathrm{Max}(E_\mathbf{QK},|h_\mathbf{QK}|)-\xi_\mathbf{QK}]\nonumber\\ =&\int\frac{kdk}{2\pi}\left(\frac{\Delta_0^2}{2\epsilon_\mathbf{k}+E_b}-E_\mathbf{QK}+\xi_\mathbf{QK}\right)\nonumber\\ &+\int\frac{kdk}{2\pi}\left(\int_0^{\theta_1}+\int_{\theta_2}^\pi\right)\frac{d\theta}{\pi}(E_\mathbf{QK}-|h_\mathbf{QK}|),\end{aligned}$$ where $\theta_{1,2}=\Re\left[\arccos\left(\frac{m(h\pm E_\mathbf{QK})}{kQ}\right)\right]$ such that $|h_\mathbf{QK}|>E_\mathbf{QK}$ is satisfied within the ranges $[0,\theta_1)$ and $(\theta_2,\pi]$. Here $\Re$ means taking the real part. It is easy to find that, as $Q\rightarrow0$, $\theta_1\rightarrow0$ and $\theta_2\rightarrow\pi\Theta(E_k-h)$ with $E_k=\sqrt{\xi_k^2+\Delta_0^2}$, $\xi_k=\frac{k^2}{2m}-\mu$ and $\Theta$ being the Heaviside step function. The first integral, being angle-independent and analytically integrable, integrates to $\frac{m}{8\pi}\left[2\Delta_0^2\ln\frac{\xi_Q+E_Q}{E_b}-\left(\xi_Q-E_Q\right)^2\right]$. As $Q\rightarrow0$, the non-FF expression for $\Omega^{T0}$ is consistent with the result in Ref. [@Tempere:2007]. The phase diagram is determined by the global minimum of $\Omega^{T0}$ in the $\Delta_0$-$Q$ plane. There can be three different local minima, namely $\Omega_\mathrm{N}$ of the normal state with $\Delta_0=0$, $\Omega_\mathrm{SF}$ of the paired state with $\Delta_0\neq0$ but $Q=0$, and $\Omega_\mathrm{FF}$ of the FF state with both $\Delta_0$ and $Q$ non-zero, each of which can become the global minimum depending on the parameters. Coexistence is possible between the SF and the FF phases, as well as between the SF phase and the NP. Such coexistence is not possible between the NP and the FF phase since in the cases we have studied $\Omega_\mathrm{FF}$ is always lower than $\Omega_\mathrm{N}$ when the FF state exists. This issue has been discussed more extensively in Ref. [@Conduit:2008]. Fig. \[OmegaContour\] shows various examples of the contour plots of the thermodynamic potential $\Omega^{T0}$. It should be noted that the FF state sets in with infinitesimal $\Delta_0$ but non-vanishing $Q$. 
However, any state with $\Delta_0=0$ should be taken as the normal state since a non-zero $Q$ has no contribution when $\Delta_0=0$. Therefore, the corresponding NP-FF phase transition is still continuous, which is different from the non-FF case and not associated with a negative coefficient $c$ in Eq. (\[OmegaexpFF\]). The complete phase diagram at $T=0$ is shown in Fig. \[phasediagramT0\], from which we see the FF state exists in a horn-shaped area and gives way to the normal state when $h$ or $E_b$ becomes large, resulting in two parts of the PS region: one as the coexistence of the FF and the SF phases (PS$_\mathrm{F}$) at smaller $h$ and $E_b$, and the other of the NP and the SF phase (PS$_\mathrm{N}$). This phase diagram can be taken as the generalization of the previous results of 2D imbalanced Fermi gases in homogeneous case [@He:2008] or in lattices [@Kujawa-Cichy:2011]. While these studies did not consider FFLO states, they found similar phase boundaries as we do in Fig. \[phasediagramT0\] with their partially polarized phases replaced by our FF phases at small imbalance. ![(Color online) Contour plots of $\Omega^{T0}$ at $h=0.8$ for various $E_b$ corresponding to different phases at $T=0$. (a) $E_b=0.3$ (NP); (b) $E_b=0.5$ (FF); (c) $E_b=0.6$ (PS$_\mathrm{F}$); (d) $E_b=0.7$ (PS$_\mathrm{N}$); (e) $E_b=0.9$ (SF). For the acronyms of the phases see the text. All the axes have the same scale as in (e). The global minima are indicated by red dots. Similar results have been shown in Ref. [@Conduit:2008]. The pit close to the $Q$-axis in (a) indicates the emergence of a FF state in (b) at finite $Q$. The contour labels are the values of $\Omega^{T0}$ divided by $n=1/2\pi$, i.e. the thermodynamic potential per particle.[]{data-label="OmegaContour"}](OmegaContour_e.eps){width="0.59\columnwidth"} ![(Color online) Phase diagram of a 2D imbalanced Fermi gas at $T=0$. Here NP is the normal phase, PS$_\mathrm{F}$ is the phase separation region with the FF and the superfluid (SF) states coexisting, while in PS$_\mathrm{N}$ the SF phase and the NP coexist. The phase boundaries plotted as solid curves correspond to first-order phase transitions, while the dashed curves correspond to continuous phase transitions. The colors in the FF-related regions show the values of $Q$ of the FF states.[]{data-label="phasediagramT0"}](T0Q.eps){width="1\columnwidth"} The superfluid density at $T=0$ can also be calculated. It is important to note that $$\begin{aligned} \tilde{\rho}_{xx}^{T0}&=\frac{n}{4m}-\int\frac{kdk}{2\pi}\frac{k(\sin\theta_2+\sin\theta_1)}{4mQ\pi}\equiv\frac{\partial_Q\Omega^{T0}}{4Q},\nonumber\end{aligned}$$ which means that the FF state whose $Q$ satisfies $\partial_Q\Omega^{T0}=0$ always has a superfluid density tensor with a vanishing component along the direction perpendicular to the FF vector. This property of the transverse superfluid density (stiffness) has been pointed out in Refs.[@Radzihovsky:2009; @Radzihovsky:2011] based on the GL theory and a symmetry argument. It means that there is no energy cost to generate fluctuations along the $x$ direction, which can be understood from the divergence of $\Omega_\mathrm{w}$ as $\tilde{\rho}_{xx}=0$ in the denominator. It is not a serious problem at $T=0$ as the thermal fluctuation is not considered, however a vanishing or small superfluid density will cause difficulties when we use the spin-wave description of the phase fluctuations at finite temperatures. 
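Before moving to finite temperature, a minimal numerical sketch of how the zero-temperature landscape is scanned may be useful. It evaluates $\Omega^{T0}(\Delta_0,Q)$ directly from the momentum-space expression quoted above and classifies the global minimum as NP, SF or FF. Units with $m=1$ are assumed, $\mu$ is treated as a fixed illustrative input rather than being fixed by the number equation, the example parameter values and grid ranges are arbitrary, and the phase-separated regions (which require the equilibrium construction) are not resolved here.

``` python
import numpy as np

def omega_T0(delta0, Q, mu, h, E_b, m=1.0, kmax=30.0, nk=600, nth=200):
    """Zero-temperature Omega^{T0}(Delta_0, Q) from the formula quoted above."""
    k = np.linspace(1e-4, kmax, nk)
    dk = k[1] - k[0]
    th = np.linspace(0.0, np.pi, nth)
    eps_k = k**2 / (2.0*m)
    xi_Qk = (k**2 + Q**2) / (2.0*m) - mu                 # angle independent
    E_Qk = np.sqrt(xi_Qk**2 + delta0**2)
    h_Qk = h - Q * np.outer(k, np.cos(th)) / m           # shape (nk, nth)
    # angular average (1/pi) * integral_0^pi dtheta Max(E_Qk, |h_Qk|)
    avg_max = np.maximum(E_Qk[:, None], np.abs(h_Qk)).mean(axis=1)
    integrand = delta0**2 / (2.0*eps_k + E_b) + xi_Qk - avg_max
    return np.sum(k * integrand) * dk / (2.0*np.pi)

def classify(mu, h, E_b):
    """Global minimum of Omega^{T0} on a (Delta_0, Q) grid -> NP / SF / FF."""
    d_grid = np.linspace(0.0, 1.5, 25)
    q_grid = np.linspace(0.0, 1.0, 15)
    vals = np.array([[omega_T0(d, q, mu, h, E_b) for q in q_grid]
                     for d in d_grid])
    i, j = np.unravel_index(np.argmin(vals), vals.shape)
    d0, q0 = d_grid[i], q_grid[j]
    phase = "NP" if d0 == 0.0 else ("SF" if q0 == 0.0 else "FF")
    return phase, d0, q0

# Illustrative parameters only (mu is NOT fixed by the number equation here):
print(classify(mu=0.4, h=0.8, E_b=0.5))
```

The contour plots in Fig. \[OmegaContour\] show the same kind of landscape, with the particle number additionally constrained.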
### Finite Temperature Similar to the non-FF case, we first present the MF results with the FF ansatz in Fig. \[MFFF\], which can be taken as finite-temperature extensions of the results of Fig. \[phasediagramT0\]. Compared to the non-FF cases, the PS$_\mathrm{F}$ regions are shifted a bit towards the SF-phase side and also shrink slightly. On the MF level, the range of $E_b$ with the possibility of FF state shrinks smoothly with increasing $T$, and at higher temperatures the FF states can survive around the FF-PS$_\mathrm{F}$ boundaries, where the peaks of $\Delta_0$ are located (but in general the values of $Q$ increase with $E_b$). However, in order to draw more reliable conclusions we must include the fluctuations for such a 2D system. ![image](05.eps){width="0.67\columnwidth"} ![image](05D.eps){width="0.67\columnwidth"} ![image](05Q.eps){width="0.67\columnwidth"} ![image](08.eps){width="0.67\columnwidth"} ![image](08D.eps){width="0.67\columnwidth"} ![image](08Q.eps){width="0.67\columnwidth"} Being aware of the vanishing transverse superfluid density at $T=0$, we first check the behavior of the superfluid density $\tilde{\rho}$ at non-zero temperatures, which has a significant effect on the phase fluctuations. Fig. \[kappa-rhoFF\] shows the $T$-dependence of $\tilde{\rho}_{xx}$ and $\tilde{\rho}_{zz}$, as well as $\kappa$ given in Eq. (\[kappa\]), of a FF state, where we see that $\tilde{\rho}_{xx}$ always vanishes and $\tilde{\rho}_{zz}$ can become negative at high $T$, while $\kappa$ is positive definite. In fact, we find numerically that the relation $\tilde{\rho}_{xx}=\frac{\partial_Q\Omega_\mathrm{s}}{4Q}$ is still true at finite temperature, such that the FF state always has divergent transverse fluctuations if the FF vector is determined by Eq. (\[Qeqs\]), i.e. the condition $(\partial\Omega_\mathrm{s}/\partial Q)_{\beta,\mu,h,\Delta_0}=0$. It is interesting to explore this property from another angle. Starting with the vanishing transverse superfluid density of the FF ansatz, which may be argued based on symmetry, the relation $\partial_Q\Omega_\mathrm{s}=4Q\tilde{\rho}_{xx}$ means that the left-hand side of Eq. (\[Qeqs\]) vanishes identically for a FF state. Then the absence of terms linear in $\theta$ in the expansion of $S_\mathrm{fl}$ is a natural consequence: Since the transverse fluctuations are unconstrained, it is physically justified to be unable to determine the value of $Q$ from the saddle-point condition. Therefore, all these special properties of the FF ansatz are connected. In this respect, we try to determine $Q$ by Eq. (\[Qeq\]) rather than (\[Qeqs\]), and the results are shown in Fig. \[kappa-rhoFF-PSFfullQ\]. While $\tilde{\rho}_{xx}$ is then non-zero it is still quite small. As a rough estimate, if we take $\tilde{\rho}_{xx}$ depending on $T$ linearly at low temperatures, where the non-zero $\tilde{\rho}_{zz}$ and $\kappa$ change very little, then $\Omega_\mathrm{w}\propto\frac{T^3\kappa}{\sqrt{\tilde{\rho}_{xx}\tilde{\rho}_{zz}}}$ is approximately proportional to $T^{5/2}$, which vanishes at $T=0$ but increases very fast with $T$. Such an increase is significant also due to the small coefficient of the proportionality $\tilde{\rho}_{xx}(T)\sim1.5\times10^{-2}T$. For this reason we do not expect this different approach to change our conclusions considerably. ![$\kappa$, $\tilde{\rho}_{xx}$ and $\tilde{\rho}_{zz}$ of the FF state as functions of $T$. The parameters are chosen from the FF-PS$_\mathrm{F}$ boundary at $T=0$. 
$T$ ranges from $0$ to where the FF state can still be found.[]{data-label="kappa-rhoFF"}](kappa-rhoFF-PSF.eps){width="0.8\columnwidth"} ![The same as Fig. \[kappa-rhoFF\] but with $Q$ determined by Eq. (\[Qeq\]). The transverse superfluid density $\tilde{\rho}_{xx}$ is magnified $100$ times for the sake of clarity.[]{data-label="kappa-rhoFF-PSFfullQ"}](kappa-rhoFF-PSFfullQ.eps){width="0.8\columnwidth"} Because of the divergent fluctuations, it is impossible to determine part of the finite temperature phase diagram where the fluctuation contributions to the FF state should be included, such as the NP-FF and the FF-PS$_\mathrm{F}$ boundaries. However, the PS$_\mathrm{F}$-SF boundary does not have such a numerical difficulty since the FF state is empty and its phase fluctuations do not need to be included. Despite the incompleteness, we still present our results in Fig. \[phasediagramQ\] for various $h$. Different from the non-FF case, the PS-SF boundary now starts with a PS$_\mathrm{F}$-SF segment at low temperatures, which lies to the right of the corresponding PS-SF boundary in the non-FF case. Then this PS$_\mathrm{F}$-SF segment gradually approaches the latter, and finally merges into it as the FF state gives way to the normal state. We find that for $h=0.2,0.3$, and $0.4$, the PS$_\mathrm{F}$-SF boundaries extend above the corresponding $T_\mathrm{BKT}$ obtained in the isotropic non-FF case. This suggests that the effect of anisotropic superfluidity might be relevant to the BKT mechanism. It should be pointed out that in the PS$_\mathrm{F}$ region there are two superfluid densities associated with the FF ($\tilde{\rho}^\mathrm{FF}$) and the SF phases ($\tilde{\rho}^\mathrm{SF}$). Correspondingly there are two critical temperatures $T_\mathrm{BKT}^\mathrm{FF}$ and $T_\mathrm{BKT}^\mathrm{SF}$, respectively. Here $\tilde{\rho}^\mathrm{SF}$ is isotropic and qualitatively the same as the non-FF case, while for $\tilde{\rho}^\mathrm{FF}$ the criterion should be $T_\mathrm{BKT}^\mathrm{FF}=\frac{\pi}{2}\sqrt{\tilde{\rho}_{xx}^\mathrm{FF}(T_\mathrm{BKT}^\mathrm{FF})\tilde{\rho}_{zz}^\mathrm{FF}(T_\mathrm{BKT}^\mathrm{FF})}$. Since $\tilde{\rho}_{xx}^\mathrm{FF}=0$, $T_\mathrm{BKT}^\mathrm{FF}$ would be zero (or almost zero, if there can be some mechanisms to suppress the marginally divergent fluctuation of the FF state, e.g. finite-size effects or broken symmetries). Even if $Q$ is determined from the full thermodynamic potential, see Fig. \[kappa-rhoFF-PSFfullQ\], we can estimate $T_\mathrm{BKT}^\mathrm{FF}$ to be less than $10^{-3}$. ![image](phasediagramQ01.eps){width="0.73\columnwidth"} ![image](phasediagramQ02.eps){width="0.66\columnwidth"} ![image](phasediagramQ03.eps){width="0.66\columnwidth"} ![image](phasediagramQ04.eps){width="0.72\columnwidth"} ![image](phasediagramQ05.eps){width="0.67\columnwidth"} ![image](phasediagramQ06.eps){width="0.66\columnwidth"} Because of the very strong phase fluctuations from the vanishing $\tilde{\rho}_{xx}^\mathrm{FF}$, we expect the FF state to be destroyed at very low $T$. Since above $T_\mathrm{BKT}$ any quasi-long-range phase coherence could not survive, a constant $Q$ in the phase of a plane-wave ansatz characterizing the FF state is not consistent with the BKT mechanism. Consequently it is likely that the FF-PS$_\mathrm{F}$ boundary will probably be replaced by the NP-PS boundary of the non-FF case. 
This is supported by the fact that the difference between $\Omega_\mathrm{FF}$ and $\Omega_\mathrm{N}$ is quite small, and the order parameter $\Delta_0$ of the FF state ($\Delta_\mathrm{FF}$) is not very large. Therefore, it is reasonable to expect that the FF state will be easily replaced by the normal state when the fluctuations are strong. Meanwhile, the isotropic SF phase will behave similarly to the non-FF case, and $T_\mathrm{BKT}^\mathrm{SF}$ should behave as $T_\mathrm{BKT}$ in Fig. \[phasediagramFL\]. Then the behavior of the system will be the same as in the non-FF case. Strictly speaking, $T_\mathrm{BKT}$ sets a threshold for the fluctuations above which the assumption of smooth fluctuations of a spin-wave form, or equivalently the small-$q$ expansion of $S_\mathrm{fl}$, becomes invalid due to the proliferation of free vortices, which destroy the (quasi-) long-range order and the phase coherence. As predicted by Nelson and Kosterlitz [@Nelson:1977], in the isotropic case the superfluid density jumps from $\rho=2T_\mathrm{BKT}/\pi$ to zero as $T$ crosses $T_\mathrm{BKT}$ from below. This has been observed experimentally in 2D $^4$He films [@Bishop:1978] and recently also in cold Bose gases [@Noh:2013]. We expect this to hold also for the anisotropic FF state. On this account, above $T_\mathrm{BKT}$ the action in Eq. (\[flaction\]) is no longer of the spin-wave form. Since the FF state is unstable at finite temperature due to the fluctuations, the PS$_\mathrm{F}$-SF boundaries shown in Fig. \[phasediagramQ\] are also vulnerable; consequently, the region between the PS$_\mathrm{F}$-SF boundary (thick green curves) and the PS-SF boundary (thick blue curves) in the non-FF case would be affected by the instability. On the PS$_\mathrm{F}$ side of the PS$_\mathrm{F}$-SF boundary, where particles start to occupy the FF state, the fluctuations at finite temperatures will destroy the FF state, and a new equilibrium between the NP and the SF phase is established instead. As a final remark, once the BKT mechanism is taken into account, our results with fluctuations above $T_\mathrm{BKT}$ shown in the phase diagrams are not quantitatively reliable, because the small-$q$ expansion breaks down there, although qualitatively they still provide useful information. Our calculations already show that the regions of the phase diagrams with paired states are reduced significantly from the MF results at high $T$ due to fluctuations. In order to draw more quantitative conclusions, a more complete calculation of the thermal fluctuations at higher temperatures is required, e.g. with the original fluctuation action in Eqs. (\[Gauss-fl-action\]) and (\[Dij\]). In addition, throughout our calculation the NP is taken as a free Fermi gas. This could be improved by describing it as a Fermi liquid [@Baym:1991], which would lower the energy of the normal phase. We expect that this difference can modify the phase diagrams quantitatively, but not change them qualitatively.

Summary and Discussions
=======================

By studying the phase diagram of 2D imbalanced Fermi gases based on the thermodynamic potential on the MF level, we find the existence of the FF state at zero temperature. The possibility of the FF state at finite temperatures and its effect on the BKT mechanism are discussed by including phase fluctuations. We also obtained the superfluid density tensor for the anisotropic FF state, which always has a vanishing transverse component.
The effect of the phase fluctuations is demonstrated: they turn out to be very strong for the FF state and may destroy the FF-related phases at finite temperatures. Therefore, it would be quite hard to observe the FF state experimentally in nearly infinite continuum 2D Fermi gases, unless extremely low temperatures can be achieved. Since the strong phase fluctuations that destroy the quasi-long-range order and phase coherence also lead to a breakdown of the spin-wave approximation of the fluctuation action, an improved study of the FF state at finite temperatures should take the fluctuations into account more completely. We note, as an interesting line of research, that a dispersion relation for collective excitations including higher order terms $\propto q^4$ has been introduced for unitary Fermi gases by Salasnich [*et al.*]{} [@Salasnich:2008] and applied at finite (low) temperature in Ref. [@Salasnich:2010]. Besides, a recent experiment with niobium nitride films [@Yong:2013] has shown that the standard BKT mechanism, which only considers the phase fluctuations, might not be enough to accurately describe the 2D superconductor (superfluid) phase transition, and a comprehensive treatment also including the amplitude fluctuations might be necessary [@Erez:2013]. Another candidate for an inhomogeneous order parameter is the LO state, which does not have the problem of a vanishing transverse superfluid density [@Radzihovsky:2011]; however, it was also claimed to be unstable to a nematic phase at non-zero temperatures [@Radzihovsky:2009]. There has been one paper studying the BKT phase transition of the LO (stripe) state for an anisotropic 2D system composed of coupled 1D tubes [@Lin:2011], where several different BKT critical temperatures associated with different defects are discussed and found to depend linearly on the intertube coupling. Nevertheless, it is an open and important question whether a more general FFLO-type state can be stable against thermal fluctuations and how this might affect the BKT mechanism. In addition, other mechanisms such as optical lattices and trapping potentials can reduce the role of fluctuations because of broken symmetries [@Koponen:2007PRL; @Loh:2010]. Also, a mass imbalance [@He:2006; @Conduit:2008] or spin-orbit coupling effects [@Wu:2013; @Liu:2013] can enhance the Fermi-surface asymmetry and increase the stability of the FFLO state. These topics will be considered in our future work.

Acknowledgements {#acknowledgements .unnumbered}
================

This work was supported by the Academy of Finland through its Centers of Excellence Program (2012-2017) and under Projects No. 263347, No. 141039, No. 251748, No. 135000, and No. 272490. This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.

Hubbard-Stratonovich Transformation {#transformation}
===================================

For the sake of clarity, let us first introduce the notations used in the appendices as well as in the main text. Within the Euclidean space-time (dimension 1+d, with 1 for time and d for space), the coordinate vector is denoted as $x=(\tau,\mathbf{x})$, and the momentum as $k=(i(ik_n),\mathbf{k})$ with the Matsubara frequency $ik_n=(2n+1)\pi/\beta$ (fermionic) or $ik_n=2n\pi/\beta$ (bosonic), where $\beta=1/T$ is the inverse temperature. However, when there is no risk of confusion between vectors and numbers, sometimes $x$ (or $k$) can also be used for the norm of $\mathbf{x}$ (or $\mathbf{k}$).
The vector product in d-space is indicated as $\mathbf{k}\cdot\mathbf{x}$ while the product of space-time vectors is written as $kx$, e.g. $ikx=i\mathbf{k}\cdot\mathbf{x}-ik_n\tau$. The discrete momentum space and the continuous coordinate space are linked via the Fourier transformation and the Fourier series formulae $f(k)=\frac{1}{\mathcal{V}}\int f(x)e^{-ikx}dx$ and $f(x)=\sum_kf(k)e^{ikx}$, where $\sum_k$ includes the summation over the Matsubara frequencies as well as the momenta, and $\mathcal{V}=V\beta$ with $V$ as the total volume of the d-dimensional space. In the continuum limit the summation over the spacial momenta can be carried out as an integration. According to the standard Hubbard-Stratonovich transformation, a bosonic field operator $\hat\Delta(x)$ is introduced via the functional integral relation $1\propto\int\mathcal{D}\hat\Delta^*\mathcal{D}\hat\Delta e^{-\int dx[\hat\Delta^*(x)-g\hat\psi^\dagger_\uparrow(x)\hat\psi^\dagger_\downarrow(x)](1/g)[\hat\Delta(x)-g\hat\psi_\downarrow(x)\hat\psi_\uparrow(x)]}$ which is inserted to the microscopic partition function $Z=\int\mathcal{D}\hat\psi^\dagger_\uparrow\mathcal{D}\hat\psi_\uparrow\mathcal{D}\hat\psi^\dagger_\downarrow\mathcal{D}\hat\psi_\downarrow e^{-S}$. Here $$S=\int dx\left[\sum_\sigma\hat\psi^\dagger_\sigma(x)\partial_\tau\hat\psi_\sigma(x)+\hat{H}(x)\right]=\int dxdx'\left[-\sum_\sigma\hat\psi^\dagger_\sigma(x)G^{-1}_{0\sigma}(x,x')\hat\psi_\sigma(x')-g\hat\psi^\dagger_\uparrow(x)\hat\psi^\dagger_\downarrow(x')\hat\psi_\downarrow(x')\hat\psi_\uparrow(x)\delta(x-x')\right],$$ where $G^{-1}_{0\sigma}(x,x')=(-\partial_\tau-\hat\varepsilon+\mu_\sigma)\delta(x-x')$ is the inverse of a free fermion propagator for species $\sigma$. Then a new action $\tilde{S}$ in the resulting partition function $Z=\int\mathcal{D}\hat\psi^\dagger_\uparrow\mathcal{D}\hat\psi_\uparrow\mathcal{D}\hat\psi^\dagger_\downarrow\mathcal{D}\hat\psi_\downarrow\mathcal{D}\hat\Delta^*\mathcal{D}\hat\Delta e^{-\tilde{S}}$ can be written in a quadratic form, $$\tilde{S}=\int dxdx'\left\{\frac{|\hat\Delta(x)|^2}{g}\delta(x-x')-\sum_\sigma\hat\psi^\dagger_\sigma(x)G^{-1}_{0\sigma}(x,x')\hat\psi_\sigma(x')-\left[\hat\psi^\dagger_\uparrow(x)\hat\Delta(x)\hat\psi^\dagger_\downarrow(x')+\hat\psi_\downarrow(x')\hat\Delta^*(x)\hat\psi_\uparrow(x)\right]\delta(x-x')\right\}.$$ By using the Nambu-Gorkov basis $\hat\Psi^\dagger=(\hat\psi^\dagger_\uparrow, \hat\psi_\downarrow)$ and $\hat\Psi=(\hat\psi_\uparrow, \hat\psi^\dagger_\downarrow)^T$, the action can be expressed as $$\tilde{S}=\int dxdx'\left[\frac{|\hat\Delta(x)|^2}{g}\delta(x-x')-\hat\Psi^\dagger(x)\mathbf{G}^{-1}(x,x')\hat\Psi(x')\right],$$ where $$\mathbf{G}^{-1}(x,x')=\left(\begin{array}{cc} -\partial_\tau-\hat\varepsilon+\mu_\uparrow & \hat\Delta(x)\\ \hat\Delta^*(x) & -\partial_\tau+\hat\varepsilon-\mu_\downarrow \end{array}\right)\delta(x-x').$$ In the momentum space the action becomes $$\tilde{S}=\mathcal{V}\sum_q\frac{|\hat\Delta(q)|^2}{g}-\mathcal{V}\sum_{k,k'}\hat\Psi^\dagger(-k)\mathbf{G}^{-1}(k,k')\hat\Psi(k'),$$ with $$\begin{aligned} \mathbf{G}^{-1}&(k,k')=\nonumber\\ &\left(\begin{array}{cc} (ik'_n-\epsilon_\mathbf{k'}+\mu_\uparrow)\delta_{k,k'} & \hat\Delta(k-k')\\ \hat\Delta^*(-k+k') & (ik'_n+\epsilon_\mathbf{k'}-\mu_\downarrow)\delta_{k,k'} \end{array}\right)\nonumber\end{aligned}$$ as the Fourier transform of $\mathbf{G}^{-1}(x,x')$. 
Integrating out the Fermi fields, we get the effective bosonic action $$\begin{aligned} \label{eff} S_\mathrm{eff}&=\int dx\frac{|\hat\Delta(x)|^2}{g}-\mathrm{Tr}\ln[-\beta\mathbf{G}^{-1}(x,x')]\nonumber\\ &=\mathcal{V}\sum_{iq_n,\mathbf{q}}\frac{|\hat\Delta(q)|^2}{g}-\mathrm{Tr}\ln[-\beta\mathbf{G}^{-1}(k,k')],\end{aligned}$$ where $\mathrm{Tr}$ means the trace over the Nambu space as well as the (1+d) coordinate or momentum space. Since for a matrix operation $\mathrm{tr}\ln=\ln\mathrm{det}$ (here $\mathrm{tr}$ means only the trace in the Nambu space) and $\mathbf{G}^{-1}$ is a $2\times2$ matrix, the minus sign inside the logarithm makes no difference and will be dropped for simplicity. Now the original functional integral of Fermi fields has been transformed into an integral over the Bose field $\hat\Delta$. However, since the action is a complicated function of $\hat\Delta$, in general it cannot be carried out explicitly unless some approximation is made. A widely used one is the MF approximation, also referred to as the saddle-point method, which is a good approximation if the fields vary smoothly and no strong correlation is present. In MF approximation the integral over the field $\hat\Delta$ is replaced by using its expectation value $\langle\hat\Delta\rangle=\Delta_\mathrm{s}$. This parameter is also referred to as the order parameter, and it satisfies the saddle-point condition $\delta S_\mathrm{s}/\delta\Delta^*_\mathrm{s}=0$. For a constant $\Delta_\mathrm{s}$, $\Delta_\mathrm{s}(k-k')=\Delta_\mathrm{s}\delta_{k,k'}$ and $\mathbf{G}_\mathrm{s}^{-1}\equiv\mathbf{G}^{-1}(\Delta_\mathrm{s})$ is diagonal in momentum space. In this case the functional integral reduces to $Z\propto e^{-S_\mathrm{s}}$ with [@Stoof:2009; @Tempere:2007] $$\label{saddle} S_\mathrm{s}\equiv S_\mathrm{eff}(\Delta_\mathrm{s})=\frac{\mathcal{V}|\Delta_\mathrm{s}|^2}{g}-\sum_k\ln[\mathrm{det}\beta\mathbf{G}_\mathrm{s}^{-1}(k)].$$ In general $\Delta_\mathrm{s}$ might not be constant, and in momentum space $\mathbf{G}^{-1}$ might not be diagonal. This may cause some problems, especially when we need to invert $\mathbf{G}^{-1}$ into $\mathbf{G}$. However, for the FF ansatz $\Delta_\mathrm{s}(x)=\Delta_0e^{2i\mathbf{Q}\cdot\mathbf{x}}$ whose Fourier transform is $\Delta_\mathrm{s}(k)=\Delta_0\delta_{\mathbf{k},2\mathbf{Q}}$, the coordinate-dependent phase can be removed by shifting the momenta of $\hat\psi_\uparrow(\mathbf{k})$ and $\hat\psi_\downarrow(\mathbf{k})$ into $\mathbf{Q+k}$ and $\mathbf{Q-k}$, respectively, which automatically means that the total momentum is $2\mathbf{Q}$. This shift is a special case of the gauge transformation in Eq. (\[gaugetransform\]). The resulting $\tilde{\mathbf{G}}_\mathrm{s}^{-1}$ becomes diagonal as $$\begin{aligned} \tilde{\mathbf{G}}_\mathrm{s}^{-1}(k,k')&=\tilde{\mathbf{G}}_\mathrm{s}^{-1}(k)\delta_{k,k'}\nonumber\\ =&\left(\begin{array}{cc} ik_n-\epsilon_\mathbf{Q+k}+\mu_\uparrow & \Delta_0\\ \Delta_0 & ik_n+\epsilon_\mathbf{Q-k}-\mu_\downarrow \end{array}\right)\delta_{k,k'},\nonumber\end{aligned}$$ and $\Delta_\mathrm{s}$ reduces to $\Delta_0$. Fluctuation Action Obtained from the Saddle-Point Action {#action-fl} ======================================================== In Appendix \[transformation\] the derivation of the saddle-point action does not involve fluctuations. For an arbitrary form of the order parameter, the inverse Nambu propagator $\mathbf{G}_\mathrm{s}^{-1}$ is usually not diagonal, which hinders the derivation of the explicit expression of the action. 
The FF ansatz is a very special case for which the momentum shift makes $\tilde{\mathbf{G}}_\mathrm{s}^{-1}$ diagonal. In this case the derivation is almost the same as in the case of a constant $\Delta_\mathrm{s}$. One cannot expect a simple shift or transformation for an order parameter with random fluctuations. However, as will be seen below, the small-$q$ expansion used in Sec. \[Sec-phasefl\] to obtain the fluctuation action in a spin-wave form actually relaxes the momentum constraint, e.g. $\delta_{k-k',q}$ in Eq. (\[perturbationK\]), which is the source of the problematic off-diagonal terms. With the small-$q$ expansion it becomes possible to generalize the saddle-point calculation to include smooth fluctuations. First we note that there is a way to simplify the expression of the full inverse Nambu propagator $\tilde{\mathbf{G}}^{-1}$. Since in Eq. (\[perturbationK\]) the $\theta$-dependence appears only in the diagonal terms of $\tilde{\mathbf{K}}$, it can be absorbed into the chemical potentials in $\tilde{\mathbf{G}}^{-1}_\mathrm{s}$ [@Tempere:2009]. Then we can split $\tilde{\mathbf{G}}^{-1}$ into $\bar{\mathbf{G}}^{-1}_\mathrm{s}+\bar{\mathbf{K}}$, where $\bar{\mathbf{K}}(k,k')=\eta(k-k')\sigma_1$ and $\bar{\mathbf{G}}^{-1}_\mathrm{s}$ is simply $\tilde{\mathbf{G}}^{-1}_\mathrm{s}$ with $\mu_\sigma$ replaced by $\bar\mu_\sigma$, $$\begin{aligned} \bar\mu_\uparrow&=\mu_\uparrow-\frac{i\partial_\tau\theta}{2}+\frac{i(\nabla\theta\cdot\nabla_\mathbf{Q}+\frac{1}{2}\nabla_\mathbf{Q}\cdot\nabla\theta)}{2m}-\frac{(\nabla\theta)^2}{8m},\nonumber\\ \bar\mu_\downarrow&=\mu_\downarrow-\frac{i\partial_\tau\theta}{2}-\frac{i(\nabla\theta\cdot\nabla_\mathbf{-Q}+\frac{1}{2}\nabla_\mathbf{-Q}\cdot\nabla\theta)}{2m}-\frac{(\nabla\theta)^2}{8m}.\nonumber\end{aligned}$$ Their Fourier transforms are $$\begin{aligned} \bar\mu_\uparrow(k,k')&=\mu_\uparrow\delta_{k,k'}+\sum_q\left[-\frac{q_n\theta(q)}{2}-\frac{i\theta(q)}{4m}(\mathbf{k}^2-\mathbf{k'}^2+3\mathbf{q}\cdot\mathbf{Q})\right]\delta_{k-k',q}+\sum_{q,q'}\frac{\theta(q)\theta(q')\mathbf{q}\cdot\mathbf{q}'}{8m}\delta_{k-k',q+q'},\nonumber\\ \bar\mu_\downarrow(k,k')&=\mu_\downarrow\delta_{k,k'}-\sum_q\left[\frac{q_n\theta(q)}{2}-\frac{i\theta(q)}{4m}(\mathbf{k}^2-\mathbf{k'}^2-3\mathbf{q}\cdot\mathbf{Q})\right]\delta_{k-k',q}+\sum_{q,q'}\frac{\theta(q)\theta(q')\mathbf{q}\cdot\mathbf{q}'}{8m}\delta_{k-k',q+q'}.\end{aligned}$$ Note that it would seem as if by absorbing the $\theta$-dependent terms into the chemical potentials, we do not only simplify the perturbative matrix $\bar{\mathbf{K}}$ but also loosen the requirement that $\theta$ should be smooth in space-time. Furthermore, if we only consider the phase fluctuations, the perturbative part $\bar{\mathbf{K}}$ vanishes and the remaining part $\bar{\mathbf{G}}^{-1}_\mathrm{s}$ is in a saddle-point form. However, this simplification is only superficial since it moves the difficulties into $\bar{\mathbf{G}}^{-1}_\mathrm{s}$, because $\bar\mu_\sigma$ is no longer a c-number but an operator which involves off-diagonal terms in the momentum space. In small-$q$ expansion, we assume that the fluctuations of $\theta$ change much more smoothly and slowly than the Fermi fields, so that the functional integral over the Fermi fields can be carried out adiabatically. In this way the momentum (or position) of $\theta$ is no longer associated with the Fermi fields since the field $\theta$ can be taken as a constant, and the difficulty of the off-diagonal terms no longer exists. 
It then becomes possible to carry out independent Fourier transformations of the Fermi fields without involving $\theta(x)$ and we get $$\begin{aligned} \bar\mu_\uparrow&=\mu_\uparrow-\frac{i\partial_\tau\theta}{2}+\frac{i[i\nabla\theta\cdot(\mathbf{k+Q})+\frac{1}{2}\nabla_\mathbf{Q}\cdot\nabla\theta]}{2m}-\frac{(\nabla\theta)^2}{8m},\nonumber\\ \bar\mu_\downarrow&=\mu_\downarrow-\frac{i\partial_\tau\theta}{2}-\frac{i[i\nabla\theta\cdot(\mathbf{k-Q})+\frac{1}{2}\nabla_\mathbf{-Q}\cdot\nabla\theta]}{2m}-\frac{(\nabla\theta)^2}{8m},\nonumber\end{aligned}$$ which are diagonal in momentum space. From these we define $$\begin{aligned} \label{shiftmu} \bar\mu&=\mu-\frac{i\partial_\tau\theta}{2}-\frac{\nabla\theta\cdot\mathbf{Q}}{2m}+i\frac{\nabla_\mathbf{Q}\cdot\nabla\theta-\nabla_\mathbf{-Q}\cdot\nabla\theta}{8m}-\frac{(\nabla\theta)^2}{8m},\nonumber\\ \bar h&=h-\frac{\nabla\theta\cdot\mathbf{k}}{2m}+i\frac{\nabla_\mathbf{Q}\cdot\nabla\theta+\nabla_\mathbf{-Q}\cdot\nabla\theta}{8m}.\end{aligned}$$ Since these “barred" chemical potentials can be taken as c-numbers during the fermionic functional integral, using $\bar\mu_\sigma$ instead of $\mu_\sigma$ will not change the derivation for the saddle-point action in Appendix \[transformation\]. Now with the phase fluctuations only, the results in Eqs. (\[saddle\]) and (\[saddleaction\]) can be directly generalized by Eq. (\[shiftmu\]), that is, we get $\bar S_\mathrm{s}$ with $\{\bar\mu,\bar h\}$ replacing $\{\mu,h\}$ in $S_\mathrm{s}$. To complete this adiabatic approximation of the functional integral, we have to introduce an extra integral over the coordinate of $\theta$, divided by $\mathcal{V}$ to insure correct dimensions. It means that we use the space-time average of the fluctuations corresponding to the long-wavelength and low-frequency limit. Keeping the quadratic order of the derivatives of $\theta$ in the expansion of $\bar S_\mathrm{s}$, we get $\bar S_\mathrm{s}=S_\mathrm{s}+\bar S_\mathrm{fl}$ with $S_\mathrm{s}$ given in Eq. (\[saddle\]) (or Eq. (\[saddleaction\])) and $$\begin{aligned} \label{barSfl} \bar S_\mathrm{fl}=&\frac{\mathcal{V}}{2}\int\frac{dx}{\mathcal{V}}\left[\kappa\left(\frac{\partial\theta}{\partial\tau}\right)^2+\rho_{ij}\nabla_i\theta\nabla_j\theta+B_{++}(\nabla_\mathbf{Q}\cdot\nabla\theta)^2\right.\nonumber\\ &\quad\left.+B_{--}(\nabla_\mathbf{-Q}\cdot\nabla\theta)^2+B_{+-}(\nabla_\mathbf{Q}\cdot\nabla\theta)(\nabla_\mathbf{-Q}\cdot\nabla\theta)\vphantom{\frac{1}{2}}\right.\nonumber\\ &\left.+(\mathbf{A}_+\cdot\nabla\theta)\nabla_\mathbf{Q}\cdot\nabla\theta+(\mathbf{A}_-\cdot\nabla\theta)\nabla_\mathbf{-Q}\cdot\nabla\theta\vphantom{\left(\frac{}{}\right)^2}\right],\end{aligned}$$ where $$\begin{aligned} \kappa=&\frac{1}{V}\sum_\mathbf{k}\frac{\Delta_0^2X_\mathbf{k}+\beta E_\mathbf{Qk}\xi_\mathbf{Qk}^2Y_\mathbf{k}}{4E_\mathbf{Qk}^3},\nonumber\\ \rho_{ij}=&\frac{1}{V}\sum_\mathbf{k}\left[\frac{\delta_{ij}}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k_ik_j}{4m^2}\right.\nonumber\\&\left.\qquad\qquad-Z_\mathbf{k}(k_iQ_j+Q_ik_j)\vphantom{\frac{1}{2}}\right]-\frac{\kappa Q_iQ_j}{m^2}.\nonumber\end{aligned}$$ These generalize the results in Ref. [@Tempere:2009] by including the FF ansatz. $\mathbf{A}_\pm$ and $B_{s_1s_2}$ are complicated functions of $\mu$, $h$, $\beta$, $\Delta_0$ and $\mathbf{Q}$, and the summation over spacial indices $i$ and $j$ is assumed, the factor $\mathcal{V}/2$ is taken out for later convenience. 
Besides, some terms linear in $\partial_\tau\theta$, $\nabla\theta$, $\nabla_\mathbf{\pm Q}\cdot\nabla\theta$ and their mixed products $\partial_\tau\theta\nabla\theta$, $\partial_\tau\theta\nabla_\mathbf{\pm Q}\cdot\nabla\theta$ are omitted since their contributions vanish after the overall integral (for $\partial_\tau\theta$, note that $\theta$ is bosonic so $\int_0^\beta d\tau\partial_\tau\theta=0$ due to the periodic boundary condition). The presence of the FF vector makes the expression of $\bar S_\mathrm{fl}$ very complicated. If $\mathbf{Q}=0$, we find that the contribution from $\mathbf{A}_\pm$ vanishes after the overall integral and the one from $B_{s_1s_2}$ corresponds to higher order correction (quartic in momentum after Fourier transformation), so only the first two terms survive. But with $\mathbf{Q}\neq0$, there are lower order contributions from $\mathbf{A}_\pm$ and $B_{s_1s_2}$ which are relevant. These can be calculated by using the following Fourier transformations $$\begin{aligned} &\int\frac{dx}{\mathcal{V}}(\nabla\theta)\nabla_\mathbf{\pm Q}\cdot\nabla\theta=\sum_{q,p}\int\frac{dx}{\mathcal{V}}[\nabla\theta(q)e^{iqx}]\nabla_\mathbf{\pm Q}\cdot\nabla\theta(p)e^{ipx}=\sum_{q,p}i\mathbf{q}\theta(q)i(\mathbf{p\pm Q})\cdot i\mathbf{p}\theta(p)\delta_{q,-p}\approx\pm\sum_{q}i\mathbf{q}|\theta(q)|^2\mathbf{Q\cdot q},\nonumber\\ &\int\frac{dx}{\mathcal{V}}(\nabla_{s_1\mathbf{Q}}\cdot\nabla\theta)(\nabla_{s_2\mathbf{Q}}\cdot\nabla\theta)=\sum_{q,p}\int\frac{dx}{\mathcal{V}}[\nabla_{s_1\mathbf{Q}}\cdot\nabla\theta(q)e^{iqx}][\nabla_{s_2\mathbf{Q}}\cdot\nabla\theta(p)e^{ipx}]\nonumber\\ &\qquad\quad=\sum_{q,p}[i(\mathbf{q}+s_1\mathbf{Q})\cdot i\mathbf{q}\theta(q)][i(\mathbf{p}+s_2\mathbf{Q})\cdot i\mathbf{p}\theta(p)]\delta_{q,-p}=\sum_{q}[\mathbf{q}^4-s_1s_2(\mathbf{Q\cdot q})^2]|\theta(q)|^2\approx-s_1s_2\sum_{q}(\mathbf{Q\cdot q})^2|\theta(q)|^2,\nonumber\end{aligned}$$ where we used $\theta(-q)=\theta^*(q)$ for real $\theta(x)$ and kept terms up to the quadratic order of $q$. Together with the Fourier transform of the first two terms in $\bar S_\mathrm{fl}$, we finally get $$\begin{aligned} \bar S_\mathrm{fl}=&\frac{\mathcal{V}}{2}\sum_q[\kappa q_n^2+\rho_{ij}q_iq_j+i(\mathbf{A}_+-\mathbf{A}_-)\cdot\mathbf{q}(\mathbf{Q\cdot q})\nonumber\\&\qquad+(B_{+-}-B_{++}-B_{--})(\mathbf{Q\cdot q})^2]|\theta(q)|^2\nonumber\\=&\frac{\mathcal{V}}{2}\sum_q(\kappa q_n^2+\tilde\rho_{ij}q_iq_j)|\theta(q)|^2,\end{aligned}$$ where $\tilde\rho_{ij}\equiv\rho_{ij}+\mathbf{A}_iQ_j+\mathbf{A}_jQ_i+BQ_iQ_j$ with $$\begin{aligned} \mathbf{A}\equiv&\frac{i}{2}(\mathbf{A}_+-\mathbf{A}_-)=-\frac{\kappa}{2m^2}\mathbf{Q}-\sum_\mathbf{k}\frac{1}{2}Z_\mathbf{k}\mathbf{k},\nonumber\\ B\equiv&B_{+-}-B_{++}-B_{--}=-\frac{\kappa}{4m^2}.\nonumber\end{aligned}$$ In conclusion we find, $$\begin{aligned} \tilde\rho_{ij}=&\frac{1}{V}\sum_\mathbf{k}\left[\frac{\delta_{ij}}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k_ik_j}{4m^2}\right.\nonumber\\&\left.\qquad\qquad-\frac{3Z_\mathbf{k}}{2}(k_iQ_j+Q_ik_j)\vphantom{\frac{1}{2}}\right]-\frac{9\kappa Q_iQ_j}{4m^2},\nonumber\end{aligned}$$ which is in general not diagonal if $\mathbf{Q}\neq0$. However we can choose the direction of $\mathbf{Q}$ as, e.g. the z-axis, then $h_\mathbf{Qk}=h-\frac{Qk_z}{m}$, such that $X_\mathbf{k}$, $Y_\mathbf{k}$ and $Z_\mathbf{k}$ are even in all spatial momentum components $k_i$ except for $k_z$ (note that $\xi_\mathbf{Qk}$ and $E_\mathbf{Qk}$ are always even in $\mathbf{k}$). 
Therefore, $\sum_\mathbf{k}Y_\mathbf{k}k_ik_j=\sum_\mathbf{k}Y_\mathbf{k}k_i^2\delta_{ij}$, $\sum_\mathbf{k}Z_\mathbf{k}k_i=\sum_\mathbf{k}Z_\mathbf{k}k_z\delta_{iz}$, and $\tilde\rho_{ij}$ reduces to $$\begin{aligned} \tilde\rho_{ij}=&\frac{1}{V}\sum_\mathbf{k}\left[\frac{\delta_{ij}}{4m}\left(1-\frac{\xi_\mathbf{Qk}}{E_\mathbf{Qk}}X_\mathbf{k}\right)-\frac{\beta Y_\mathbf{k}k_i^2\delta_{ij}}{4m^2}\right.\nonumber\\&\left.\qquad\qquad\qquad-3Z_\mathbf{k}k_zQ\delta_{iz}\delta_{jz}\vphantom{\frac{1}{2}}\right]-\frac{9\kappa Q^2\delta_{iz}\delta_{jz}}{4m^2}.\nonumber\end{aligned}$$ This expression is diagonal, but with $\tilde\rho_{zz}$ different from other diagonal elements. The results of $\kappa$ and $\tilde\rho_{ij}$ obtained in this way are consistent with those obtained in Sec. \[Sec-phasefl\] by the direct small-$q$ expansion of $\mathbf{D}_{22}$ in Eq. (\[Dij\]). Similar derivation and results were presented in a recent paper for the 3D case [@Devreese:2013]. We emphasize that our derivation is generally applicable to other dimensions than two as well. [s40]{} I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. [**80**]{}, 885 (2008). P. Fulde and R. A. Ferrell, Phys. Rev. [**135**]{}, A550 (1964). A. I. Larkin and Yu. N. Ovchinnikov, Sov. Phys. JETP [**20**]{}, 762 (1965). L. Radzihovsky, Phys. Rev. A. [**84**]{}, 023611 (2011). H. A. Radovan, N. A. Fortune, T. P. Murphy, S. T. Hannahs, E. C. Palm, S. W. Tozer, and D. Hall, Nature [**425**]{}, 51 (2003). A. Bianchi, R. Movshovich, C. Capan, P. G. Pagliuso, and J. L. Sarrao, Phys. Rev. Lett. [**91**]{}, 187004 (2003). H. Won, K. Maki, S. Haas, N. Oeschler, F. Weickert, and P. Gegenwart, Phys. Rev. B [**69**]{}, 180504(R) (2004). T. Watanabe, Y. Kasahara, K. Izawa, T. Sakakibara, Y. Matsuda, C. J. van der Beek, T. Hanaguri, H. Shishido, R. Settai, and Y. Onuki, Phys. Rev. B [**70**]{}, 020506(R) (2004). C. Capan, A. Bianchi, R. Movshovich, A. D. Christianson, A. Malinowski, M. F. Hundley, A. Lacerda, P. G. Pagliuso, and J. L. Sarrao, Phys. Rev. B [**70**]{}, 134513 (2004). C. Martin, C. C. Agosta, S. W. Tozer, H. A. Radovan, E. C. Palm, T. P. Murphy, and J. L. Sarrao, Phys. Rev. B [**71**]{}, 020503(R) (2005). K. Kakuyanagi, M. Saitoh, K. Kumagai, S. Takashima, M. Nohara, H. Takagi, and Y. Matsuda, Phys. Rev. Lett. [**94**]{}, 047602 (2005). K. Kumagai, M. Saitoh, T. Oyaizu, Y. Furukawa, S. Takashima, M. Nohara, H. Takagi, and Y. Matsuda, Phys. Rev. Lett. [**97**]{}, 227002 (2006). V. F. Correa, T. P. Murphy, C. Martin, K. M. Purcell, E. C. Palm, G. M. Schmiedeshoff, J. C. Cooley, and S. W. Tozer, Phys. Rev. Lett. [**98**]{}, 087001 (2007). R. Lortz, Y. Wang, A. Demuer, P. H. M. B[ö]{}ttger, B. Bergk, G. Zwicknagl, Y. Nakazawa, and J. Wosnitza, Phys. Rev. Lett. [**99**]{}, 187002 (2007). W. A. Coniglio, L. E. Winter, K. Cho, C. C. Agosta, B. Fravel, and L. K. Montgomery, Phys. Rev. B [**83**]{}, 224507 (2011). M. W. Zwierlein, A. Schirotzek, C. H. Schunck, and W. Ketterle, Science [**311**]{}, 492 (2006). G. B. Partridge, W. Li, R. I. Kamar, Y.-a. Liao, and R. G. Hulet, Science [**311**]{}, 503 (2006). Y.-a. Liao, A. S. C. Rittner, T. Paprotta, W. Li, G. B. Partridge, R. G. Hulet, S. K. Baur, and E. J. Mueller, Nature [**467**]{}, 567 (2010). N. D. Mermin and H. Wagner, Phys. Rev. Lett. [**17**]{}, 1133 (1966); P. C. Hohenberg, Phys. Rev. [**158**]{}, 383 (1967). T. Esslinger and G. Blatter, Nature [**441**]{}, 1053 (2006). V. L. Berezinskii, Sov. Phys. JETP [**32**]{}, 493 (1971). J. M. Kosterlitz and D. J. Thouless, J. Phys. 
C [**5**]{}, L124 (1972); J. Phys. C [**6**]{}, 1181 (1973). V. Bagnato and D. Kleppner, Phys. Rev. A [**44**]{}, 7439 (1991). C. W. J. Beenakker, Rev. Mod. Phys. [**80**]{}, 1337 (2008). E. Dagotto, Rev. Mod. Phys. [**66**]{}, 763 (1994). J. Tempere, M. Wouters, and J. T. Devreese, Phys. Rev. B [**75**]{}, 184526 (2007). L. He and P. Zhuang, Phys. Rev. A [**78**]{}, 033613 (2008). J. Tempere, S. N. Klimin, and J. T. Devreese, Phys. Rev. A [**79**]{}, 053637 (2009). S. N. Klimin, J. Tempere, J. T. Devreese, and B. Van Schaeybroeck, Phys. Rev. A [**83**]{}, 063636 (2011). Jiajia Du, Junjun Liang, and J.-Q. Liang, Phys. Rev. A [**85**]{}, 033610 (2012). S. N. Klimin, J. Tempere, and J. T. Devreese, New J. Phys. [**14**]{}, 103044 (2012). H. Shimahara, J. Phys. Soc. Jpn. [**67**]{}, 1872 (1998). L. Radzihovsky and A. Vishwanath, Phys. Rev. Lett. [**103**]{}, 010404 (2009). D. E. Sheehy and L. Radzihovsky, Phys. Rev. Lett. [**96**]{}, 060401 (2006); Ann. Phys. [**322**]{}, 1790 (2007). M. M. Parish, F. M. Marchetti, A. Lamacraft, and B. D. Simons, Nature Phys. [**3**]{}, 124 (2007). J. Kinnunen, L. M. Jensen, and P. T[ö]{}rm[ä]{}, Phys. Rev. Lett. [**96**]{}, 110403 (2006). L. M. Jensen, J. Kinnunen, and P. T[ö]{}rm[ä]{}, Phys. Rev. A [**76**]{}, 033620 (2007). K. Machida, T. Mizushima, and M. Ichioka, Phys. Rev. Lett. [**97**]{}, 120407 (2006). D.-H. Kim, J. J. Kinnunen, J.-P. Martikainen, and P. T[ö]{}rm[ä]{}, Phys. Rev. Lett. [**106**]{}, 095301 (2011). T. K. Koponen, T. Paananen, J.-P. Martikainen, and P. T[ö]{}rm[ä]{}, Phys. Rev. Lett. [**99**]{}, 120403 (2007). T. K. Koponen, T. Paananen, J.-P. Martikainen, M. R. Bakhtiari, and P. T[ö]{}rm[ä]{}, New J. Phys. [**10**]{}, 045014 (2008). Y. L. Loh and N. Trivedi, Phys. Rev. Lett. [**104**]{}, 165302 (2010). K. Yang, Phys. Rev. B [**63**]{}, 140511(R) (2001). A. E. Feiguin and F. Heidrich-Meisner, Phys. Rev. B [**76**]{}, 220508 (2007). M. Tezuka and M. Ueda, Phys. Rev. Lett. [**100**]{}, 110403 (2008). G. G. Batrouni, M. H. Huntley, V. G. Rousseau, and R. T. Scalettar, Phys. Rev. Lett. [**100**]{}, 116405 (2008). M. Rizzi, M. Polini, M. A. Cazalilla, M. R. Bakhtiari, M. P. Tosi, and R. Fazio, Phys. Rev. B [**77**]{}, 245105 (2008). K. Machida and H. Nakanishi, Phys. Rev. B [**30**]{}, 122 (1984). T. Mizushima, K. Machida, and M. Ichioka, Phys. Rev. Lett. [**94**]{}, 060404 (2005). G. Orso, Phys. Rev. Lett. [**98**]{}, 070402 (2007). H. Hu, X.-J. Liu, and P. D. Drummond, Phys. Rev. Lett. [**98**]{}, 070403 (2007). E. Zhao and W. V. Liu, Phys. Rev. A [**78**]{}, 063605 (2008). M. R. Bakhtiari, M. J. Leskinen, and P. T[ö]{}rm[ä]{}, Phys. Rev. Lett. [**101**]{}, 120404 (2008). A. Korolyuk, F. Massel, and P. T[ö]{}rm[ä]{}, Phys. Rev. Lett. [**104**]{}, 236402 (2010). J. Kajala, F. Massel, and P. T[ö]{}rm[ä]{}, Phys. Rev. A [**84**]{}, 041601(R) (2011). A-H. Chen and G. Xianlong, Phys. Rev. B [**85**]{}, 134203 (2012). H. Lu, L. O. Baksmaty, C. J. Bolech, and H. Pu, Phys. Rev. Lett. [**108**]{}, 225302 (2012). Z. Hadzibabic, P. Kr[ü]{}ger, M. Cheneau, B. Battelier, and J. Dalibard, Nature [**441**]{}, 1118 (2006). K. Martiyanov, V. Makhalov, and A. Turlapov, Phys. Rev. Lett. [**105**]{}, 030404 (2010). M. Feld, B. Fr[ö]{}hlich, E. Vogt, M. Koschorreck, and M. K[ö]{}hl, Nature [**480**]{}, 75 (2011). M. Koschorreck, D. Pertot, E. Vogt, B. Fr[ö]{}hlich, M. Feld, and M. K[ö]{}hl, Nature [**485**]{}, 619 (2012). D.-H. Kim and P. T[ö]{}rm[ä]{}, Phys. Rev. B [**85**]{}, 180508(R) (2012)). K. Sun and C. J. Bolech, Phys. Rev. A [**87**]{}, 053622 (2013). 
M. O. J. Heikkinen, D.-H. Kim, and P. T[ö]{}rm[ä]{}, Phys. Rev. B [**87**]{}, 224513 (2013). J. P. A. Devreese, M. Wouters, and J. Tempere, Phys. Rev. A [**84**]{}, 043623 (2011). R. B. Diener, R. Sensarma, and M. Randeria, Phys. Rev. A [**77**]{}, 023626 (2008). S. S. Botelho and C. A. R. S[á]{} de Melo, Phys. Rev. Lett. [**96**]{}, 040404 (2006). D. R. Nelson and J. M. Kosterlitz, Phys. Rev. Lett. [**39**]{}, 1201 (1977). G. A. Williams and E. Varoquaux, J. Low Temp. Phys. [**113**]{}, 405 (1998). R. Combescot and C. Mora, Eur. Phys. J. B [**28**]{}, 397 (2002). K. B. Gubbels, J. E. Baarsma, and H. T. C. Stoof, Phys. Rev. Lett. [**103**]{}, 195301 (2009). G. J. Conduit, P. H. Conlon, and B. D. Simons, Phys. Rev. A [**77**]{}, 053617 (2008). A. Kujawa-Cichy and R. Micnas, Eur. Phys. Lett. [**95**]{}, 37003 (2011). D. J. Bishop and J. D. Reppy, Phys. Rev. Lett. [**40**]{}, 1727 (1978). J. Noh, J. Lee, and J. Mun, arXiv:1305.1423 (2013). G. Baym and C. Pethick, [*Landau Fermi-Liquid Theory*]{} (Wiley, New York, 1991). L. Salasnich and F. Toigo, Phys. Rev. A [**78**]{}, 053626 (2008). L. Salasnich, Phys. Rev. A [**82**]{}, 063619 (2010). J. Yong, T. R. Lemberger, L. Benfatto, K. Ilin, and M. Siegel, Phys. Rev. B [**87**]{}, 184505 (2013). A. Erez and Y. Meir, Phys. Rev. B [**88**]{}, 184510 (2013). C. Lin, X. Li, and W. V. Liu, Phys. Rev. B [**83**]{}, 092501 (2011). L. He, M. Jin, and P. Zhuang, Phys. Rev. B [**74**]{}, 024516 (2006). F. Wu, G.-C. Guo, W. Zhang, and W. Yi, Phys. Rev. Lett. [**110**]{}, 110401 (2013). X.-J. Liu and H. Hu, Phys. Rev. A [**87**]{}, 051608 (2013). H. T. C. Stoof, K. B. Gubbels, and D. B. M. Dickerscheid, [*Ultracold Quantum Fields*]{} (Springer, Dordrecht, 2009). J. P. A. Devreese and J. Tempere, arXiv:1310.3840 (2013).

[^1]: [email protected]
--- abstract: 'The meson-baryon coupled channel unitary approach with the local hidden gauge formalism is extended to the hidden beauty sector. A few narrow $N^*$ and $\Lambda^*$ resonances around 11 GeV are predicted as dynamically generated states from the interactions of heavy beauty mesons and baryons. Production cross sections of these predicted resonances in $pp$ and $ep$ collisions are estimated as a guide for the possible experimental search at relevant facilities.' author: - | Jia-Jun Wu$^{1}$, Lu Zhao$^{1}$ and B. S. Zou$^{1,2}$\ $^1$ Institute of High Energy Physics, CAS, P.O.Box 918(4), Beijing 100049, China\ $^2$ Theoretical Physics Center for Science Facilities, CAS, Beijing 100049, China date: 'June 26, 2011' title: 'Prediction of super-heavy $N^*$ and $\Lambda^*$ resonances with hidden beauty' ---

Introduction {#s1}
============

In conventional quark models, all established baryons are ascribed to simple 3-quark (qqq) configurations [@PDG]. The excited baryon states are described as excitations of individual constituent quarks, similar to atomic and nuclear excitations. However, unlike atomic and nuclear excitations, the typical hadronic excitation energies are comparable to the constituent quark masses. Hence, dragging a $q\bar q$ pair out of the gluon field could be a new excitation mechanism besides the conventional orbital excitation of the original constituent quarks. Some baryon resonances are proposed to be meson-baryon dynamically generated states [@Weise; @or; @Oset; @meiss; @Inoue; @lutz; @Hyodopk] or states with large ($qqqq\bar q$) components [@Riska; @Liubc; @Zou10]. A difficulty in pinning down the nature of these baryon resonances is that the predicted states from various models lie in the same energy region and there are always some adjustable ingredients in each model to fit the experimental data. A typical example is the $N^*(1535)$, which has large couplings to strangeness channels. In the 3-quark (qqq) configuration, it is described as an orbital angular momentum $L=1$ excitation of a quark. But phenomenological studies suggest that it may be a quasi-bound state of the $K\Sigma$ system [@siegel; @inoue; @Nievesar], or a hidden-strangeness 5-quark state [@Liubc; @Geng:2008cv]. In order to clearly demonstrate this new excitation mechanism with some of its corresponding states, the meson-baryon coupled channel unitary approach with the local hidden gauge formalism was applied to the hidden charm sector in Ref. [@charm], and several narrow $N^*$ and $\Lambda^*$ resonances with hidden charm were predicted to exist. If found experimentally, these resonances definitely could not be described as three constituent quark states. Here, we extend the study to the hidden beauty sector. Some super-heavy $N^*$ and $\Lambda^*$ resonances with hidden beauty are predicted to exist, with masses around 11 GeV and widths smaller than 10 MeV. If these resonances are experimentally confirmed, they would belong to the heaviest super-heavy island of $N^*$ and $\Lambda^*$ states. As a guide for the future experimental search for these newly predicted states, their production cross sections in $pp$ and $ep$ collisions are estimated. In the next section, we present the formalism and ingredients for the study of the interactions between heavy beauty mesons and baryons with the Valencia approach, and give a detailed discussion of the intermediate meson-baryon loop G functions.
In section \[s3\], our numerical results for the masses and widths of the predicted super-heavy $N^*$ and $\Lambda^*$ states are given, followed by a discussion. In section \[s4\], the effects of momentum-dependent terms in the effective potential are investigated. In section \[s5\], the calculation of the production of these predicted states in $pp$ and $ep$ collisions is presented. Finally, a short summary is given in the last section.

Formalism for Meson-Baryon Interaction {#s2}
======================================

We follow the recent work of Ref. [@charm] on the interactions between charmed mesons and baryons, and replace the charm quark by the beauty quark. The $PB\to PB$ and $VB\to VB$ interactions via the exchange of a vector meson are considered, as shown by the Feynman diagrams in Fig. \[fe\].

![Feynman diagrams for the pseudoscalar-baryon (a) or vector-baryon (b) interaction via the exchange of a vector meson ($P_{1}$, $P_{2}$ are $B^0$, $B^+$ or $B^{0}_{s}$, and $V_{1}$, $V_{2}$ are $B^{0*}$, $B^{+*}$ or $B^{0*}_{s}$, and $B_{1}$, $B_{2}$ are $\Sigma_{b}$, $\Lambda_{b}$, $\Xi_{b}$, $\Xi'_{b}$ or $\Omega_{b}$, and $V^{*}$ is $\rho$, $K^{*}$, $\phi$ or $\omega$).[]{data-label="fe"}](feynman.eps){width="0.7\columnwidth"}

The effective Lagrangians for the interactions involved are [@ramos]: $$\begin{aligned} {\cal L}_{VVV}&=&ig\langle V^\mu[V^{\nu},\partial_\mu V_{\nu}]\rangle\nonumber\\ {\cal L}_{PPV}&=&-ig\langle V^\mu[P,\partial_\mu P]\rangle\nonumber\\ {\cal L}_{BBV}&=&g (\langle\bar{B}\gamma_\mu [V^\mu,B]\rangle+\langle\bar{B}\gamma_\mu B\rangle\langle V^\mu\rangle)\ \label{eq:lag}\end{aligned}$$ where $P$ and $V$ stand for pseudoscalar and vector mesons of the 16-plet of SU(4), respectively. Using the same approach as Ref. [@charm], we keep only the $\gamma^{0}$ component of Eq. (\[eq:lag\]), since the three-momentum of the meson can be neglected compared with its mass in the low-energy approximation. Similarly, the $q^2/M^2_V$ term in the vector meson propagator is neglected, so that the propagator is approximately equal to $g^{\mu\nu}/M^{2}_{V}$. Note that when we consider transitions from heavy mesons to light ones later on, we perform the exact calculation without such an approximation. Then, with $g=M_V/2f$, the transition potentials corresponding to the diagrams of Fig. \[fe\] are given by $$\begin{aligned} V_{ab(P_{1}B_{1}\to P_{2}B_{2})}&=&\frac{C_{ab}}{4f^{2}}(E_{P_{1}}+E_{P_{2}})\label{vpbb},\\ V_{ab(V_{1}B_{1}\to V_{2}B_{2})}&=&\frac{C_{ab}}{4f^{2}}(E_{V_{1}}+E_{V_{2}})\vec{\epsilon}_1\cdot\vec{\epsilon}_{2},\label{vvbb}\end{aligned}$$ where $a,b$ label the channels $P_{1}(V_{1})B_{1}$ and $P_{2}(V_{2})B_{2}$, respectively, $E$ is the energy of the corresponding particle, and $\vec{\epsilon}$ is the polarization vector of the initial or final vector meson. The $\epsilon_{1,2}^{0}$ components are neglected, consistently with taking $\vec{p}/M_V\sim 0$, with $\vec{p}$ the momentum of the vector meson. Since we only change the charm quark to the beauty quark, the $C_{ab}$ coefficients are exactly the same as those in Ref. [@charm]; as there, only two cases, (I, S) = (1/2, 0) and (0, -1), have attractive potentials. We list the values of the $C_{ab}$ coefficients for $PB\to PB$ for these two cases in Table I and Table II, respectively.
                      $B \Sigma_{b}$   $B \Lambda_{b}$   $\eta_{b} N$    $\pi N$   $\eta N$        $\eta' N$   $K \Sigma$   $K \Lambda$
  ------------------- ---------------- ----------------- --------------- --------- --------------- ----------- ------------ -------------
  $B \Sigma_{b}$      $-1$             $0$               $-\sqrt{3/2}$   $-1/2$    $-1/\sqrt{2}$   $1/2$       $1$          $0$
  $B \Lambda_{b}$                      $1$               $\sqrt{3/2}$    $-3/2$    $1/\sqrt{2}$    $-1/2$      $0$          $1$

  : Coefficients $C_{ab}$ in Eq. (\[vpbb\]) for $(I,S)=(1/2, 0)$[]{data-label="zcof"}

                        $B_{s} \Lambda_{b}$   $B \Xi_{b}$   $B \Xi^{'}_{b}$         $\eta_{b}\Lambda$       $\pi \Sigma$           $\eta \Lambda$          $\eta' \Lambda$          $\bar{K}N$    $K \Xi$
  --------------------- --------------------- ------------- ----------------------- ----------------------- ---------------------- ----------------------- ------------------------ ------------- ----------------------
  $B_{s} \Lambda_{b}$   $0$                   $-\sqrt{2}$   $0$                     $1$                     $0$                    $\sqrt{\frac{1}{3}}$    $\sqrt{\frac{2}{3}}$     $-\sqrt{3}$   $0$
  $B \Xi_{b}$                                 $-1$          $0$                     $\sqrt{\frac{1}{2}}$    $-\frac{3}{2}$         $\sqrt{\frac{1}{6}}$    $-\sqrt{\frac{1}{12}}$   $0$           $\sqrt{\frac{3}{2}}$
  $B \Xi^{'}_{b}$                                           $-1$                    $-\sqrt{\frac{3}{2}}$   $\sqrt{\frac{3}{4}}$   $-\sqrt{\frac{1}{2}}$   $\frac{1}{2}$            $0$           $\sqrt{\frac{1}{2}}$
  $\eta_{b}\Lambda$                                                                 $0$                     $0$                    $0$                     $0$                      $0$           $0$

  : Coefficients $C_{ab}$ in Eq. (\[vpbb\]) for $(I,S)=(0,-1)$[]{data-label="mcof"}

With the transition potential, the coupled-channel scattering matrix can be obtained by solving the coupled-channel Bethe-Salpeter equation in the on-shell factorization approach of Refs. [@or; @meiss] $$\begin{aligned} T=[1-VG]^{-1}V \label{Bethe}\end{aligned}$$ with $G$ being the loop function of a pseudoscalar ($P$) or vector ($V$) meson and a baryon ($B$). The $\vec{\epsilon}_1\cdot\vec{\epsilon}_2$ factor of Eq. (\[vvbb\]) also factorizes out in $T$. There are usually two ways to regularize the $G$ loop function. The first one uses dimensional regularization by means of the formula $$\begin{aligned} G&\!=\!&i2M_{B}\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{(P\!-\!q)^{2} \!-\!M^{2}_{B}\!+\!i\varepsilon}\frac{1}{q^{2}\!-\!M^{2}_{P}\!+\!i\varepsilon},\nonumber\\ &=&\frac{2M_{B}}{16\pi^2}\big\{a_{\mu}+\textmd{ln}\frac{M^{2}_{B}}{\mu^{2}} +\frac{M^{2}_{P}-M^{2}_{B}+s}{2s}\textmd{ln}\frac{M^{2}_{P}}{M^{2}_{B}}\nonumber\\ &&+\frac{\bar{q}}{\sqrt{s}}\big[\textmd{ln}(s-(M^{2}_{B}-M^{2}_{P})+2\bar{q}\sqrt{s})+\textmd{ln}(s+(M^{2}_{B}-M^{2}_{P})+2\bar{q}\sqrt{s})\nonumber\\ &&-\textmd{ln}(-s-(M^{2}_{B}-M^{2}_{P})+2\bar{q}\sqrt{s})-\textmd{ln}(-s+(M^{2}_{B}-M^{2}_{P})+2\bar{q}\sqrt{s})\big]\big\}\ ,\label{Gf}\end{aligned}$$ where $q$ is the four-momentum of the meson, $P$ is the total four-momentum of the meson and the baryon, $s=P^2$, $\bar q$ denotes the three-momentum of the meson or baryon in the center-of-mass frame, and $\mu$ is a regularization scale, which we set to 1000 MeV here. Changes in the scale are reabsorbed in the subtraction constant $a_{\mu}$ to make the results scale independent. $a_{\mu}$ is of the order of $-2$, which is the natural value of the subtraction constant [@ollerulf]. When we look for poles on the second Riemann sheet, we should change $\bar q$ to $-\bar q$ in Eq. (\[Gf\]) when $\sqrt{s}$ is above the threshold [@luisaxial].
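As an illustration of how Eq. (\[Gf\]) is evaluated in practice, a short Python transcription is given below (a sketch only, not the code used for the results of this paper). Masses, $\sqrt{s}$ and $\mu$ are in GeV; below threshold $\bar q$ is purely imaginary and $G$ comes out real, while the `second_sheet` flag implements the $\bar q \to -\bar q$ continuation just mentioned. The example call uses the $B\Sigma_b$ masses quoted later in the text and the subtraction constant $a_\mu=-3.71$ adopted below.

``` python
import numpy as np

def qbar(rs, mP, mB):
    """On-shell three-momentum in the CM frame (complex below threshold)."""
    s = complex(rs) ** 2
    lam = (s - (mP + mB) ** 2) * (s - (mP - mB) ** 2)   # Kallen function
    return np.sqrt(lam) / (2.0 * rs)

def G_dimreg(rs, mP, mB, a_mu, mu=1.0, second_sheet=False):
    """Dimensional-regularization loop function of Eq. (Gf), GeV units."""
    s = complex(rs) ** 2
    q = -qbar(rs, mP, mB) if second_sheet else qbar(rs, mP, mB)
    d = mB**2 - mP**2
    logs = (np.log(s - d + 2*q*rs) + np.log(s + d + 2*q*rs)
            - np.log(-s - d + 2*q*rs) - np.log(-s + d + 2*q*rs))
    return (2.0*mB / (16.0*np.pi**2)) * (a_mu + np.log(mB**2 / mu**2)
            + (mP**2 - mB**2 + s) / (2.0*s) * np.log(mP**2 / mB**2)
            + (q / rs) * logs)

# Example: the B Sigma_b channel, 40 MeV below its threshold (~11.087 GeV);
# the imaginary part cancels there up to rounding errors.
print(G_dimreg(11.047, mP=5.279, mB=5.808, a_mu=-3.71))
```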
The second way to regularize the $G$ loop function is by putting a cutoff in the three-momentum: $$\begin{aligned}
G&=&i2M_{B}\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{(P-q)^{2}-M^{2}_{B}+i\varepsilon}\frac{1}{q^{2}-M^{2}_{P}+i\varepsilon}\nonumber\\
&=&\int^{\Lambda}_{0}\frac{\bar q^{2}d\bar q}{4\pi^{2}}\frac{2M_{B}(\omega_{P}+\omega_{B})}{\omega_{P}\,\omega_{B}\,(s-(\omega_{P}+\omega_{B})^{2}+i\epsilon)}\ ,\label{Gf2}\end{aligned}$$ where $\omega_{P}=\sqrt{\bar q^{2}+M^{2}_{P}}$, $\omega_{B}=\sqrt{\bar q^{2}+M^{2}_{B}}$, and $\Lambda$ is the cutoff parameter in the three-momentum of the loop function.

Here we discuss these two types of $G$ function in more detail. First, the free parameters are $a_{\mu}$ in Eq. (\[Gf\]) and $\Lambda$ in Eq. (\[Gf2\]). The value of $\Lambda$ is around 0.8 GeV, which is within the natural range for effective theories [@meiss]. The $a_\mu$ parameter is then determined by requiring that the two $G$ functions from Eq. (\[Gf\]) and Eq. (\[Gf2\]) take the same value at threshold. This choice also leads to a similar shape of the two $G$ functions near threshold, as shown in Fig. \[pgf\], which displays the real and imaginary parts of the two $G$ functions versus the energy difference between the center-of-mass energy and the corresponding threshold for the $K\Sigma$, $\bar D\Sigma_c$ and $B\Sigma_b$ channels. The parameters for the different $G$ functions and channels are listed in Table \[gfunt\]. While the imaginary parts of the two $G$ functions are exactly the same, there are some differences in their real parts, and these differences become larger for heavier channels. For the same $\Lambda$ value, the magnitude of $a_\mu$ depends on the threshold of the channel and becomes larger for heavier channels. One point worth mentioning is that for the $B\Sigma_b$ channel the real part of the $G$ function given by Eq. (\[Gf\]) is larger than zero for energies more than 50 MeV below the threshold, as shown in Fig. \[pgf\]. If the potential is repulsive, [*i.e.*]{}, the value of the potential $V$ is positive, there should be no bound state. However, when the real part of the $G$ function is also positive below the threshold, a pole can still be found in the model $T$ matrix even with a repulsive potential. Such poles far below threshold are beyond the region of validity of the model approximation and should be discarded. Since varying the $G$ function within a reasonable range does not influence our conclusions qualitatively, we present our numerical results in this paper in the dimensional regularization scheme with $a_\mu=-3.71$.

![The real part (left) and imaginary part (right) of the two G functions vs the energy difference between the C.M. energy and the threshold energy. The solid lines are for Eq.(\[Gf2\]), and dashed lines are for Eq.(\[Gf\]). The thickest lines are for the $B\Sigma_b$ channel, the thinnest ones are for the $K\Sigma$ channel, and the middle ones are for the $\bar{D}\Sigma_c$ channel. The parameters used are listed in Table \[gfunt\], with $\Lambda=0.8$ GeV.[]{data-label="pgf"}](reg.eps "fig:"){width="0.49\columnwidth"} ![The real part (left) and imaginary part (right) of the two G functions vs the energy difference between the C.M. energy and the threshold energy. The solid lines are for Eq.(\[Gf2\]), and dashed lines are for Eq.(\[Gf\]). The thickest lines are for the $B\Sigma_b$ channel, the thinnest ones are for the $K\Sigma$ channel, and the middle ones are for the $\bar{D}\Sigma_c$ channel. The parameters used are listed in Table \[gfunt\], with $\Lambda=0.8$ GeV.[]{data-label="pgf"}](img.eps "fig:"){width="0.49\columnwidth"}

  Channel             Threshold (GeV)   $\Lambda=0.7$ GeV   $\Lambda=0.8$ GeV   $\Lambda=0.9$ GeV   $\Lambda=1.0$ GeV   $\Lambda=1.1$ GeV
  ------------------- ----------------- ------------------- ------------------- ------------------- ------------------- -------------------
  $B \Sigma_{b}$      $11.087$          $-3.679$            $-3.715$            $-3.751$            $-3.786$            $-3.822$
  $\bar{D}\Sigma_c$   $4.231$           $-2.196$            $-2.283$            $-2.369$            $-2.453$            $-2.536$
  $K\Sigma$           $1.688$           $-1.297$            $-1.463$            $-1.619$            $-1.766$            $-1.905$

  : The parameters for the two types of $G$ function for the $K\Sigma$, $\bar D\Sigma_c$ and $B\Sigma_b$ interactions, with the subtraction constant $a_\mu$ for Eq. (\[Gf\]) and the cutoff $\Lambda$ for Eq. (\[Gf2\]). The listed $a_\mu$ and $\Lambda$ (GeV) give the same value of the two $G$ functions at the corresponding threshold.[]{data-label="gfunt"}

With the potential and $G$ function fixed, the unitary $T$ amplitude can be obtained from Eq. (\[Bethe\]). Poles of the $T$ matrix are searched for in the complex plane of $\sqrt{s}$. Those appearing in the first Riemann sheet below threshold are considered as bound states, whereas those located in the second Riemann sheet above the threshold of some channel are identified as resonances. As previously discussed, poles are kept only when the real part of Eq. (\[Gf\]) is negative. From the $T$ matrix for the $PB\to PB$ and $VB\to VB$ coupled-channel systems, we can find the pole positions $z_R$. Six poles are found on the real axis below the corresponding thresholds, and they are therefore bound states. In these cases the coupling constants are obtained from the amplitudes on the real axis. Close to the pole, these amplitudes behave as: $$\begin{aligned}
T_{ab}=\frac{g_{a}g_{b}}{\sqrt{s}-z_{R}} .\end{aligned}$$ We can use the residue of $T_{aa}$ to determine the value of $g_{a}$, up to a global phase. Then, the other couplings are derived from $$\begin{aligned}
g_{b}=\lim_{\sqrt{s}\rightarrow z_{R}}(\frac{g_{a}T_{ab}}{T_{aa}})\ .\label{coupling2}\end{aligned}$$

Numerical results for the super-heavy $N^*$ and $\Lambda^*$ {#s3}
============================================================

First, we discuss the (I, S) = (1/2, 0) sector. There are two channels, $B\Sigma_b$ and $B\Lambda_b$. The masses of these particles are taken from [@PDG]: $m_{B}=5.279$ GeV, $m_{B^*}=5.325$ GeV, $m_{\Sigma_b}=5.808$ GeV and $m_{\Lambda_b}=5.620$ GeV. With the approach outlined in the last section, the obtained pole positions $z_R$ and coupling constants $g_\alpha$ are listed in Table \[nos\] for $PB \to PB$ and $VB \to VB$. Because these poles correspond to bound states in each channel, they have zero width when transitions mediated by the t-channel exchange of heavy beauty mesons are neglected. To account for possible decay channels, such as $\pi N$, $\eta N$, $K\Sigma$, $\eta_b N$ and so on, we estimate these decays through heavy beauty meson exchanges by means of box diagrams, as in Refs. [@raquel; @geng; @charm]. We neglect transitions to the hidden charm channels such as $\bar{D}\Sigma_{c}$ and $\bar{D}\Lambda^+_{c}$, because they require the t-channel exchange of a very heavy vector meson composed of charm and beauty quarks. The results for the $PB$ and corresponding $VB$ channels are listed in Table \[noswidth\]. Comparing the results in Table \[nos\] and Table \[noswidth\], the influence of these additional coupled channels on the masses of the predicted states is negligible. This is because the transition potential from heavy beauty vector-meson exchange is much smaller than the potential from light vector-meson exchange.
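As a practical illustration of Eq. (\[coupling2\]), the coupling of the $B\Sigma_b$ bound state can be mimicked with the same single-channel ingredients as in the illustrative sketch of section \[s1\] (reusing `V_pb`, `G_dimreg` and the pole position found there): the residue of $T_{aa}$ is obtained from a finite-difference derivative of $1/T$. The normalization follows that sketch and its assumptions, so the number is indicative only.

```python
# Coupling from the residue of the single-channel amplitude T = V/(1 - V G):
# near the pole T ~ g^2/(sqrt(s) - z_R), so g^2 = 1 / [d(1/T)/d(sqrt(s))] at z_R.
def T_single(w):
    s = w**2
    return V_pb(s) / (1.0 - V_pb(s)*G_dimreg(s))

eps = 0.01                                    # MeV, finite-difference step
d_invT = (1.0/T_single(sqrt_s_pole + eps)
        - 1.0/T_single(sqrt_s_pole - eps)) / (2.0*eps)
g2 = 1.0/d_invT                               # residue of T at the pole
print("g_{B Sigma_b} ~ %.2f" % abs(g2)**0.5)
```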
  $z_R$ (MeV)                                            
  ------------- -------------------- ---------------------
                $B \Sigma_{b}$       $B \Lambda_{b}$      
  $11052$       $2.05$               $0$                  
                $B^{*} \Sigma_{b}$   $B^{*} \Lambda_{b}$  
  $11100$       $2.02$               $0$                  

  : Pole positions $z_R$ and coupling constants $g_a$ for the states in the (I, S) = (1/2, 0) sector.[]{data-label="nos"}

  $M$ (MeV)   $\Gamma$ (MeV)                                                                       
  ----------- ---------------- ---------- ------------ ---------------- -------------- -----------
                               $\pi N$    $\eta N$     $\eta' N$        $K \Sigma$     $\eta_bN$  
  $11052$     $1.38$           $0.10$     $0.21$       $0.11$           $0.42$         $0.52$     
                               $\rho N$   $\omega N$   $K^{*} \Sigma$   $\Upsilon N$              
  $11100$     $1.33$           $0.09$     $0.30$       $0.39$           $0.51$                    

  : Mass ($M$), total width ($\Gamma$), and partial decay widths ($\Gamma_i$) for the (I, S) = (1/2, 0) sector.[]{data-label="noswidth"}

We also do not consider the coupled-channel effect between the $VB$ and $PB$ channels, as in Ref. [@charm]. The reason is that the transition potentials $PB\to VB$ are much smaller than the potentials of $PB\to PB$ or $VB\to VB$. Taking $B\Sigma_b \to B^*\Sigma_b$ through t-channel pion exchange as an example, the $B^* \pi B$ coupling is proportional to $(p_{B}-p_{\pi})_\mu \varepsilon^\mu_{B^*}$ and vanishes in the static limit, which ignores the three-momenta of the mesons and assumes $\varepsilon^\mu_{B^*}=(0,\vec{\varepsilon}_{B^*})$. Going beyond the static limit gives a non-zero transition potential, but one that is still much smaller than its diagonal partners. This has been demonstrated by the production rates of $J/\psi$ and $\eta_c$ from $\bar pp$ collisions in Ref. [@charm]: the cross section for $\bar pp\to\bar pp J/\psi$ through the $\bar D\Sigma_c$ bound state is smaller than that for $\bar pp\to\bar pp \eta_c$ by more than an order of magnitude for similar excess energies. Therefore, the coupled-channel effect between the $VB$ and $PB$ channels is expected to have a negligible influence on our predicted states.

One problem associated with the beauty sector should be addressed here. As shown in Fig. \[pgf\], the loop functions of the hidden beauty sector, calculated with the cut-off or with dimensional regularization, show a quite different energy dependence and cannot be made similar over a reasonable range of values, as is the case for the hidden strange sector. This is because the on-shell momentum in the beauty channel shows a much stronger energy dependence than in the lightest channel. The results listed in Tables \[nos\], \[noswidth\] are obtained in the dimensional regularization scheme, where the subtraction constant is adjusted to the value of the $\Lambda=0.8$ GeV cut-off loop function at threshold. However, the binding energy is found to be about 35 MeV for the $B\Sigma_b$ channel, which lies quite far from its threshold, where the real parts of the two loop functions are very different. This makes the choice of matching point for the two loop functions questionable. In order to get some feeling for the choice of matching point, we also tried matching the two loop functions at 30 MeV below threshold. Then the regularization subtraction constant moves from $-3.715$ to $-3.774$, and the binding energy moves from 35 MeV to 59 MeV. If the $\Lambda=0.8$ GeV cut-off loop function is used directly, the binding energy increases to 145 MeV. So the simple Valencia model does not work as well for the beauty sector as for the hidden strange sector. The uncertainty in the concrete binding energies is quite large, of the order of tens to a hundred MeV. But the qualitative conclusion about the possible existence of bound states should be very solid.
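To make this sensitivity concrete, one can simply rerun the illustrative single-channel sketch of section \[s1\] while scanning the subtraction constant. The snippet below (again with the assumed $f=93$ MeV, and purely for orientation rather than a reproduction of the published numbers) records where $1-VG$ vanishes for the $a_\mu$ values mentioned above and listed in Table \[gfunt\].

```python
# Sensitivity of the B Sigma_b binding energy to the subtraction constant,
# reusing V_pb and G_dimreg from the earlier single-channel sketch.
from scipy.optimize import brentq

def binding_energy(a):
    thr = M_B + M_Sb
    zero = lambda w: (1.0 - V_pb(w**2)*G_dimreg(w**2, a=a)).real
    return thr - brentq(zero, thr - 300.0, thr - 0.5)

for a in (-3.715, -3.774, -3.822):
    print("a_mu = %.3f  ->  binding ~ %.0f MeV" % (a, binding_energy(a)))
```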
Then we discuss the (I, S) = (0, -1) sector. There are 3 channels, $B_s\Lambda_b$, $B\Xi_b$ and $B\Xi'_b$. The masses of $B$, $B_s$, $\Xi_b$ and $\Lambda_b$ have been precisely measured and can be taken from Ref.[@PDG]. $m_{B_s}=5.366$ GeV, $m_{B^*_s}=5.4128$ GeV and $m_{\Xi_b}=5.7924$ GeV. The $\Xi'_b$ has not been observed yet. Its mass has been predicted to be 5.922 GeV in Ref.[@wangzhigang] and 5.960 GeV in Ref.[@jenkins]. We choose a middle value 5.940 GeV in this paper. From Table \[mcof\], the $B\Xi'_b$ channel is decoupled from other two channels, so there should be a bound state for this channel, the same as corresponding vector-meson-baryon channel, $B^*\Xi'_b$. For this channel, the results are listed in Table \[mspoles\]. For the coupled $B_s\Lambda_b$ and $B\Xi_b$ channels, the T matrix can be written as: $$T=\frac{1}{1-V'G_{B\Xi_b}}\left( \begin{array}{cc} V^2_{B_s\Lambda_b \to B\Xi_b}G_{B\Xi_b} & V_{B_s\Lambda_b \to B\Xi_b} \\ V_{B_s\Lambda_b \to B\Xi_b} & V' \end{array} \right) \label{tmatrix}$$ with $ V'=V_{B\Xi_b \to B\Xi_b}+V^2_{B_s\Lambda_b \to B\Xi_b}G_{B_s\Lambda_b}$. The $V'$ is negative and hence provides an attractive potential. For $a_\mu=-3.71$, one pole is found for the coupled-channel system, with mass between the two thresholds of $B_s\Lambda_b$ (10.986 GeV) and $B\Xi_b$ (11.071 GeV). The pole position depends on the value of $a_\mu$ as demonstrated in Table \[mspoles\] and can move to below the $B_s\Lambda_b$ threshold when the magnitude of $a_\mu$ increases, such as for $a_\mu=-3.82$ corresponding to the $\Lambda=1.1$ GeV. $\Lambda$(GeV) $a_\mu$ ---------------- ------------------------------------- --------------- --------- $B_{s} \Lambda_{b}$ and $B \Xi_{b}$ $B\Xi'_{b}$ $0.7$ $-3.68$ $11030-0.60i$ $11198$ $0.8$ $-3.71$ $11021-0.59i$ $11191$ $0.9$ $-3.75$ $11004-0.49i$ $11178$ $1.0$ $-3.78$ $10990-0.24i$ $11167$ $1.1$ $-3.82$ $10970$ $11151$ : Pole positions $z_R$ with different $a_\mu$ for $PB \to PB$ in (I, S) = (0, -1) sector. \[mspoles\] The coupling constants and the possible decay channels of these two resonances are listed in Tables \[msg\] and \[mswidth\] for $a_\mu=-3.71$. Similarly, the results for the corresponding vector-meson-baryon channels are also listed in Tables \[msg\] and \[mswidth\] for $a_\mu=-3.71$. $z_R$ (MeV) --------------- ----------------------- --------------- ---------------- $B_{s} \Lambda_{b}$ $B \Xi_{b}$ $B \Xi'_{b}$ $11021-0.59i$ $0.14-0.11i$ $2.27+0.004i$ $0$ $11191$ $0$ $0$ $1.92$ $B^*_{s} \Lambda_{b}$ $B^* \Xi_{b}$ $B^* \Xi'_{b}$ $11069-0.59i$ $0.14-0.12i$ $2.24+0.005i$ $0$ $11238$ $0$ $0$ $1.89$ : Pole positions $z_R$ and coupling constants $g_a$ for the states in (I, S) = (0, -1) sector for $a_\mu=-3.71$. \[msg\] $M$ (MeV) $\Gamma$ (MeV) (MeV) ----------- ---------------- ------------- -------------- ----------------- ---------------- ---------- ------------------- ------------------ $\bar{K} N$ $\pi\Sigma$ $\eta\Lambda$ $\eta'\Lambda$ $K\Xi$ $\eta_b\Lambda$ $B_s\Lambda_b$ $11021$ $2.21$ $0.65$ $0.01$ $0.08 $ $0.14$ $0.01$ $0.19$ $1.18$ $11191$ $1.24$ $0 $ $0.28$ $0.18 $ $0.10$ $0.18 $ $0.48$ $0$ $\bar K^*N$ $\rho\Sigma$ $\omega\Lambda$ $\phi\Lambda$ $K^*\Xi$ $\Upsilon\Lambda$ $B^*_s\Lambda_b$ $11070$ $2.17$ $0.61$ $0.01$ $0.01$ $0.20$ $0.01$ $0.19$ $1.18$ $11239$ $1.19$ $0$ $0.26$ $0.26$ $0$ $0.17$ $0.48$ $0$ : Mass ($M$), total width ($\Gamma$), and partial decay widths ($\Gamma_i$) for the states in (I, S) = (0, -1) sector for $a_\mu=-3.71$. 
\[mswidth\]

In total, two $N^*$ and four $\Lambda^*$ states are predicted to exist, with masses above $11$ GeV and very narrow widths of only a few MeV. The very narrow widths are due to the fact that, because of the hidden $b\bar{b}$ component of these states, all decays require the exchange of a heavy beauty vector meson and are hence suppressed. If these predicted narrow $N^*$ and $\Lambda^*$ resonances with hidden beauty were found, they definitely could not be accommodated by quark models with three constituent quarks. Together with other possible $N^*$ and $\Lambda^*$ states of other quantum numbers with hidden beauty, they should form a super-heavy island of the heaviest excited nucleons $N^*$ and excited hyperons $\Lambda^*$.

Effects of momentum dependent terms in the potential {#s4}
====================================================

For our model calculations in the last two sections, the static limit is assumed for the t-channel exchange of light vector mesons, i.e., the momentum dependent terms are neglected, as discussed after Eq. (\[eq:lag\]). However, in Ref. [@jrv], dynamically generated open charmed baryons were studied by solving the Lippmann-Schwinger equation beyond the zero-range approximation, and the momentum dependent terms were found to have non-negligible effects on the results. In order to investigate the possible influence of the momentum dependent terms in the present case, in this section we use the conventional Schrodinger Equation approach to study possible bound states in the $B\Sigma_b$ channel, keeping the momentum dependent terms in the t-channel meson exchange potential. The derivation of the momentum dependent potential from the t-channel exchange of light vector mesons is straightforward. Keeping momentum dependent terms up to quadratic order, with the proper normalization factor for the Schrodinger Equation, and including the vertex form factors as in the Bonn potential model [@Bonn], the effective S-wave $B\Sigma_b$ potential is obtained as follows: $$\begin{aligned}
V^{S}_{ab(P_{1}B_{1}\to P_{2}B_{2})}&=&\frac{C_{ab}m^2_{V}}{4f^2}\frac{1}{\vec{q}^{2}+m_V^2} \left(\frac{\Lambda^2_V-m_V^2}{{\Lambda}^2_V+\vec{q}^2}\right)^2\times\nonumber\\
&&\left(1 + \frac{m_P^2 + 2m_B^2 + 4 m_P m_B}{4 m_P^2 m_B^2}\vec{k}^{2} + \frac{2 m_B^2 - m_P^2}{16 m_P^2 m_B^2}\vec{q}^{2}\right),\label{vsqq}\end{aligned}$$ where $\vec{k}$ and $\vec{q}$ are defined as $(\vec{p}+\vec{p'})/2$ and $\vec{p}-\vec{p'}$, with $\vec{p}$ and $\vec{p'}$ the initial and final momenta of the pseudoscalar meson, respectively, in the center-of-mass system of the $B\Sigma_b$ channel. For simplicity, we assume the same cut-off parameter $\Lambda_V$ for the $\rho$ and $\omega$ mesons.
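The bound-state search described below is easy to set up numerically. The sketch that follows (Python, illustrative only) keeps only the momentum-independent part of Eq. (\[vsqq\])—a regularized Yukawa interaction with a single effective vector-meson mass $m_V\simeq 780$ MeV, whose coordinate-space form is the first of the Fourier-transform formulae given next—assumes $f=93$ MeV and $C_{ab}=-1$, and diagonalizes the discretized radial Schrodinger Equation. It is meant to indicate the order of magnitude of the binding for a given cutoff, not to reproduce Table \[eigenvalues\], since the $\vec{k}^2$ and $\vec{q}^2$ corrections are dropped.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbarc = 197.327                       # MeV*fm (used only to quote rbar in fm)
M_B, M_Sb = 5279.0, 5808.0            # MeV
mu_red = M_B*M_Sb/(M_B + M_Sb)        # reduced mass (MeV)
mV, LamV = 780.0, 1600.0              # effective vector mass and cutoff (MeV)
f, C = 93.0, -1.0                     # f_pi assumed; C from Table [zcof]

def V_of_r(r):
    """Static part of Eq. (vsqq) in coordinate space (MeV); r in MeV^-1.
    This is the regularized Yukawa form obtained by Fourier transformation."""
    yuk = ((np.exp(-mV*r) - np.exp(-LamV*r))/r
           - (LamV**2 - mV**2)*np.exp(-LamV*r)/(2*LamV))
    return C*mV**2/(4*f**2) * yuk/(4*np.pi)

# Discretize -u''/(2 mu) + V u = E u on [0, 10 fm] with u(0) = u(Rmax) = 0.
Rmax, N = 10.0/hbarc, 4000
h = Rmax/(N + 1)
r = h*np.arange(1, N + 1)
diag = 1.0/(mu_red*h**2) + V_of_r(r)
off  = np.full(N - 1, -1.0/(2*mu_red*h**2))
E, U = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
u = U[:, 0]
rbar = np.sqrt(np.sum(r**2*u**2)/np.sum(u**2))   # Eq. (r), with u = r*Psi
print("E0 ~ %.1f MeV,  rbar ~ %.2f fm" % (E[0], rbar*hbarc))
```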
The effective potential for the Schrodinger Equation in the coordinate space, $V(\vec r)$, can be obtained by using the following Fourier-transformation formulae: $$\begin{aligned} \mathcal{F}\{ {(\frac{\Lambda^2-m^2}{{\Lambda}^2+\vec{q}^2})}^2\frac{1}{\vec{q}^2+m^2}\}&=& \frac{1}{4\pi} \left(\frac{e^{-mr}}{r}-\frac{e^{-\Lambda r}}{r}-(\Lambda^2-m^2)\frac{e^{-\Lambda r}}{2 \Lambda}\right),\nonumber\\ \mathcal{F}\{ {(\frac{\Lambda^2-m^2}{{\Lambda}^2+\vec{q}^2})}^2{\frac{\vec{q}^2}{\vec{q}^2+m^2}}\}&=& \frac{1}{4\pi} \left(m^2(-\frac{e^{-mr}}{r}+\frac{e^{-\Lambda r}}{r}) +(\Lambda^2-m^2)\frac{\Lambda e^{-\Lambda r}}{2} \right),\nonumber\\ \mathcal{F}\{{(\frac{\Lambda^2-m^2}{{\Lambda}^2+\vec{q}^2})}^2{\frac{\vec{k}^2}{\vec{q}^2+m^2}}\}&=& \frac{1}{4\pi}\left(\frac{m^2}{4}\frac{e^{-mr}}{r}-\frac{\Lambda ^2}{4}\frac{e^{-\Lambda r}}{r}-\frac{\Lambda^2-m^2}{4}(\frac{\Lambda r}{2}-1) \frac{e^{-\Lambda r}}{r} \right) \nonumber\\ &&-\frac{1}{8\pi}\{\nabla^2,\frac{e^{-mr}}{r}-\frac{e^{-\Lambda r}}{r}-\frac{\Lambda^2-m^2}{2} \frac{e^{-\Lambda r}}{\Lambda}\} .\nonumber\end{aligned}$$ Then we can solve the Schrodinger Equation $$(-\frac{\hbar^2}{2\mu}\nabla^{2}+V(\vec{r})-E)\Psi(\vec{r})=0 ,$$ to find possible bound state with eigenvalue E and corresponding wave function $\Psi(\vec r)$, and estimate the size of the system $\bar r$ with the formula $$\begin{aligned} \bar{r}&=&\sqrt{\int r^2drd\Omega \Psi^*(\vec{r})r^2\Psi(\vec{r})\label{r}}.\end{aligned}$$ It is found that whether there exists a bound state depends on the cut-off parameter $\Lambda_V$. The results corresponding to various $\Lambda_V$ values are listed in Table.\[eigenvalues\]. -- ------------------ ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -- $\Lambda_V$(MeV)  1100 1200 1300 1400 1500 1600 1700 1800 1900 2000 $E$(MeV)   - -0.85 -4.49 -10.5 -18.4 -27.9 -38.7 -50.5 -63.3 -78.9 $\bar{r}$(fm)   - 2.36 1.19 0.86 0.70 0.60 0.53 0.48 0.44 0.41 -- ------------------ ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -- : Eigenvalue $E$ and average size of system $\bar r$ vs the cut-off parameter $\Lambda_V$. \[eigenvalues\] From the Table \[eigenvalues\], we can see that when the cut-off parameter $\Lambda_V$ is $1200$ MeV or larger, the effective potential can provide enough attraction to form a bound state. For $\Lambda_V$ in the range of $1200\sim 1800$ MeV, the binding energy is in the range of $1\sim 50$ MeV with the average distance between two hadrons to be about $0.5\sim 2$ fm. The typical values for $\Lambda_V$ in the Bonn potential are $\Lambda_\rho=1400$ MeV and $\Lambda_\omega=1500$ MeV [@Bonn]. The binding energy corresponding to $\Lambda_V=1600$ MeV is quite close to that obtained by the Valencia approach in the last section. This gives some justification of the simple Valencia approach although there could be an uncertainty of $10\sim 20$ MeV for the binding energy. According to Ref.[@Ericson], “the apparent radius of the pion as seen by the photon is determined almost completely by the intermediate $\rho$ meson: the intrinsic pion size must be considerably smaller than the measured charge radius. In descriptions which explicitly include the $\rho$ meson, the pion can therefore be considered point-like for all practical purposes". 
In our approach, with the t-channel $\rho$ meson exchange explicitly included, the $B$ meson, like the pion, is expected to have a very small size, while the intrinsic radius of the $\Sigma_b$ baryon is expected to be around 0.5 fm, similar to that of the proton [@Ericson]. With a typical size $\bar r$ larger than 0.5 fm, our predicted hadron molecular state should not suffer much from the internal structure of its constituents. For the $B_s\Lambda_b$-$B\Xi_b$ coupled channel case, it is not so easy to use the Schrodinger Equation approach. Since the simple Valencia approach gives a result consistent with the Schrodinger Equation approach for the $B\Sigma_b$ single-channel case, we expect that it also gives reasonable results for the $B_s\Lambda_b$-$B\Xi_b$ coupled channel case.

Production of $N^*_{b\bar b}$ and $\Lambda^*_{b\bar b}$ in $pp$ and $ep$ collisions {#s5}
====================================================================================

In order to look for these predicted super-heavy $N^*_{b\bar b}$ and $\Lambda^*_{b\bar b}$ states, we give an estimate of their production cross sections in the $pp \to pp\eta_b$ and $ep \to ep\Upsilon$ reactions. The Feynman diagrams are shown in Fig. \[feexp\]. We also estimate the background of the $pp \to pp\eta_b$ reaction, with the $N^*_{b\bar{b}}$ replaced by the nucleon pole.

![Feynman diagrams for the reaction $pp \to pp\eta_b$ and $ep \to ep\Upsilon$.[]{data-label="feexp"}](feypp.eps "fig:"){width="0.35\columnwidth"} ![Feynman diagrams for the reaction $pp \to pp\eta_b$ and $ep \to ep\Upsilon$.[]{data-label="feexp"}](feyep.eps "fig:"){width="0.35\columnwidth"}

The Lagrangians for the interaction vertices of these two reactions are as follows [@xie; @zouf; @wujj]: $$\begin{aligned}
{\cal L}_{NN\pi}&=&g_{NN\pi}\bar{N}\gamma_5\vec{\tau}\cdot\vec{\psi}_{\pi}N+h.c.,\\
{\cal L}_{NN\eta_b}&=&g_{NN\eta_b}\bar{N}\gamma_5\psi_{\eta_b}N+h.c.,\\
{\cal L}_{N^{*}_{b\bar{b}}N\pi}&=&g_{N^{*+}_{b\bar{b}}N\pi}\overline{N^{*}_{b\bar{b}}}N\vec{\tau}\cdot\vec{\psi}_{\pi}+h.c.,\\
{\cal L}_{N^{*}_{b\bar{b}}N\eta_b}&=&g_{N^{*+}_{b\bar{b}}N\eta_b}\overline{N^{*}_{b\bar{b}}}N\psi_{\eta_b}+h.c.,\\
{\cal L}_{ee\gamma}&=&ie\bar{\psi}_{e}\gamma_{\mu}\psi_{e}A^{\mu}_{\gamma}+h.c.,\\
{\cal L}_{\rho\gamma}&=&\frac{em^{2}_{\rho}}{f_{\rho}}\rho^{\mu}A_{\gamma\mu}+h.c.,\\
{\cal L}_{N^{*}_{b\bar{b}}N\rho}&=&g_{N^{*}_{b\bar{b}}N\rho}\overline{N^{*}_{b\bar{b}}}\gamma_{5}\gamma^{\mu}N\tilde{g}_{\mu\nu}(P_{N^{*}_{b\bar{b}}})\vec{\tau}\cdot\vec{\psi}^{\nu}_{\rho}+h.c.,\\
{\cal L}_{N^{*}_{b\bar{b}}N\Upsilon}&=&g_{N^{*}_{b\bar{b}}N\Upsilon}\overline{N^{*}_{b\bar{b}}}\gamma_{5}\gamma^{\mu}N\tilde{g}_{\mu\nu}(P_{N^{*}_{b\bar{b}}})\psi^{\nu}_{\Upsilon}+h.c..\end{aligned}$$ with $\tilde{g}_{\mu\nu}(P)=-g_{\mu\nu}+\frac{P^{\mu}P^{\nu}}{P^{2}}$. In our model calculation, we only consider S-wave PB and VB interactions, so the spin-parity $J^{P}$ of our predicted $N^*_{b\bar{b}}$ for the PB channels is $1/2^{-}$; the $N^*_{b\bar{b}}$ for the VB channels can be either $1/2^{-}$ or $3/2^{-}$, but it is assumed to be $1/2^{-}$ here for a simple, rough estimate of the production rate. The coupling constants of the Lagrangians can be either calculated from the corresponding partial decay widths or taken from the literature. They are all listed in Table \[coupling\]. For the $NN\eta_b$ vertex, the width of $\eta_b$ has not been measured.
Since both $\eta_b$ and $\eta_c$ couple to nucleon through two gluon exchange, we use the relation $g_{NN\eta_b}\sim g_{NN\eta_c}\alpha_s^4(M_{\eta_b})/\alpha_s^4(M_{\eta_c})$ to estimate the $g_{NN\eta_b}$ with $g_{NN\eta_c}$ determined from the decay width of $\eta_c\to p\bar p$. Vertex $\Gamma(MeV)$ Coupling Constant($g^2/4\pi$) ------------------------------ --------------- ------------------------------- -- -- -- -- -- -- $pp\pi^0$ $14.4$ $N^{*+}_{b\bar{b}}p\pi^0$ $0.033$ $1.03\times 10^{-5}$ $N^{*+}_{b\bar{b}}p\eta_b$ $0.52$ $1.81\times 10^{-3}$ $ee\gamma$ $1/137$ $\gamma\rho$ $2.7$ [@xie] $N^{*+}_{b\bar{b}}p\rho^0$ $0.030$ $1.55\times 10^{-8}$ $N^{*+}_{b\bar{b}}p\Upsilon$ $0.51$ $4.72\times 10^{-4}$ $pp\eta_{b}$ $1\times 10^{-6}$ : The coupling constants of involved vertices and corresponding widths used. \[coupling\] As usual, the off-shell form factors should be considered here. We use two kinds of form factors for mesons and baryons, respectively. $$\begin{aligned} F_{M}&=&\frac{\Lambda_{M}^{2}-m^{2}_{M}}{\Lambda^{2}_{M}-p^{2}_{M}},\\ F_{N}&=&\frac{\Lambda_{N}^{4}}{\Lambda_{N}^{4}+(p^{2}_{N}-m^{2}_{N})^{2}},\end{aligned}$$ where $M$ stands for $\pi$ or $\rho$, and $N$ stands for $N^{*}_{b\bar{b}}$ or nucleon pole. Here $\Lambda_{M}=1.3$ GeV, $\Lambda_{N}=1.0$ GeV. To produce the predicted $N^*_{b\bar{b}}(11052)$ in the pp collisions, the center-of-mass energy should be above 12 GeV. In Fig.\[exp\], the left figure shows our theoretical estimated total cross section for the $pp \to pp \eta_b$ reaction through the $N^*_{b\bar{b}}$ production vs the center-of-mass energy, with (dashed curve) and without (solid curve) including the off-shell form factors. As an estimation of background contribution to the $N^*_{b\bar{b}}$ production, we also calculate the corresponding cross section through the off-shell nucleon pole without including the form factors. The result is shown by the dotted curve. The contribution from the nucleon pole is much smaller than that from the $N^*_{b\bar{b}}$ production, because the nucleon pole is much more off-shell than $N^*_{b\bar{b}}$. The contribution of the nucleon pole with form factors becomes very small for the same reason, so it is not shown in Fig.\[exp\]. This background reaction will not influence the observation of the $N^*_{b\bar{b}}$ production, especially for the energy range of $13\sim 25$ GeV. The cross section from $N^*_{b\bar{b}}$ production is about 0.1 nb, which is much smaller than that for the corresponding reaction $pp \to pp \eta_c$ with $N^*_{c\bar{c}}$ production [@charm] of about 0.1 $\mu b$. The main reason is that both couplings of $N^*_{b\bar{b}}N\pi$ and $N^*_{b\bar{b}}N\eta_b$ are much smaller than the corresponding $N^*_{c\bar{c}}N\pi$ and $N^*_{c\bar{c}}N\eta_c$ couplings. These two vertices cause a reduction of about 2 orders of magnitude. In addition, because the center-of-mass energy here is much larger than that in the previous calculation for the $\eta_c$ production, the propagator of exchanged $\pi^0$ further reduces the contribution. For the same reason, the contribution with form factors is much less than that without them. ![Total cross section vs invariant mass of system for $pp \to pp \eta_b$ reaction (left) and $e^-p \to e^-p \Upsilon$ reaction (right), with (dashed curves) and without (solid curves) including off-shell form factors, through production of the predicted $N^*_{b\bar b}$ resonances. 
The dotted curve is the background contribution from the nucleon pole for the $pp \to pp \eta_b$ reaction without including form factors.[]{data-label="exp"}](ppetab.eps "fig:"){width="0.49\columnwidth"} ![Total cross section vs invariant mass of system for $pp \to pp \eta_b$ reaction (left) and $e^-p \to e^-p \Upsilon$ reaction (right), with (dashed curves) and without (solid curves) including off-shell form factors, through production of the predicted $N^*_{b\bar b}$ resonances. The dotted curve is the background contribution from the nucleon pole for the $pp \to pp \eta_b$ reaction without including form factors.[]{data-label="exp"}](epup.eps "fig:"){width="0.49\columnwidth"} For the production of $N^*_{b\bar{b}}(11100)$ in $ep$ collisions, the invariant mass of the system should be above 11 GeV. The right figure in Fig.\[exp\] shows our calculated total cross section for the $e^-p \to e^-p \Upsilon$ reaction vs the invariant mass of the system with (dashed curve) and without (solid curves) including form factors. The cross section of this reaction is much larger than that for the $pp \to pp \eta_b$ reaction. The reason is due to the propagator of massless photon. The propagator of photon is given as the following: $$\begin{aligned} \frac{1}{p^{2}_{\gamma}}&=&\frac{1}{2(m^2_{e}+p_{i}p_{f}cos\theta-E_{i}E_{f})},\label{phog}\end{aligned}$$ where the $p_{i}$, $E_{i}$ are the three-momentum and energy of initial $e^-$, and $p_{f}$, $E_{f}$ for final $e^-$. $\theta$ is the angle between initial and final $e^-$. When the directions of initial and final $e^-$ are the same, [*i.e.*]{}, $cos\theta=1$, the value of Eq.(\[phog\]) becomes very large because of the very small mass of $e^-$. As the beam momentum of $e^-$ becomes larger, the propagator of photon can reach very big value. For the invariant mass of the system less than 15 GeV, the cross section of $e^-p \to e^-p \Upsilon$ reaction is of the same order of magnitude as that of $pp \to pp \eta_b$ reaction. Summary {#s6} ======= In summary, the meson-baryon coupled channel unitary approach with the local hidden gauge formalism is extended to the hidden beauty sector. Two $N^*_{b\bar b}$ states and four $\Lambda^*_{b\bar b}$ states are predicted to be dynamically generated from coupled PB and VB channels with the same approach as for the hidden charm sector [@charm]. Because of the hidden $b\bar{b}$ components involved in these states, the masses of these states are all above 11 GeV while their widths are of only a few MeV, which should form part of the heaviest island for the quite stable $N^*$ and $\Lambda^*$ baryons. The nature of these states is similar as corresponding $N^*_{c\bar c}$ and $\Lambda^*_{c\bar c}$ states predicted in Ref.[@charm], which definitely cannot be accommodated by the conventional 3q quark models. Production cross sections of the predicted $N^*_{b\bar{b}}$ resonances in $pp$ and $ep$ collisions are estimated as a guide for the possible experimental search at relevant facilities in the future. For the $pp \to pp \eta_b$ reaction, the best center-of-mass energy for observing the predicted $N^*_{b\bar{b}}$ is $13\sim 25$ GeV, where the production cross section is about 0.01 nb. For the $e^-p \to e^-p \Upsilon$ reaction, when the center-of-mass energy is larger than 14 GeV, the production cross section should be larger than 0.1 nb. Nowadays, the luminosity for pp or ep collisions can reach $10^{33}cm^{-2}s^{-1}$, this will produce more than 1000 events per day for the $N^*_{b\bar{b}}$ production. 
We expect future facilities, such as proposed electron-ion collider (EIC) [@EIC], to discover these very interesting super-heavy $N^*$ and $\Lambda^*$ with hidden beauty. Acknowledgments {#acknowledgments .unnumbered} =============== This work is supported by the National Natural Science Foundation of China (NSFC) under grants Nos. 10875133, 10821063, 11035006 and by the Chinese Academy of Sciences under project No. KJCX2-EW-N01, and by the Ministry of Science and Technology of China (2009CB825200). [99]{} Particle Data Group, K. Nakamura [*et al.*]{}, J. Phys. G [**37**]{}, 075021 (2010). N. Kaiser, P. B. Siegel and W. Weise, Phys. Lett.  B [**362**]{}, 23 (1995). E. Oset and A. Ramos, Nucl. Phys.  A [**635**]{}, 99 (1998). J. A. Oller, E. Oset and A. Ramos, Prog. Part. Nucl. Phys.  [**45**]{}, 157 (2000). J. A. Oller and U. G. Meissner, Phys. Lett.  B [**500**]{}, 263 (2001). T. Inoue, E. Oset and M. J. Vicente Vacas, Phys. Rev.  C [**65**]{}, 035204 (2002) C. Garcia-Recio, M. F. M. Lutz and J. Nieves, Phys. Lett.  B [**582**]{}, 49 (2004). T. Hyodo, S. I. Nam, D. Jido and A. Hosaka, Phys. Rev.  C [**68**]{}, 018201 (2003) C. Helminen and D. O. Riska, Nucl. Phys.  A [**699**]{}, 624 (2002). B. C. Liu, B. S. Zou, Phys. Rev. Lett. [**96**]{}, 042002 (2006); ibid, [**98**]{}, 039102 (2007). B. S. Zou, Nucl. Phys.  A [**835**]{}, 199 (2010). N. Kaiser, P. B. Siegel and W. Weise, Phys. Lett.  B [**362**]{}, 23 (1995) \[arXiv:nucl-th/9507036\]. T. Inoue, E. Oset and M. J. Vicente Vacas, Phys. Rev.  C [**65**]{}, 035204 (2002) \[arXiv:hep-ph/0110333\]. J. Nieves and E. Ruiz Arriola, Phys. Rev.  D [**64**]{}, 116008 (2001) L. S. Geng, E. Oset, B. S. Zou and M. Doring, Phys. Rev.  C [**79**]{}, 025203 (2009) \[arXiv:0807.2913 \[hep-ph\]\]. J. J. Wu, R. Molina, E. Oset and B. S. Zou, Phys. Rev. Lett.  [**105**]{} (2010) 232001, arXiv:1007.0573\[nucl-th\]; Phys. Rev.  C [**84**]{} (2011) 015202, arXiv:1011.2399 \[nucl-th\]. E. Oset and A. Ramos, Euro. Phys. J. A [**44**]{}, 445 (2010). J. A. Oller and U. G. Meissner, Phys. Lett.  B [**500**]{}, 263 (2001) . L. Roca, E. Oset and J. Singh, Phys. Rev.  D [**72**]{}, 014002 (2005) R. Molina, D. Nicmorus and E. Oset, Phys. Rev.  D [**78**]{}, 114018 (2008) L. S. Geng and E. Oset, Phys. Rev.  D [**79**]{}, 074009 (2009) Z. G. Wang, Phys. Lett.  B [**685**]{}, 59 (2010) \[arXiv:0912.1648 \[hep-ph\]\]. E. E. Jenkins, Phys. Rev.  D [**54**]{}, 4515 (1996) \[arXiv:hep-ph/9603449\]. C. E. Jimenez-Tejero, A. Ramos and I. Vidana, Phys. Rev. C [**80**]{}, 055206 (2009). R. Machleidt, K. Holinde and C. Elster, Phys. Rept.  [**149**]{}, 1 (1987). T. E. O. Ericson and W. Weise, “PIONS AND NUCLEI,” [*OXFORD, UK: CLARENDON (1988) 479 P. (THE INTERNATIONAL SERIES OF MONOGRAPHS ON PHYSICS, 74)*]{} J. J. Xie, C. Wilkin and B. S. Zou, Phys. Rev.  C [**77**]{}, 058202 (2008) \[arXiv:0802.2802 \[nucl-th\]\]. B. S. Zou and F. Hussain, Phys. Rev.  C [**67**]{}, 015204 (2003) \[arXiv:hep-ph/0210164\]. J. J. Wu, Z. Ouyang and B. S. Zou, Phys. Rev.  C [**80**]{}, 045211 (2009). V. Ptitsyn, AIP Conf. Proc.  [**1149**]{}, 735 (2009).
--- abstract: 'We extend the Aw-Rascle macroscopic model of car traffic into a two-way multi-lane model of pedestrian traffic. Within this model, we propose a technique for the handling of the congestion constraint, i.e. the fact that the pedestrian density cannot exceed a maximal density corresponding to contact between pedestrians. In a first step, we propose a singularly perturbed pressure relation which models the fact that the pedestrian velocity is considerably reduced, if not blocked, at congestion. In a second step, we carry over the singular limit into the model and show that abrupt transitions between compressible flow (in the uncongested regions) to incompressible flow (in congested regions) occur. We also investigate the hyperbolicity of the two-way models and show that they can lose their hyperbolicity in some cases. We study a diffusive correction of these models and discuss the characteristic time and length scales of the instability.' author: - 'Cécile Appert-Rolland, Pierre Degond and Sébastien Motsch' title: 'Two-way multi-lane traffic model for pedestrians in corridors' --- <span style="font-variant:small-caps;">Cécile Appert-Rolland</span> 2-CNRS; LPT; UMR 8627 Batiment 210, F-91405 ORSAY Cedex, France. email: [email protected] <span style="font-variant:small-caps;">Pierre Degond</span> 4-CNRS; Institut de Mathématiques de Toulouse UMR 5219 F-31062 Toulouse, France. email: [email protected] <span style="font-variant:small-caps;">Sébastien Motsch</span> email: [email protected] [**AMS subject classification:**]{} 90B20, 35L60, 35L65, 35L67, 35R99, 76L05 [**Key words:**]{} Pedestrian traffic, two-way traffic, multi-lane traffic, macroscopic model, Aw-Rascle model, Congestion constraint. [**Acknowledgements:**]{} This work has been supported by the french ’Agence Nationale pour la Recherche (ANR)’ in the frame of the contract ’Pedigree’ (contract number ANR-08-SYSC-015-01). The work of S. Motsch is partially supported by NSF grants DMS07-07949, DMS10-08397 and FRG07-57227. Introduction ============ Crowd modeling and simulation is a challenging problem which has a broad range of applications from public safety to entertainment industries through architectural and urban design, transportation management, etc. Common and crucial needs for these applications are the evaluation and improvement (both quantitatively and qualitatively) of existing models, the derivation of new experimentally-based models and the construction of hierarchical links between these models at the various scales. The goal of this paper is to propose a phenomenological macroscopic model for pedestrian movement in a corridor. A macroscopic model describes the state of the crowd through locally averaged quantities such as the pedestrian number density, mean velocity, etc. Macroscopic models are opposed to Individual-Based Models (IBM’s) which follow the location and state of each agent over time. Macroscopic models provide a description of the system at scales which are large compared to the individuals scale. Although they do not provide the details of the individuals scale, they are computationally more efficient. In particular, their computational cost does not depend on the number of agents, but only on the refinement level of the spatio-temporal discretization. In addition, by comparisons with the experimental data, they give access to large-scale information about the system. 
This information can provide a preliminary gross analysis of the data, which in turn can be used for building up more refined IBM’s. This procedure requires that the link between the microscopic IBM and the macroscopic model has been previously established. Therefore, macroscopic models which can be rigorously derived from IBM’s are crucial. The present work focuses on a one-dimensional model of pedestrian traffic in corridors. This setting has several advantages: 1. It makes the problem essentially one-dimensional and is a preliminary step for the development of more complex multi-dimensional problems. The present work will consider that pedestrian traffic occurs on discrete lanes. This approximation can be viewed as a kind of discretization of the actual two-dimensional dynamics. It prepares the terrain for the development and investigation of truly two-dimensional models. 2. We can build up on previous experience in the field of traffic flow models. Our approach relies on the Aw-Rascle model of traffic flow [@AR], which has been proven an excellent model for traffic flow engineering [@Zhang]. In the present work, this approach will be extended to two-way multi-lane traffic flow of pedestrians. 3. It is easier to collect well-controlled experimental data in corridors than in open space (see for instance [@Ped1]). 4. The relation of the macroscopic model to a corresponding microscopic IBM is more easily established in the one-dimensional setting. In [@AKMR], it has been proven that the Aw-Rascle model can be derived from a microscopic Follow-the-Leader model of car traffic. The proof uses a Lagrangian formulation of the Aw-Rascle model. The correspondence between the Lagrangian formulation and the IBM cannot be carried over to the two dimensional case, because of the very special structure of the Lagrangian model in one-dimension. The most widely used models of pedestrian traffic are IBM’s. Several families of models have been developed. Rule based models [@Reynolds99] have been used in particular for the development of games and virtual reality, with several possible levels of description. But their aim is more to have a realistic appearance rather than really reproducing a realistic behavior. More robust models are needed for example to test and improve the geometry of various types of buildings. Physicists have proposed some models inspired from the fluid simulation methods. In the so-called ’social force’ model [@Helbing_1991; @Helbing_Molnar_1995; @Helbing_Molnar_1997], the equations of motion for each pedestrian have the form of Newton’s law where the force is the sum of several terms each representing the ’social force’ under consideration. It obviously relies on the analogy existing between the displacement of pedestrians and the motion of particles in a gas. It describes quite well dense crowds, but not the individual trajectories of a few interacting pedestrians. Other approaches have been developed in the framework of cellular automata [@Burstedde_2001; @Guo_2008; @Nishinari_2004]. In these models, the non-local interactions between pedestrians are made local through the mediation of a virtual floor field. These models also are meant to describe the motion of crowds, not of individuals. Besides, a systematic study of the isotropy of cellular automata models is still lacking. More recently, some geometrical models have been developed. Pedestrians try to predict each others’ trajectories, and to avoid collisions [@Guy09; @Paris_p_d07; @Vandenberg_o08]. 
The knowledge of other pedestrians’ trajectories depends on the perception that the pedestrian under consideration has, which may vary with time. [@Pettre09] takes into account the fact that this knowledge is acquired progressively. Another type of perception based on the visual field is proposed in [@Ondrej10]. These models describe well the individual trajectories of a few interacting pedestrians, but it is not obvious yet whether they can handle crowds. By contrast to microscopic IBM’s, macroscopic crowd models are based on the analogy of crowd flow with fluid dynamics. A first approach has been proposed in [@Henderson_1974]. In [@Helbing_1992], a fluid model is derived from a gas-kinetic model through a moment approach and phenomenological closures. Recently, a similar approach has been proposed in [@Al-nasur_2006]. In [@Hoogendoorn_2003; @Hughes_2002; @Hughes_2003], a continuum model is derived through optimal control theory and differential games. It leads to a continuity equation coupled with a potential field which describes the velocity of the pedestrians. Other phenomenological models based on the analogy with the Lighthill-Whitham-Richards model of car traffic have been proposed by [@Bellomo_d08; @Chalons_2007; @Colombo_2005]. In [@Piccoli_2009; @Piccoli_2010], instead of considering a continuous time evolution described by PDE’s, the evolution of measures is performed on a discrete time scale. In the present paper, we shall consider a continuous time description. Macroscopic models provide a description of the system at large spatial scales. They can be heuristically justified for a long corridor stretch like a subway corridor, when the spatial inhomogeneities are weak (such as low variations of the density or velocity in the direction of the corridor). Of course, they cannot be used when the spatial inhomogeneities are at the same scale as for instance the mean-interpedestrian distance in the longitudinal direction. In the case of narrow corridors, this mean-interpedestrian distance is larger because there are less pedestrians in a cross-section, and the condition of weak spatial inhomogeneities is more stringent. From a rigorous standpoint, the derivation of macroscopic models from Individual-Based models requires that the number of agents be large, which is obviously questionable in most situations in pedestrians and highway traffic. Still there is a large literature devoted to macroscopic models which seem to provide adequate models for large scale dynamics. We will be specifically interested in two-way multi-lane traffic flow models with a particular emphasis on the handling of congestions. These points have been previously addressed in [@Weng_2007] for pedestrian counter-flows, [@Shvetsov_1999] for multi-lane traffic and [@Maury_Roudneff_2010; @Maury_Venel_2008] for the treatment of congestions. However, to the best of our knowledge, none of these different features have been included in the same model at the same time. The most difficult point is the treatment of congestions. In the recent approach [@Maury_Roudneff_2010; @Maury_Venel_2008] the congestion constraint (i.e. the limitation of the density by a maximal density corresponding to contact between pedestrians) is enforced by means of convex optimization tools (for IBM’s) or techniques borrowed from optimal transportation such as Wasserstein metrics (for continuum models). 
However, these abstract methods do not leave much space for parameter fitting to data and cannot distinguish between the behavior of pedestrians and, say, sheep. Our technique relies on the explicit derivation of the dynamics of congestions, in the spirit of earlier work on traffic [@BDDR; @BDLMRR; @Deg_Del]. This procedure was initiated in the seminal work [@Bou_Bre_Cor_Rip]. The outline of the paper is as follows. We first present the modeling approach for a one-way one-lane Aw-Rascle model (1W-AR) of pedestrian flow in corridors in section \[sec\_1lane\_1way\]. We then successively extend this model into a two-way one-lane Aw-Rascle model (2W-AR) in section \[1lane\] and into a two-way multi-lane Aw-Rascle model (ML-AR) in section \[mlane\]. In each section, we present the corresponding Aw-Rascle model together with a simplified version of it, which assumes that the pedestrians' desired velocity is constant and uniform. We refer to these simplified models as “Constant desired velocity Aw-Rascle” (CAR) models. We therefore successively have the 1W-CAR, 2W-CAR and ML-CAR models as the Constant desired velocity versions of the 1W-AR, 2W-AR and ML-AR models, respectively. The 1W-CAR model can be recast in the form of the celebrated Lighthill-Whitham-Richards (LWR) model of traffic. Finally, for each of these models, we propose a specific treatment of congestion regions. This treatment consists in introducing a singular pressure in the AR model, which tends to infinity as the density approaches the congestion density (i.e. the density at which the agents are in contact with each other). This singularly perturbed pressure relation provides a significant reduction of the flow when the density reaches this maximal density. A small parameter $\varepsilon$ controls the thickness of the transition region. In the limit $\varepsilon \to 0$, two phases appear: an uncongested phase where the flow is compressible and a congested phase where the flow is incompressible. The transition between these two phases is abrupt, by contrast to the case where $\varepsilon$ stays finite, where the transition is smooth. The location of the transition interface is not given a priori and is part of the unknowns of the limit problem. Table \[table\_1\] below provides a summary of the various proposed models and their relations.
------------ ------------------------------- -------------------------------------------------- ------------------------------------------------------------- Basic model Congestion model Congestion model with smooth transition: with abrupt transition: finite $\varepsilon$ $\varepsilon \to 0$ 1W-AR Section Section Section 1-way \[subsec\_AR\_model\] \[subsubsec\_AR\_density\_constraint\_smooth\] \[subsubsec\_AR\_density\_constraint\_incompressibility\] 1-lane 1W-CAR Section Section Section 1-way \[subsec\_toy\] \[subsubsec\_TAR\_density\_constraint\] \[subsubsec\_TAR\_density\_constraint\] 1-lane item (i) item (ii) 2W-AR Section Section Section 2-way \[subsec\_2AR\_model\] \[subsubsec\_2WAR\_density\_constraint\_smooth\] \[subsubsec\_2WAR\_density\_constraint\_incompressibility\] 1-lane 2W-CAR Section Section Section 2-way \[subsec\_2TAR\_model\] \[subsubsec\_T2AR\_density\_constraint\] \[subsubsec\_T2AR\_density\_constraint\] 1-lane item (i) item (ii) ML-AR Section Section Section 2-way \[subsec\_mlane\_principles\] \[subsubsec\_kWAR\_density\_constraint\_smooth\] \[subsubsec\_kWAR\_density\_constraint\_incompressibility\] multi-lane ML-CAR Section Omitted Omitted 2-way \[subsec\_mlane\_toy\] multi-lane ------------ ------------------------------- -------------------------------------------------- ------------------------------------------------------------- : Table of the various models, with some of their characteristics and the sections in which they are introduced. The meaning of the acronyms is as follows: AR=’Aw-Rascle model’, CAR=’Aw-Rascle model with Constant Desired Velocity’. The left column (basic model) refers the general formulation of the model and the middle and right columns, the modified models taking into account the congestion phenomena. The middle column corresponds to a smooth transition from uncongested state to congestion while the right column corresponds to an abrupt phase transition. []{data-label="table_1"} \ One interesting characteristics of two-way models as compared to one-way models is that they may lose their hyperbolicity in situations close to the congestion regime. Although, this loss of hyperbolicity can be seen as detrimental to the model, the resulting instability may explain the appearance of crowd turbulence at high densities. We note that a loss of (strict) hyperbolicity has already been found in a multi-velocity one-way model [@benzoni]. In order to gain insight into this instability, in section \[sec\_math\], we analyze the diffusive perturbation of the two-way Aw-Rascle model with constant desired velocity, and exhibit the typical time scale and growth rate of the so-generated structures. These observables can be used to assess the model and calibrate it against empirical data. In order to illustrate these considerations, we show numerical simulations that confirm the appearance of these large-scale structures which consist of two counter-diffusing crowds. In these simulations, which are presented for illustrative purposes only, in order to explore what kind of structures the lack of hyperbolicity of the model leads to, we assume a smooth pressure-density relation. Thanks to this assumption, we omit to treat the congestion constraint, which is a difficult stiff problem, for which special methods have to be designed (see e.g. [@DT; @DHN] for the case of Euler system of gas dynamics and [@BDDR; @Deg_Del] for the AR model). 
One-way one-lane traffic model {#sec_1lane_1way}
==============================

An Aw-Rascle model for one-lane one-way pedestrian traffic {#subsec_AR_model}
----------------------------------------------------------

In this section, we construct a one-lane one-way continuum model of pedestrian traffic in corridors. In this model, we pay particular attention to the occurrence of congestions. We encode the congestion effect into a constraint of maximal total density. This work is inspired by similar approaches for vehicular traffic, which have been developed in [@BDDR; @BDLMRR; @Deg_Del]. For that purpose, the building block is a one-lane, one-way Aw-Rascle (1W-AR) model which has been proposed for vehicular traffic flow [@AR]. This model belongs to the class of second-order models in the sense that both the density and the velocity are dynamical variables, subject to time-differential equations. By contrast, first-order models use the density as the only dynamical variable and prescribe the density flux as a local function of the density. The Aw-Rascle model with constant desired velocity considered in section \[subsec\_toy\] is an example of a first-order model.

[**(1W-AR model)**]{} Let $\rho(x,t) \in {\mathbb R}$ be the density of pedestrians on the lane, $u(x,t)\in {\mathbb R}_+$ their velocity, $w(x,t) \in {\mathbb R}_+$ the desired velocity of the pedestrians in the absence of obstacles and $p(\rho)$ the velocity offset between the desired and actual velocities of the pedestrians. The 1W-AR model is written: $$\begin{aligned}
& & \hspace{-1cm} \partial_t \rho + \partial_x (\rho u) = 0 , \label{AR_n} \\
& & \hspace{-1cm} \partial_t (\rho w) + \partial_x (\rho w u) = 0 , \label{AR_u} \\
& & \hspace{-1cm} w= u + p(\rho). \label{AR_w} \end{aligned}$$ In this model, the offset $p(\rho)$ is an increasing function of the pedestrian density. By analogy with fluid mechanics, this offset will often be referred to as the pressure, but its physical dimension is that of a velocity. Using the mass conservation equation, we can see that the desired velocity is a Lagrangian quantity (i.e. it is preserved by the flow), in the sense that: $$\begin{aligned}
& & \hspace{-1cm} \partial_t w + u \partial_x w = 0 . \label{AR_w_lag} \end{aligned}$$ This is natural, since the desired velocity is a quantity attached to the particles and should move together with them at the flow velocity. This model has been studied in great detail in [@AR] and proven to derive from a follow-the-leader model of car traffic in [@AKMR]. Of particular interest is the fact that this model is hyperbolic, with two Riemann invariants. The first one is obviously the desired velocity $w$, as (\[AR\_w\_lag\]) testifies. The second one is less obvious: it is nothing but the actual flow velocity $u$. Indeed, from (\[AR\_w\_lag\]) and using (\[AR\_n\]), we get: $$\begin{aligned}
\partial_t u + u \partial_x u &=& - (\partial_t p + u \partial_x p) \\
&=& - p'(\rho) (\partial_t \rho + u \partial_x \rho) \\
&=& p'(\rho) \rho \partial_x u , \end{aligned}$$ and therefore $$\begin{aligned}
& & \hspace{-1cm} \partial_t u + (u - p'(\rho) \rho) \partial_x u = 0 . \label{AR_u_lag}\end{aligned}$$ Therefore, information about the fluid velocity propagates with velocity $$\begin{aligned}
& & \hspace{-1cm} c_u = u - p'(\rho) \rho. \label{AR_u_vel}\end{aligned}$$ In the reference frame of the fluid, this gives rise to waves moving upstream of the flow with a speed equal to $- p'(\rho) \rho$.
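The characteristic structure just described is easy to verify symbolically. The short sketch below (Python with SymPy, purely illustrative, with a generic pressure law $p(\rho)$) writes the system (\[AR\_n\]), (\[AR\_u\_lag\]) in quasilinear form for the unknowns $(\rho,u)$ and recovers the two wave speeds $u$ and $u-\rho p'(\rho)$ of the two characteristic fields.

```python
import sympy as sp

rho, u = sp.symbols('rho u', positive=True)
p = sp.Function('p')(rho)          # generic increasing pressure law p(rho)

# Quasilinear form U_t + A(U) U_x = 0 with U = (rho, u):
#   rho_t + u rho_x + rho u_x = 0            (mass conservation, Eq. (AR_n))
#   u_t  + (u - rho p'(rho)) u_x = 0         (Eq. (AR_u_lag))
A = sp.Matrix([[u, rho],
               [0, u - rho*sp.diff(p, rho)]])

print(A.eigenvals())
# -> {u: 1, u - rho*Derivative(p(rho), rho): 1}: both speeds are real, so the
#    1W-AR model is hyperbolic; the second one is the c_u of Eq. (AR_u_vel).
```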
We can also consider the evolution of $\rho u$ instead of that of $u$. We obtain from (\[AR\_w\_lag\]) and using (\[AR\_n\]): $$\begin{aligned} \partial_t (\rho u) + \partial_x (\rho u u ) &=& - (\partial_t (\rho p) + \partial_x (\rho p u) ) \nonumber \\ &=& - \rho (\partial_t p + u \partial_x p) \nonumber \\ &=& - \rho \frac{dp}{dt} , \label{AR_rho_u} \end{aligned}$$ where we have introduced the material derivative $d/dt = \partial_t + u \partial_x$. This form is motivated by the observation [@AR] that drivers do not react to local gradients of the vehicle density but rather to their material derivative in the frame of the driver. This modification to standard gas dynamics like models of traffic was crucial in obtaining a cure to the various deficiencies of second order models as observed by Daganzo [@Dag]. Eq. (\[AR\_rho\_u\]) can also be put in the form $$\begin{aligned} \partial_t (\rho u) + \partial_x (\rho u w ) &=& - \partial_t (\rho p) \nonumber \\ &=& - \pi'(\rho) \partial_t \rho \nonumber \\ &=& \pi'(\rho) \partial_x (\rho u) , \label{AR_rho_u_2} \end{aligned}$$ with $$\begin{aligned} & & \hspace{-1cm} \pi(\rho) = \rho p(\rho), \quad \pi'(\rho) = \rho p'(\rho) + p(\rho) . \label{AR_pi} \end{aligned}$$ We will consider the 1W-AR model as a building block for the pedestrian model. In order to make the connection with a microscopic view of pedestrian flow, we consider a subcase of this model in the section below. Constant desired velocity {#subsec_toy} ------------------------- This one-way Constant Desired Velocity Aw-Rascle (1W-CAR) model assumes that the pedestrians can have only two velocities: either a fixed uniform velocity $V$ which is the same for all pedestrians and does not vary with time ; or zero, indicating that they are immobile. In other words, if because of the high density of obstacles in front, the pedestrians cannot proceed further with the velocity $V$, they have to stop. In this case, $$\begin{aligned} & & \hspace{-1cm} w = V $$ is a fixed value and therefore, the actual flow velocity $$\begin{aligned} & & \hspace{-1cm} u = V - p(\rho)\label{ARPed_u}\end{aligned}$$ is a local function of $\rho$. This leads to a first-order model where the flux velocity is given as a local prescription of the density. [**(1W-CAR model)**]{} Let $\rho(x,t) \in {\mathbb R}$ the density of pedestrians on the lane, $V \in {\mathbb R}_+$ the (constant) desired velocity of pedestrians and $p(\rho)$ the pressure. The 1W-CAR model is written: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho + \partial_x (\rho (V - p(\rho)) ) = 0. \label{ARPed_n} \end{aligned}$$ We denote by $f(\rho) = \rho (V - p(\rho))$ the mass flux. The quantity $p(\rho)$ being an increasing function of $\rho$, $f(\rho)$ has a concave shape (and is actually concave if $\rho p(\rho)$ is convex), which is consistent with classical first-order traffic models such as the Lighthill-Whitham-Richards (LWR) model [@LW]. Figure \[Fig\_LWR\] provides a graphical view of $f(\rho)$. It is interesting to note that the original 1W-AR model can be viewed as a LWR model with a driver-dependent flux function $f(\rho,w) = \rho (w-p)$ where $w$ is the driver dependent parameter, and consequently moves with the flow speed. It follows that the LWR is a useful lab to test concepts ultimately applying to the 1W-AR model. 
However, some of the features of the LWR model are too simple (such as the conservation of the maxima and minima of $\rho$) and a realistic description of the dynamics requires more complex models such as the 1W-AR model. ![Density flux $f(\rho) = \rho (V - p(\rho))$ as a function of $\rho$ in the 1W-CAR model.[]{data-label="Fig_LWR"}](figures/flux_LWR.eps) \[fig:nom\] It is also instructive to write the 1W-CAR model as a second order model, like the 1W-AR model. Indeed, using (\[AR\_rho\_u\_2\]) and (\[ARPed\_u\]), we can write (\[ARPed\_n\]) as: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho + \partial_x (\rho u) = 0 , \label{RARPed_n} \\ & & \hspace{-1cm} \partial_t (\rho u) + \partial_x (\rho u V ) = \pi'(\rho) \partial_x (\rho u) . \label{RARPed_u}\end{aligned}$$ Conversely, if $\rho$ and $u$ are solutions of this model, using the fact that $V$ is a constant together with eq. (\[RARPed\_n\]) to modify the second term of (\[RARPed\_u\]), and using the r.h.s. of eq. (\[AR\_rho\_u\_2\]) to modify the r.h.s of (\[RARPed\_u\]), we find, : $$\begin{aligned} & & \hspace{-1cm} \partial_t (\rho (p + u - V) ) = 0. $$ Therefore, if (\[ARPed\_u\]) is satisfied initially, it is satisfied at all times and we recover (\[ARPed\_n\]). The 1W-CAR model in the form (\[RARPed\_n\]), (\[RARPed\_u\]) has an interesting interpretation in terms of microscopic dynamics, when the pedestrians have two velocity states, the moving one with velocity $V$ and the steady one, with velocity $0$. Indeed, denoting by $g(x,t)$ the density of moving pedestrians and by $s(x,t)$ that of steady pedestrians, we have $$\begin{aligned} & & \hspace{-1cm} \rho = g + s . \end{aligned}$$ Because the moving pedestrians move with velocity $V$, we can write the pedestrian flux $\rho u$ as $$\begin{aligned} & & \hspace{-1cm} \rho u = V g . \label{RARPed_rhou=Vg} \end{aligned}$$ Since by (\[ARPed\_n\]), $\rho u = \rho (V-p(\rho))$, we deduce that $$\begin{aligned} & & \hspace{-1cm} g = \rho (1- \frac{p(\rho)}{V}), \quad s = \rho \frac{p(\rho)}{V} . \end{aligned}$$ Not surprisingly, the offset velocity scaled by the particle velocity is nothing but the proportion of steady particles and it is completely determined by the total density $\rho$. We deduce from (\[RARPed\_rhou=Vg\]) that system (\[RARPed\_n\]), (\[RARPed\_u\]) can be rewritten in the form: $$\begin{aligned} & & \hspace{-1cm} \partial_t ( g + s ) + \partial_x (V g) = 0 , \label{RARPed_n2} \\ & & \hspace{-1cm} \partial_t (V g ) + \partial_x (V^2 g ) = V \pi'(\rho) \partial_x g , \label{RARPed_u2} \end{aligned}$$ Dividing (\[RARPed\_u2\]) by $V$ and subtracting to (\[RARPed\_n2\]), we find: $$\begin{aligned} & & \hspace{-1cm} \partial_t g + \partial_x (V g) = \pi'(\rho) \partial_x g , \\ & & \hspace{-1cm} \partial_t s = - \pi'(\rho) \partial_x g . \end{aligned}$$ Thus, the term $\pi'(\rho) \partial_x g$ represents the algebraic transfer rate from immobile to moving particles, while $- \pi'(\rho) \partial_x g$ represents the opposite transfer. Therefore, this model assumes that the pedestrians decide to stop or become mobile again, based not only on local observation of the surrounding density, but on the observation of their gradients. More precisely, keeping in mind that $\pi'(\rho)$ and $V$ have the same sign, the transfer rate from the immobile to moving state is positive if the moving particle density increases in the downstream direction, indicating a lower congestion. 
Symmetrically, the transfer rate from the moving to immobile state increases if the moving particle density decreases in the downstream direction, indicating an increase of congestion. These evaluations of the derivative of the moving particle density are weighted by increasing functions of the density, meaning that the reactions of the pedestrians to their environment are faster if the density is large. We now turn to the introduction of the density constraint in the 1W-AR or 1W-CAR models. Introduction of the maximal density constraint in the 1W-AR model {#subsec_AR_density_constraint} ----------------------------------------------------------------- The maximal density constraint (also referred to below as the congestion constraint) is implemented in the expression of the velocity offset or pressure $p$. Two ways to achieve this goal are proposed. In the first one, $p$ is a smooth function of the particle density which blows up as the density approaches the maximal allowed value $\rho^*$. In the second one, congestion results in an incompressibility constraint which produces non-local effects with infinite speed of propagation of information. In congested regions, the pressure is no longer a function of the density but becomes implicitly determined by the incompressibility constraint. The transition from uncongested to congested regions is abrupt and appears as a kind of phase transition. This second approach can be realized as an asymptotic limit of the first approach where compression waves (or acoustic waves by analogy with gas dynamics) propagate at larger and larger speeds (so-called low Mach-number limit). Below, we successively discuss these two strategies. Then, we specifically consider the introduction of the congestion constraint within the 1W-CAR model. ### Congestion model with smooth transitions between uncongested and congested regions {#subsubsec_AR_density_constraint_smooth} To implement the congestion constraint, we will rely heavily on previous work [@BDDR; @BDLMRR; @Deg_Del], where this constraint has been implemented in the 1W-AR model. We take a convex function $p(\rho)$ such that $p(0) = 0$, $p'(0) \geq 0$ and $p(\rho) \to \infty$ as $\rho \to \rho^*$. More explicitly, we can choose for instance for the pressure: $$\begin{aligned} & & \hspace{-1cm} p(\rho) = p^\varepsilon (\rho) = P(\rho) + Q^\varepsilon(\rho) , \label{AR_p_blow} \\ & & \hspace{-1cm} P(\rho) = M \rho^{m}, \quad m > 1, \label{AR_p_blow2} \\ & & \hspace{-1cm} Q^\varepsilon(\rho) = \frac{\varepsilon}{ \left( \frac{1}{\rho} - \frac{1}{\rho^*} \right)^\gamma } , \quad \gamma > 1. \label{AR_p_blow3}\end{aligned}$$ $P(\rho)$ is the background pressure of the pedestrians in the absence of congestion (and is taken in the form of an isentropic gas dynamics equation of state). $Q^\varepsilon$ is a correction which turns on when the density is close to congestion (where $\varepsilon \ll 1$ is a small parameter), and modifies the background pressure so that it matches the congestion condition $p(\rho) \to \infty$ as $\rho \to \rho^*$. Indeed, as long as $\rho^*-\rho$ is not too small, the denominator in (\[AR\_p\_blow3\]) is finite and $Q^\varepsilon(\rho)$ is of order $\varepsilon$. Thus the pressure $p$ is dominated by the $P$ term. However, a crossover occurs when $$\left( \frac{1}{\rho} - \frac{1}{\rho^*} \right)^\gamma \sim \varepsilon, $$ i.e. when $$\rho^* - \rho \sim \rho \rho^* \varepsilon^{1/\gamma}. 
\label{crossover2}$$ Thus in a density range near $\rho^*$ which scales as $\varepsilon^{1/\gamma}$, the correction $Q^\varepsilon(\rho)$ becomes of order unity. This is represented schematically on Figure \[fig\_p\]. Note that the precise shape of the term $\left( \frac{1}{\rho} - \frac{1}{\rho^*} \right)^\gamma$ is not important, as it does not contribute to the pressure law, except in a narrow region close to congestion. The chosen expression ensures that $Q^\varepsilon(\rho=0)=0$, and that it becomes significant in the vicinity of $\rho^*$ only. Note also that $Q^\varepsilon$ is an increasing function of $\rho$, in order to keep the problem hyperbolic. The pressure singularity at $\rho = \rho^*$ ensures that the congestion density $\rho^*$ cannot be exceeded. Indeed, let us consider a closed system (e.g. the system is posed on an interval $[a,b]$ with periodic boundary conditions) for simplicity. Let $u_0$ and $w_0$ be the initial conditions and suppose that they satisfy $$\begin{aligned} & & \hspace{-1cm} 0 \leq u_{\mbox{\scriptsize m}} \leq u_0 \leq u_{\mbox{\scriptsize M}}, \quad 0 \leq w_{\mbox{\scriptsize m}} \leq w_0 \leq w_{\mbox{\scriptsize M}}, $$ for some constants $u_{\mbox{\scriptsize m}}$, $u_{\mbox{\scriptsize M}}$, $w_{\mbox{\scriptsize m}}$, $w_{\mbox{\scriptsize M}}$. Then, [@AR] notices that, at any time, $u$ and $w$ satisfy the same estimates: $$\begin{aligned} & & \hspace{-1.1cm} 0 \leq u_{\mbox{\scriptsize m}} \leq u(x, t) \leq u_{\mbox{\scriptsize M}}, \quad 0 \leq w_{\mbox{\scriptsize m}} \leq w(x,t) \leq w_{\mbox{\scriptsize M}}, \quad \forall (x,t) \in [a,b] \times {\mathbb R}_+. \label{estim}\end{aligned}$$ In other words, this estimate defines an invariant region of the system. It follows from the fact that, $u$ and $w$ being the two Riemann invariants, they are transported by the characteristic fields (see eqs. (\[AR\_u\_lag\]), (\[AR\_w\_lag\])) and therefore, satisfy the maximum principle. From (\[estim\]), we deduce that $w - u = p(\rho) \leq w_{\mbox{\scriptsize M}} - u_{\mbox{\scriptsize m}}$, and we also have that $p(\rho) \geq 0$ at all times. Let $p^{-1}$ be the inverse function of $p$. Since $p$ maps $[0,\rho^*)$ increasingly to ${\mathbb R}_+$, then $p^{-1}$ maps increasingly ${\mathbb R}_+$ onto $[0,\rho^*)$, from which the estimate $\rho \leq p^{-1} (w_{\mbox{\scriptsize M}} - u_{\mbox{\scriptsize m}}) < \rho^*$ follows. This indeed shows that the constraint $\rho < \rho^*$ is satisfied at all times. From the estimate (\[estim\]), we also see that $u$ cannot become negative, so that the estimate $w \geq p(\rho)$ is also satisfied at all times. With this $\varepsilon$-dependent pressure, the 1W-AR model becomes a perturbation problem, written as follows: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^\varepsilon + \partial_x (\rho^\varepsilon u^\varepsilon) = 0 , \label{EAR_n} \\ & & \hspace{-1cm} \partial_t (\rho^\varepsilon w^\varepsilon) + \partial_x (\rho^\varepsilon w^\varepsilon u^\varepsilon) = 0 , \label{EAR_u} \\ & & \hspace{-1cm} w^\varepsilon= u^\varepsilon + p^\varepsilon(\rho^\varepsilon). \label{EAR_w} \end{aligned}$$ The next section investigates the formal $\varepsilon \to 0$ limit. ### Congestion model with abrupt transitions between uncongested and congested regions {#subsubsec_AR_density_constraint_incompressibility} In the limit $\varepsilon \to 0$, the uncongested motion remains unperturbed until the density hits the exact value $\rho^*$. Once this happens, congestion suddenly turns on and modifies the dynamics abruptly. 
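To make this sharpening concrete, the following sketch evaluates the singular pressure $p^\varepsilon = P + Q^\varepsilon$ of (\[AR\_p\_blow\])-(\[AR\_p\_blow3\]) for decreasing values of $\varepsilon$; all parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Assumed parameters of the singular pressure law (illustrative only)
rho_star = 1.0          # maximal (congestion) density
M, m     = 1.0, 2.0     # background pressure P(rho) = M * rho**m
gamma    = 2.0          # exponent of the singular correction

def p_eps(rho, eps):
    P = M * rho**m
    Q = eps / (1.0 / rho - 1.0 / rho_star)**gamma
    return P + Q

rho = np.array([0.5, 0.9, 0.99, 0.999])
for eps in (1e-2, 1e-4, 1e-6):
    vals = ", ".join(f"{p_eps(r, eps):9.3f}" for r in rho)
    print(f"eps = {eps:7.0e}:  p_eps(rho) = [{vals}]")
# As eps decreases, p_eps stays close to P(rho) except in a narrower and
# narrower layer below rho_star, where it blows up: the transition from
# uncongested to congested behaviour sharpens.
```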
In the uncongested regions, the flow is compressible ; it becomes incompressible at the congestion density $\rho^*$. Therefore, in the limit $\varepsilon \to 0$, the abrupt transition from uncongested motion (when $\rho < \rho^*$) to congested motion (when $\rho = \rho^*$) corresponds to the crossing of a phase transition between a compressible to an incompressible flow regime. In the limit $\varepsilon \to 0$, the arguments of [@BDDR; @BDLMRR; @Deg_Del] can be easily adapted. Suppose that $\rho^\varepsilon \to \rho < \rho^*$. In this case, $Q^\varepsilon (\rho^\varepsilon) \to 0$ and we recover an 1W-AR model associated to the pressure $P(\rho)$: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^0 + \partial_x (\rho^0 u^0) = 0 , \label{0AR_n_NC} \\ & & \hspace{-1cm} \partial_t (\rho^0 w^0) + \partial_x (\rho^0 w^0 u^0) = 0 , \label{0AR_u_NC} \\ & & \hspace{-1cm} w^0= u^0 + P(\rho^0). \label{0AR_w_NC} \end{aligned}$$ If on the other hand, $\rho^\varepsilon \to \rho^*$, then $Q^\varepsilon (\rho^\varepsilon) \to \bar Q$ with $0 \leq \bar Q \leq w_{\mbox{\scriptsize M}}$. Therefore, the total pressure is such that $p^\varepsilon (\rho^\varepsilon) \to \bar p$ with $P(\rho^*) \leq \bar p $. In this case, the model becomes incompressible: $$\begin{aligned} & & \hspace{-1cm} \partial_x u^0 = 0 , \label{0AR_n_C} \\ & & \hspace{-1cm} \partial_t w^0 + u^0 \partial_x w^0 = 0 , \label{0AR_u_C} \\ & & \hspace{-1cm} w^0= u^0 + \bar p, \quad \mbox{ with } \quad P(\rho^*) \leq \bar p . \label{0AR_w_C} \end{aligned}$$ Note that in this congested region, the density does not vary (it is equal to $\rho^*$) and cannot determine the pressure anymore. Indeed, the functional relation between the density and the pressure is broken and $\bar p$ may be varying with $x$ even though $\rho$ does not. The spatial variations of $\bar p$ compensate exactly (through (\[0AR\_w\_C\])) the variations of $w^0$, in such a way that all the pedestrians, whatever their desired velocity is, move at the same speed in the congestion region. This can also be seen when taking the limit $\varepsilon \to 0$ in (\[AR\_u\_lag\]). Indeed, if $\rho^\varepsilon \to \rho^*$ with $p^\varepsilon (\rho^\varepsilon) ( = w^\varepsilon - u^\varepsilon )$ staying finite, then $\rho^\varepsilon - \rho^* = O(\varepsilon^{1/\gamma})$ (see (\[crossover2\])) and $d p^\varepsilon / d \rho \sim \varepsilon^{-1/\gamma} \to \infty$. Therefore, in the congested regime, the derivative of the pressure with respect to the density becomes infinite. Inserting this in (\[AR\_u\_lag\]) shows that $\partial_x u^\varepsilon \to 0$. This ensures that all the pedestrians move at the same speed. Simultaneously, this blocks any further increase of the density, which cannot become larger than $\rho^*$. Indeed, the mass conservation equation (\[AR\_n\]) tells us that $$\begin{aligned} & & \hspace{-1cm} \frac{d}{dt} \rho^\varepsilon = \partial_t \rho^\varepsilon + u^\varepsilon \partial_x \rho^\varepsilon = - \rho^\varepsilon \partial_x u^\varepsilon , $$ and consequently, if $\partial_x u^\varepsilon \to 0$, any further increase of the density is impeded. In the general case, we expect that the two limit regimes coexist. The congested region may appear anywhere in the flow, depending on the initial conditions. Congestion regions must be connected to uncongested regions by interface conditions. Across these interfaces, $\rho$ and $\rho w$, which are conserved quantities obey the Rankine-Hugoniot relations. 
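Written out, with $\sigma$ denoting the speed of such an interface and $[\,\cdot\,]$ the jump across it, these conditions take the standard form $$\sigma \, [\rho] = [\rho u], \qquad \sigma \, [\rho w] = [\rho w u].$$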
The quantity $w$, which is thought of as the (locally averaged) pedestrians’ desired velocity, is modified across the interfaces through these relations. However, the bounds (\[estim\]) are preserved (see [@AR]). Connecting congested and uncongested regions is a delicate problem which has been investigated in [@BDDR] by a careful inspection of Riemann problem solutions. Specifically, [@BDDR] treats the special case $M=0$ in (\[AR\_p\_blow\])-(\[AR\_p\_blow3\]). The present choice of the pressure (\[AR\_p\_blow\])-(\[AR\_p\_blow3\]) is slightly different: in the limit $\varepsilon \to 0$, it produces a non-zero pressure in the uncongested region, while [@BDDR] considers that uncongested regions are pressureless in this limit. Pressureless gas dynamics develops some unpleasant features (such as the occurrence of vacuum, weak instabilities, and so on). Keeping a non-zero pressure in the uncongested region in the limit $\varepsilon \to 0$ makes it possible to bypass some of these problems and represents an improvement over [@BDDR]. Of course, the precise choice of $m$ and $M$ must be fitted against experimental data. We do not attempt to derive interface conditions between uncongested and congested regions for the present choice of the pressure. Indeed, the perturbation problem (\[EAR\_n\])-(\[EAR\_w\]), even with a small value of $\varepsilon$, is easier to treat numerically than the connection problem between the two models (\[0AR\_n\_NC\])-(\[0AR\_w\_NC\]) and (\[0AR\_n\_C\])-(\[0AR\_w\_C\]). Therefore, we will not regard the limit model as a numerically effective one, but rather as a theoretical limit which provides some useful insight. Still, the numerical treatment of the perturbation problem requires some care. Of particular importance is the development of Asymptotic-Preserving schemes, i.e. of schemes that are able to capture the correct asymptotic limit when $\varepsilon \to 0$. This is not an easy problem because of the blow up of the pressure near $\rho^*$. Indeed, due to the blow up of the characteristic speed in (\[AR\_u\_lag\]), the CFL stability condition of a classical explicit shock-capturing method leads to a time-step constraint of the type $\Delta t = O( \varepsilon^{1/\gamma} ) \to 0$ as $\varepsilon \to 0$. For this reason, classical explicit shock-capturing methods cannot be used to explore the congestion constraint when $\varepsilon \to 0$ and Asymptotic-Preserving schemes are needed. Another reason for considering the perturbation problem (\[EAR\_n\])-(\[EAR\_w\]) instead of the limit model is that the congestion may appear gradually rather than as an abrupt phase transition from compressible to incompressible motion. In particular, for large pedestrian concentrations, some erratic motions occur (this is referred to as crowd turbulence) and might be modeled by a suitable (possibly different) choice of the perturbation pressure $Q^\varepsilon$. ### Introduction of the congestion constraint in the constant desired velocity 1W-CAR model {#subsubsec_TAR_density_constraint} [*(i) Congestion model with smooth transitions.*]{} The smooth pressure relations (\[AR\_p\_blow\])-(\[AR\_p\_blow3\]) can be used for the 1W-CAR model. Because $\rho$ now satisfies a convection equation: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho + (V - (\rho p )'(\rho)) \partial_x \rho =0, \end{aligned}$$ the initial bounds are preserved.
Indeed, suppose that $$\begin{aligned} & & \hspace{-1cm} 0 \leq \rho_{\mbox{\scriptsize m}} \leq \rho_0 \leq \rho_{\mbox{\scriptsize M}} < \rho^*, \end{aligned}$$ for some constants $\rho_{\mbox{\scriptsize m}}$, $\rho_{\mbox{\scriptsize M}}$; then, at any time, $\rho$ satisfies the same estimates: $$\begin{aligned} & & \hspace{-1cm} 0 \leq \rho_{\mbox{\scriptsize m}} \leq \rho(x, t) \leq \rho_{\mbox{\scriptsize M}} < \rho^*, \quad \forall (x,t) \in [a,b] \times {\mathbb R}_+. \end{aligned}$$ In this way, the constraint $0 \leq \rho \leq \rho^*$ is always satisfied. However, the fact that the bounds on the density are preserved by the dynamics can be viewed as unrealistic. In real pedestrian traffic, strips of congested and uncongested traffic spontaneously emerge from rather spatially homogeneous initial conditions. The generation of new maximal and minimal bounds is an important feature of real traffic systems which is not well taken into account in the 1W-CAR model and, more generally, in LWR models. [*(ii) Congestion model with abrupt transitions.*]{} If the limit $\varepsilon \to 0$ is considered, and if the upper bound $\rho_{\mbox{\scriptsize M}} = \rho_{\mbox{\scriptsize M}}^\varepsilon$ depends on $\varepsilon$ and is such that $\rho_{\mbox{\scriptsize M}}^\varepsilon \to \rho^*$, then some congestion regions can occur. The limit model in the uncongested region does not change, and is given by the single conservation relation (\[ARPed\_n\]) with the pressure $p(\rho^0)= P(\rho^0)$. In the congested region, we have $\rho^0 = \rho^*$, which implies $\partial_x u^0 = 0$. In terms of the moving and steady pedestrian densities, the congested regime means that $$\begin{aligned} & & \hspace{-1cm} \partial_x g^0 = 0, \qquad s^0 = \rho^* - g^0 , \end{aligned}$$ i.e. both the steady and moving pedestrian densities are uniform in the congested region. Two-way one-lane traffic model {#1lane} ============================== An Aw-Rascle model for two-way one-lane pedestrian traffic {#subsec_2AR_model} ---------------------------------------------------------- The extension of the 1W-AR model to 2-way traffic, denoted the 2W-AR model, may seem rather easy: the 2-way model is written as a system of two 1-way models. However, we will see that the mathematical properties of the 2-way models are rather different from their one-way counterparts. [**(2W-AR model)** ]{} Let $\rho_\pm$ the density of pedestrians, $u_\pm$ their velocity, $w_\pm$ their desired velocity and $p$ the pressure, with an index $+$ for the right-going pedestrians and $-$ for the left-going ones. The 2W-AR model for 2-way traffic is written: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_+ + \partial_x (\rho_+ u_+) = 0 , \label{2AR_n+} \\ & & \hspace{-1cm} \partial_t \rho_- + \partial_x (\rho_- u_-) = 0 , \label{2AR_n-} \\ & & \hspace{-1cm} \partial_t (\rho_+ w_+) + \partial_x (\rho_+ w_+ u_+) = 0 , \label{2AR_u+} \\ & & \hspace{-1cm} \partial_t (\rho_- w_-) + \partial_x (\rho_- w_- u_-) = 0 , \label{2AR_u-} \\ & & \hspace{-1cm} w_+= u_+ + p(\rho_+, \rho_-), \label{2AR_w+} \\ & & \hspace{-1cm} w_-= -u_- + p(\rho_-, \rho_+) . \label{2AR_w-} \end{aligned}$$ The coupling of the two flows of pedestrians in the 2W-AR model is through the prescription of the pressures, which are functions of the densities of the two species $\rho_+$ and $\rho_-$. Our conventions are that the desired velocities $w_\pm$ and the velocity offsets $p(\rho_\pm, \rho_\mp)$ are magnitudes, and as such, are positive quantities.
The actual velocities $u_\pm$ are signed quantities: $u_+ >0$ for right-going pedestrians and $u_-<0$ for left-going pedestrians. These conventions explain the different signs in factor of the velocities for (\[2AR\_w+\]) and (\[2AR\_w-\]). However, we do not exclude that, in particularly congested conditions, the right-going pedestrians may have to go backwards (i.e. to the left) or vice-versa, the left-going pedestrians have to go to the right. Therefore, we do not make any a priori assumption on the sign of $u_\pm$. For obvious symmetry reasons, the same pressure function is used for the two particles, with reversed arguments. The function $p$ is increasing with respect to both arguments since the velocity offset of one of the species increases when the density of either species increases. Some of the properties of the 1W-AR system extend to the 2W-AR one. For instance, the desired velocities are Lagrangian variables, as they satisfy: $$\begin{aligned} & & \hspace{-1cm} \partial_t w_+ + u_+ \partial_x w_+ = 0 , \label{2AR_w+_lag} \\ & & \hspace{-1cm} \partial_t w_- + u_- \partial_x w_- = 0 . \label{2AR_w-_lag} \end{aligned}$$ Unfortunately, the velocities $u_+$ and $u_-$ do not constitute Riemann invariants any longer because of the coupling induced by the dependence of $p$ upon $\rho_+$ and $\rho_-$. For this reason initial bounds on $u_+$ and $u_-$ are not preserved by the flow, as they were in the case of the 1W-AR model. Since the velocity offsets $p(\rho_+, \rho_-)$ and $p(\rho_-, \rho_+)$ are not bounded a priori, the velocities $u_+$ and $u_-$ can reverse sign when the velocity offsets are large. This is expected to reflect the fact that a dense crowd moving in one direction may force isolated pedestrians going the other way to move backwards. Of course, such a situation is only expected in close to congestion regimes. Nonetheless, the evolution of the pedestrian fluxes reflects the same phenomenology as in the one-way case, namely that pedestrians react to the Lagrangian derivative of the pressure, as shown by the following eqs. (which are the 2-way equivalents of eq. (\[AR\_rho\_u\])): $$\begin{aligned} & & \hspace{-1cm} \partial_t (\rho_+ u_+) + \partial_x (\rho_+ u_+ u_+ ) = - \rho_+ \, \left( \frac{d}{dt} \right)_+ [ p(\rho_+,\rho_-) ] , \label{2AR_rho_u+} \\ & & \hspace{-1cm} \partial_t (\rho_- u_-) + \partial_x (\rho_- u_- u_- ) = \rho_- \, \left( \frac{d}{dt} \right)_- [ p(\rho_-,\rho_+) ] , \label{2AR_rho_u-} \end{aligned}$$ where the material derivatives $(d/dt)_{\pm} = \partial_t + u_{\pm} \partial_x$ depend on what type of particles is concerned. These equations can also be put in the form (equivalent to (\[AR\_rho\_u\_2\]) for the 1W-AR model): $$\begin{aligned} & & \hspace{-1cm}\partial_t (\rho_+ u_+) + \partial_x (\rho_+ u_+ w_+ ) = \left[ p(\rho_+,\rho_-) + \rho_+ \left.\partial_1 p\right|_{(\rho_+,\rho_-)} \right] \partial_x (\rho_+ u_+) + \nonumber \\ & & \hspace{6cm} + \rho_+ \left.\partial_2 p\right|_{(\rho_+,\rho_-)} \partial_x (\rho_- u_-) , \label{2AR_rho_u+_2} \\ & & \hspace{-1cm} \partial_t (\rho_- u_-) - \partial_x (\rho_- u_- w_- ) = - \left[p(\rho_-,\rho_+) + \rho_- \left.\partial_1 p\right|_{(\rho_-,\rho_+)} \right] \partial_x (\rho_- u_-) \nonumber \\ & & \hspace{6cm} - \rho_- \left.\partial_2 p\right|_{(\rho_-,\rho_+)} \partial_x (\rho_+ u_+) , \label{2AR_rho_u-_2} \end{aligned}$$ where we denote by $\partial_1 p$ and $\partial_2 p$ the derivatives of the function $p$ with respect to its first and second arguments respectively. 
This form of the equations will be used below for the derivation of the Constant Desired Velocity model. The 2W-AR model is not always hyperbolic. Before stating the result, we introduce some notations. We define: $$\begin{aligned} & & \hspace{-1cm} c_{++} = \partial_1 p(\rho_+,\rho_-), \quad \quad c_{+-} = \partial_2 p(\rho_+,\rho_-), \label{eq:speed_def1} \\ & & \hspace{-1cm} c_{-+} = \partial_2 p(\rho_-,\rho_+), \quad \quad c_{--} = \partial_1 p(\rho_-,\rho_+). \label{eq:speed_def2}\end{aligned}$$ We assume that $p$ is increasing with respect to both arguments, which implies that all quantities defined by (\[eq:speed\_def1\]), (\[eq:speed\_def2\]) are non-negative. This assumption simply means that the pedestrian speed is reduced if the density of either category of pedestrians increases. For a given state $(\rho_+,w_+,\rho_-,w_-)$, the fluid velocities are given by: $$u_+ = w_+ - p(\rho_+,\rho_-), \quad u_- = - w_- + p(\rho_-, \rho_+). $$ We also define the following velocities $$c_{u_+} = u_+ - \rho_+ c_{++}, \quad c_{u_-} = u_- + \rho_- c_{--} . \label{eq:vel_waves}$$ These are the characteristic speeds (\[AR\_u\_vel\]) of the 1W-AR system. Specifically, $c_{u_+}$ is the speed at which information about velocity would propagate in a system of right-going pedestrians without coupling with the left-going ones. A similar explanation holds symmetrically for $c_{u_-}$. We now have the following theorem, the proof of which is elementary and left to the reader. The 2W-AR system is hyperbolic about the state $(\rho_+,w_+,\rho_-,w_-)$ if and only if the following condition holds true: $$\Delta := (c_{u_+} - c_{u_-})^2 - 4 \rho_+ \rho_- c_{+-} c_{-+} \geq 0. \label{eq:hyp_cond}$$ The quantities $u_\pm$ are two characteristic velocities of the system. If condition (\[eq:hyp\_cond\]) is satisfied, the two other characteristic velocities are $$\lambda_{\pm} = \frac{1}{2} \left[ c_{u_+} + c_{u_-} \pm \sqrt \Delta \right]. \label{eq:char_vel_2}$$ \[thm\_hyp\_2WAR\] Non-hyperbolicity occurs when the two characteristic velocities $c_{u_+}$ and $c_{u_-}$ of the uncoupled systems are close to each other. In this case, the first term of $\Delta$ is close to zero and does not compensate for the second term, which is negative. These conditions happen in particular when both velocities $c_{u_+}$ and $c_{u_-}$ are close to zero, which corresponds to the densities where the fluxes $\rho_+ u_+,$ $\rho_- u_-$ are maximal as functions of the densities $\rho_+$, $\rho_-$ respectively. In particular, in the one-way case with constant speed of figure \[Fig\_LWR\], this would correspond to the point $\rho_{\text{max}}$. These conditions correspond to the onset of congestion. Therefore, instabilities linked to the non-hyperbolic character of the model will develop in conditions close to congestion. The occurrence of regions of non-hyperbolicity is not entirely surprising. The instability of two counter-propagating flows is a common phenomenon in fluid mechanics. In plasma physics, the instability of two counter-propagating streams of charged particles is well known as the two-stream instability. The situation here is extremely similar, in spite of the different nature of the interactions (which are mediated by the long-range Coulomb force in the plasma case). The occurrence of a non-hyperbolic region is often viewed as detrimental, because in this region, the model is unstable.
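As a small numerical illustration of this condition, the sketch below evaluates $\Delta$ for the constant desired velocity closure $w_\pm = V$ and an assumed uncongested pressure $p(\rho_+,\rho_-) = P(\rho_+ + \rho_-)$ with $P(\rho) = M\rho^m$; the parameter values are illustrative only.

```python
import numpy as np

# Assumed two-way pressure, uncongested form p(rho_a, rho_b) = P(rho_a + rho_b)
V, M, m = 1.4, 1.0, 2.0
P  = lambda r: M * r**m
dP = lambda r: M * m * r**(m - 1)

def hyperbolicity(rho_p, rho_m):
    """Return Delta of (eq:hyp_cond) for the closure w_+ = w_- = V."""
    rho = rho_p + rho_m
    c_pp = c_pm = c_mp = c_mm = dP(rho)          # all partials coincide here
    u_p  =  V - P(rho)
    u_m  = -V + P(rho)
    c_up = u_p - rho_p * c_pp                    # c_{u_+}
    c_um = u_m + rho_m * c_mm                    # c_{u_-}
    return (c_up - c_um)**2 - 4.0 * rho_p * rho_m * c_pm * c_mp

for rho_p, rho_m in [(0.1, 0.1), (0.3, 0.3), (0.45, 0.45), (0.3, 0.6)]:
    d = hyperbolicity(rho_p, rho_m)
    print(f"rho_+={rho_p:.2f}, rho_-={rho_m:.2f}:  Delta={d:+.3f} "
          f"({'hyperbolic' if d >= 0 else 'non-hyperbolic'})")
```

With these assumed values, dilute symmetric states come out hyperbolic, while dense ones (close to the flux maximum) violate the condition, in line with the discussion above.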
On the other hand, self-organization phenomena like lane formation or the onset of crowd turbulence cannot be described by an everywhere stable model. For instance, morphogenesis is explained by the occurrence of the Turing instability in systems of diffusion equations. Here, diffusion is not taken into account and the instability originates from a different phenomenon. However, in practice, some small but non-zero diffusion always exists. This diffusion damps the small scale structures but keeps the large scale structures growing. The typical size of the observed structures can be linked to the threshold wave-number below which instability occurs. Numerical simulations to be presented in a forthcoming work will allow us to determine whether the phenomena which are observed in dense crowds may be explained by this type of instability. In section \[sec\_math\], a stability analysis of a diffusive two-way LWR model will provide more quantitative support to these concepts. The constant desired velocity Aw-Rascle model for two-way one-lane pedestrian traffic {#subsec_2TAR_model} ------------------------------------------------------------------------------------- To construct the two-way constant desired velocity Aw-Rascle model (2W-CAR) for two-way one-lane pedestrian traffic, we must set $$\begin{aligned} & & \hspace{-1cm} w_+ = w_-= V , \label{2ARPed_w=v}\end{aligned}$$ and $$\begin{aligned} & & \hspace{-1cm} u_+ = V - p(\rho_+,\rho_-) , \quad u_- = -V + p(\rho_-,\rho_+) . \label{2ARPed_u}\end{aligned}$$ This leads to the following model: [**(2W-CAR model)**]{} Let $\rho_+$ and $\rho_-$ the densities of pedestrians moving to the right and to the left respectively, $V$ the (constant) desired velocity of pedestrians and $p$ the pressure term. The 2W-CAR model is written: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_+ + \partial_x (\rho_+ (V - p(\rho_+,\rho_-)) ) = 0 , \label{2ARPed_n+} \\ & & \hspace{-1cm} \partial_t \rho_- - \partial_x (\rho_- (V - p(\rho_-,\rho_+)) ) = 0 . \label{2ARPed_n-} \end{aligned}$$ These are two first-order models coupled by a velocity offset which depends on the two densities. We can find the same interpretation of this model in terms of moving and steady particles as in the one-way model case. Using (\[2AR\_rho\_u+\_2\]), (\[2AR\_rho\_u-\_2\]) and (\[2ARPed\_w=v\]), we can write: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_+ + \partial_x (\rho_+ u_+) = 0 , \label{R2TAR_n+} \\ & & \hspace{-1cm} \partial_t \rho_- + \partial_x (\rho_- u_-) = 0 , \label{R2TAR_n-} \\ & & \hspace{-1cm}\partial_t (\rho_+ u_+) + \partial_x (\rho_+ u_+ V ) = \left[ p(\rho_+,\rho_-) + \rho_+ \left.\partial_1 p\right|_{(\rho_+,\rho_-)} \right] \partial_x (\rho_+ u_+) + \nonumber \\ & & \hspace{6cm} + \rho_+ \left.\partial_2 p\right|_{(\rho_+,\rho_-)} \partial_x (\rho_- u_-) , \label{R2TAR_rho_u+} \\ & & \hspace{-1cm} \partial_t (\rho_- u_-) - \partial_x (\rho_- u_- V ) = - \left[p(\rho_-,\rho_+) + \rho_- \left.\partial_1 p\right|_{(\rho_-,\rho_+)} \right] \partial_x (\rho_- u_-) \nonumber \\ & & \hspace{6cm} - \rho_- \left.\partial_2 p\right|_{(\rho_-,\rho_+)} \partial_x (\rho_+ u_+) . \label{R2TAR_rho_u-} \end{aligned}$$ Conversely, if $\rho_+$, $u_+$, $\rho_-$, $u_-$ are solutions of this model, using the same method as in the one-way case, we easily find that: $$\begin{aligned} & & \hspace{-1cm} \partial_t (\rho_\pm (p \pm u_\pm - V) ) = 0 . 
\label{R2ARPed_const}\end{aligned}$$ Therefore, if (\[2ARPed\_u\]) is satisfied initially, it is satisfied at all times and we recover (\[2ARPed\_n+\]), (\[2ARPed\_n-\]). Now, we denote by $g_\pm(x,t)$ the density of the moving particles and by $s_\pm(x,t)$ that of the steady particles with a $+$ (respectively a $-$) indicating the right-going (respectively left-going) pedestrians. Although steady, the pedestrians have a desired motion either to the right or to the left, and we need to keep track of these intended directions of motions. We have $$\begin{aligned} & & \hspace{-1cm} \rho_\pm = g_\pm + s_\pm \quad \mbox{ and } \quad \rho_\pm u_\pm = \pm V g_\pm . \label{R2ARPed_rhou=Vg}\end{aligned}$$ We deduce that $$\begin{aligned} & & \hspace{-1cm} \frac{s_+}{\rho_+} = \frac{p(\rho_+,\rho_-)}{V}, \quad \quad \frac{s_-}{\rho_-} = \frac{p(\rho_-,\rho_+)}{V}. \label{R2ARPed_s_pm}\end{aligned}$$ Therefore, the offset velocities $p(\rho_+,\rho_-)$ and $p(\rho_-,\rho_+)$ scaled by the particle velocity $V$ represent the proportions of the steady particles $s^+/\rho_+$ and $s^-/\rho_-$ respectively. Now, we can rewrite (\[R2TAR\_n+\])-(\[R2TAR\_rho\_u-\]) as follows: $$\begin{aligned} & & \hspace{-1cm} \partial_t (g_+ + s_+) + \partial_x (V g_+) = 0 , \label{R2TAR_n+2} \\ & & \hspace{-1cm} \partial_t (g_- + s_-) - \partial_x (V g_-) = 0 , \label{R2TAR_n-2} \\ & & \hspace{-1cm}\partial_t (V g_+) + \partial_x (V^2 g_+ ) = \left[ p(\rho_+,\rho_-) + \rho_+ \left.\partial_1 p\right|_{(\rho_+,\rho_-)} \right] \partial_x (V g_+) \nonumber \\ & & \hspace{6cm} - \rho_+ \left.\partial_2 p\right|_{(\rho_+,\rho_-)} \partial_x (V g_-) , \label{R2TAR_rho_u+2} \\ & & \hspace{-1cm} \partial_t (V g_-) - \partial_x (V^2 g_- ) = - \left[p(\rho_-,\rho_+) + \rho_- \left.\partial_1 p\right|_{(\rho_-,\rho_+)} \right] \partial_x (V g_-) + \nonumber \\ & & \hspace{6cm} + \rho_- \left.\partial_2 p\right|_{(\rho_-,\rho_+)} \partial_x (V g_+) . \label{R2TAR_rho_u-2} \end{aligned}$$ By simple linear combinations, this system is equivalent to $$\begin{aligned} & & \hspace{-1cm}\partial_t g_+ + \partial_x (V g_+ ) = \left[ p(\rho_+,\rho_-) + \rho_+ \left.\partial_1 p\right|_{(\rho_+,\rho_-)} \right] \partial_x g_+ \nonumber \\ & & \hspace{6cm} - \rho_+ \left.\partial_2 p\right|_{(\rho_+,\rho_-)} \partial_x g_- , \label{R2TAR_rho_g+} \\ & & \hspace{-1cm} \partial_t g_- - \partial_x (V g_- ) = - \left[p(\rho_-,\rho_+) + \rho_- \left.\partial_1 p\right|_{(\rho_-,\rho_+)} \right] \partial_x g_- + \nonumber \\ & & \hspace{6cm} + \rho_- \left.\partial_2 p\right|_{(\rho_-,\rho_+)} \partial_x g_+ , \label{R2TAR_rho_g-} \\ & & \hspace{-1cm}\partial_t s_+ = - \left[ p(\rho_+,\rho_-) + \rho_+ \left.\partial_1 p\right|_{(\rho_+,\rho_-)} \right] \partial_x g_+ + \rho_+ \left.\partial_2 p\right|_{(\rho_+,\rho_-)} \partial_x g_- , \label{R2TAR_rho_s+} \\ & & \hspace{-1cm} \partial_t s_- = \left[p|_{(\rho_-,\rho_+)} + \rho_- \partial_1 p|_{(\rho_-,\rho_+)} \right] \partial_x g_- - \rho_- \partial_2 p|_{(\rho_-,\rho_+)} \partial_x g_+ . \label{R2TAR_rho_s-} \end{aligned}$$ Like in the one-way model, we find that the transition rates from the steady to moving states or vice-versa depend on the derivatives of the concentrations of moving pedestrians. Now, both the left and right going pedestrian total densities appear in the expressions of the transitions rates for either species. This is due to the coupling through the pressure term, which depends on both densities. Like the 2W-AR model, the 2W-CAR model is not always hyperbolic. 
Using the same notations as in the previous section, we have the: The 2W-CAR system is hyperbolic about the state $(\rho_+,\rho_-)$ if and only if condition (\[eq:hyp\_cond\]) is satisfied. In this case, the two characteristic velocities are given by (\[eq:char\_vel\_2\]). \[thm\_hyp\_2WCAR\] This can be seen directly from equations (\[2ARPed\_n+\]) and (\[2ARPed\_n-\]), once they are put under the form $$\partial_t \left(\begin{array}{c} \rho_+\\\rho_-\end{array}\right) + \left(\begin{array}{cc} c_{u_+} & -\rho_+ c_{+-}\\ \rho_- c_{-+} & c_{u_-} \end{array}\right) \partial_x \left(\begin{array}{c} \rho_+\\\rho_-\end{array}\right) =0 .$$ We refer to the end of section \[subsec\_2AR\_model\] for more comments about this property. Introduction of the congestion constraint in the 2W-AR model {#subsec_2WAR_density_constraint} ------------------------------------------------------------ ### Congestion model with smooth transitions {#subsubsec_2WAR_density_constraint_smooth} It is difficult to make a prescription for the function $p$. Its expression should be fitted to experimental data. Here we propose a form which allows us to investigate the effects of congestion. We propose: $$\begin{aligned} & & \hspace{-1cm} p(\rho_+, \rho_-) = p^\varepsilon (\rho_+,\rho_-) = P(\rho) + Q^\varepsilon(\rho_+,\rho_-) , \quad \mbox{ with } \quad \rho = \rho_+ + \rho_- \label{2AR_p_blow} \\ & & \hspace{-1cm} P(\rho) = M \rho^{m}, \quad m \geq 1, \label{2AR_p_blow2} \\ & & \hspace{-1cm} Q^\varepsilon(\rho_+,\rho_-) = \frac{\varepsilon}{q(\rho_+) \left( \frac{1}{\rho} - \frac{1}{\rho^*} \right)^\gamma} , \quad \gamma > 1. \label{2AR_p_blow3}\end{aligned}$$ The rationale for this formula is as follows. First, in uncongested regime, we expect that the velocity offsets of the right and left going pedestrians are the same, this common offset being a function of the total particle density. Thus, the uncongested flow pressure $P$ given by (\[2AR\_p\_blow2\]) is a function of $\rho$ only, and has the same shape as in the one-way case. Congestion occurs when the total density $\rho$ becomes close to $\rho^*$. Therefore, formula (\[2AR\_p\_blow3\]) resembles (\[AR\_p\_blow3\]), except for the prefactor $q(\rho_+)$. With this choice of the pressure, we anticipate that the constraint $$\begin{aligned} & & \rho = \rho_+ + \rho_- \leq \rho^* \label{2AR_constraint}\end{aligned}$$ will be satisfied everywhere in space and time, like in the one-way case. The prefactor $q(\rho_+)$ takes into account the fact that the velocity offset for the majority particle is smaller than that of the minority particle. Therefore, we prescribe $q$ to be an increasing function of $\rho_+$. For further usage, we note the following formula, which follows from eliminating $( ({1}/{\rho}) - ({1}/{\rho^*}) )^\gamma$ between $Q^\varepsilon(\rho_+,\rho_-)$ and $Q^\varepsilon(\rho_-,\rho_+)$: $$\begin{aligned} & & \hspace{-1cm} q(\rho_+) \, Q^\varepsilon(\rho_+,\rho_-) = q(\rho_-) \, Q^\varepsilon(\rho_-,\rho_+) . \label{2AR_p_blow4}\end{aligned}$$ It is more convenient to express this formula as $$\frac{Q^\varepsilon(\rho_+,\rho_-)}{Q^\varepsilon(\rho_-,\rho_+)} = \frac{q(\rho_-)}{q(\rho_+)},$$ remembering that $q$ is an increasing function. This formula states that the velocity offset for the right and left-going particles are inversely proportional to the ratios of a (function of) the densities. 
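As a small illustration of this sharing rule, the following sketch evaluates the ratio $Q^\varepsilon(\rho_+,\rho_-)/Q^\varepsilon(\rho_-,\rho_+)$; the prefactor $q(\rho)=1+\rho$ and the numerical values are assumptions made for the example only.

```python
# Assumed ingredients (illustrative only)
rho_star = 1.0
gamma    = 2.0
eps      = 1e-3
q = lambda r: 1.0 + r          # assumed increasing prefactor q

def Q(rho_a, rho_b):
    rho = rho_a + rho_b
    return eps / (q(rho_a) * (1.0 / rho - 1.0 / rho_star)**gamma)

rho_p, rho_m = 0.7, 0.25       # right-going majority, left-going minority
ratio = Q(rho_p, rho_m) / Q(rho_m, rho_p)
print(f"Q(rho+,rho-)/Q(rho-,rho+) = {ratio:.3f}  (= q(rho_-)/q(rho_+) = "
      f"{q(rho_m)/q(rho_p):.3f})")
# The majority (right-going) pedestrians feel the smaller velocity offset.
```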
Since $q$ is increasing and taking $\rho_- < \rho_+$ as an example, we deduce that the velocity offset of the right-going particles will be less than that of the left-going particles. In other words, the flow of the majority category of pedestrians is less impeded than that of the minority one. In order to keep $Q^\varepsilon(\rho_\pm,\rho_\mp)$ small whenever $\rho<\rho^*$, we require that $q(\rho_\pm) = O(1)$ when $\rho_\pm < \rho^*$ . Physically relevant expressions of $q(\rho_\pm)$ can be obtained from real experiments. A possible extension, that we will not consider here, would be to have different functions $q_+(\rho_+)$ and $q_-(\rho_-)$. This could model the fact that for example, a crowd heading towards a train platform could be more pushy than the one going in the opposite direction. The 2W-AR model with $\varepsilon$-dependent pressure becomes a perturbation problem: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^\varepsilon_+ + \partial_x (\rho^\varepsilon_+ u^\varepsilon_+) = 0 , \label{E2AR_n+} \\ & & \hspace{-1cm} \partial_t \rho^\varepsilon_- + \partial_x (\rho^\varepsilon_- u^\varepsilon_-) = 0 , \label{E2AR_n-} \\ & & \hspace{-1cm} \partial_t (\rho^\varepsilon_+ w^\varepsilon_+) + \partial_x (\rho^\varepsilon_+ w^\varepsilon_+ u^\varepsilon_+) = 0 , \label{E2AR_u+} \\ & & \hspace{-1cm} \partial_t (\rho^\varepsilon_- w^\varepsilon_-) + \partial_x (\rho^\varepsilon_- w^\varepsilon_- u^\varepsilon_-) = 0 , \label{E2AR_u-} \\ & & \hspace{-1cm} w^\varepsilon_+= u^\varepsilon_+ + p^\varepsilon(\rho^\varepsilon_+, \rho^\varepsilon_-), \label{E2AR_w+} \\ & & \hspace{-1cm} w^\varepsilon_-= -u^\varepsilon_- + p^\varepsilon(\rho^\varepsilon_-, \rho^\varepsilon_+) . \label{E2AR_w-} \end{aligned}$$ ### Congestion model with abrupt transitions {#subsubsec_2WAR_density_constraint_incompressibility} This case corresponds to the formal limit $\varepsilon \to 0$ of the previous model. Suppose that $\rho^\varepsilon \to \rho < \rho^*$. In this case, $Q^\varepsilon(\rho^\varepsilon_\pm,\rho^\varepsilon_\mp) \to 0$ and we recover a 2W-AR model associated to the pressure $P(\rho)$: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^0_+ + \partial_x (\rho^0_+ u^0_+) = 0 , \label{02AR_n+_NC} \\ & & \hspace{-1cm} \partial_t \rho^0_- + \partial_x (\rho^0_- u^0_-) = 0 , \label{02AR_n-_NC} \\ & & \hspace{-1cm} \partial_t (\rho^0_+ w^0_+) + \partial_x (\rho^0_+ w^0_+ u^0_+) = 0 , \label{02AR_u+_NC} \\ & & \hspace{-1cm} \partial_t (\rho^0_- w^0_-) + \partial_x (\rho^0_- w^0_- u^0_-) = 0 , \label{02AR_u-_NC} \\ & & \hspace{-1cm} w^0_+= u^0_+ + P(\rho^0), \quad u^0_+ \geq 0, \label{02AR_w+_NC} \\ & & \hspace{-1cm} w^0_-= -u^0_- + P(\rho^0), \quad u^0_- \leq 0 . \label{02AR_w-_NC} \end{aligned}$$ If on the other hand, $\rho^\varepsilon \to \rho^*$, then $Q^\varepsilon (\rho_+^\varepsilon,\rho_-^\varepsilon) \to \bar Q_+$ and $Q^\varepsilon (\rho_-^\varepsilon,\rho_+^\varepsilon) \to \bar Q_-$. Furthermore, following (\[2AR\_p\_blow4\]), $\bar Q_+$ and $\bar Q_-$ are related by: $$\begin{aligned} & & \hspace{-1cm} q(\rho^0_+) \, \bar Q_+ = q(\rho^0_-) \, \bar Q_- . \label{2AR_p_blow5}\end{aligned}$$ Therefore, the total pressure is such that $p^\varepsilon (\rho_+^\varepsilon,\rho_-^\varepsilon) \to \bar p_+$ and $p^\varepsilon (\rho_-^\varepsilon,\rho_+^\varepsilon) \to \bar p_-$ with $P(\rho^*) \leq \bar p_\pm $ and $\bar p_+$ and $\bar p_-$ related through (\[2AR\_p\_blow5\]) (with $\bar Q_\pm$ replaced by $\bar p_\pm- P(\rho^*)$). 
We stress the fact that ${\bar Q_\pm}$, and consequently ${\bar p_\pm}$, are not local functions of $\rho_+^0,\,\rho_-^0$ (only the ratio ${\bar Q_+}/{\bar Q_-} = q(\rho_-^0)/q(\rho_+^0)$ is a local function of $\rho_+^0,\,\rho_-^0$). Indeed, the values of ${\bar Q_\pm}$ for two different solutions of the model may be different, even if the local values of $(\rho_+^0,\,\rho_-^0)$ are the same. Therefore, there is no local function of $(\rho_+^0,\,\rho_-^0)$ which can match the value of ${\bar Q_\pm}$. In this case, the model becomes: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^0_+ + \partial_x (\rho^0_+ u^0_+) = 0 , \nonumber \\ & & \hspace{-1cm} \partial_t \rho^0_- + \partial_x (\rho^0_- u^0_-) = 0 , \nonumber \\ & & \hspace{-1cm} \partial_t (\rho^0_+ w^0_+) + \partial_x (\rho^0_+ w^0_+ u^0_+) = 0 , \nonumber \\ & & \hspace{-1cm} \partial_t (\rho^0_- w^0_-) + \partial_x (\rho^0_- w^0_- u^0_-) = 0 , \nonumber \\ & & \hspace{-1cm} w^0_+= u^0_+ + \bar p_+ \quad \mbox{ with } \quad P(\rho^*) \leq \bar p_+ ,\nonumber \\ & & \hspace{-1cm} w^0_-= -u^0_- + \bar p_-\quad \mbox{ with } \quad P(\rho^*) \leq \bar p_- , \nonumber \\ & & \hspace{-1cm} \rho^0_+ + \rho^0_- = \rho^*, \label{02AR_constant} \\ & & \hspace{-1cm} q(\rho^0_+) \, (\bar p_+ - P(\rho^*)) = q(\rho^0_-) \, (\bar p_- - P(\rho^*)) . \label{02AR_consistency}\end{aligned}$$ Relations (\[02AR\_constant\]) and (\[02AR\_consistency\]) furnish the two supplementary relations which allow us to compute the two additional quantities $\bar p_+$ and $\bar p_-$. The last relation (\[02AR\_consistency\]) specifies how, at congestion, the left- and right-going pedestrians share the available space. We see that this sharing relation depends upon the choice of the function $q$. Obviously, $q$ is an input of the model which must be determined from experimental measurements. If some flow asymmetry must be taken into account (for instance if one crowd is more pushy than the other one), different functions $q_+(\rho_+)$ and $q_-(\rho_-)$ can be used. This model is a system of first-order differential equations in which the fluxes are implicitly determined by the constraint (\[02AR\_constant\]). As a consequence of this constraint, the total particle flux $\rho^0_+ u^0_+ + \rho^0_- u^0_-$ is constant within the congestion region. We note the difference between this constrained model and the constrained 1W-AR model (see section \[subsubsec\_AR\_density\_constraint\_incompressibility\]). In the 1W-AR model, there was a single unknown congestion pressure $\bar p$ and a single density constraint $\rho = \rho^*$. In the 2W-AR model, there are two congestion pressures $\bar p_+$ and $\bar p_-$, which play a similar role in the dynamics of their associated category of pedestrians. However, there is still a single density constraint, acting on the total density $\rho_+ + \rho_- = \rho^*$. The additional condition which allows for the computation of the two congestion pressures is provided by the ‘space-sharing’ constraint (\[02AR\_consistency\]). The two constraints express very different physical requirements and must be combined in order to find the two congestion pressures which, themselves, have a symmetric role. ### Introduction of the congestion constraint in the 2W-CAR model {#subsubsec_T2AR_density_constraint} [*(i) Congestion model with smooth transitions.*]{} The smooth pressure relations (\[2AR\_p\_blow\])-(\[2AR\_p\_blow3\]) can be used for the 2W-CAR model. With this pressure relation, we anticipate that the bound $\rho \leq \rho^*$ is enforced.
[*(ii) Congestion model with abrupt transitions.*]{} If the limit $\varepsilon \to 0$ is considered, then the limit model in the uncongested region remains of the same form, i.e. is given by (\[2ARPed\_n+\]), (\[2ARPed\_n-\]) with the pressure given by $p(\rho^0_+ , \rho^0_-)= P(\rho^0_+ + \rho^0_-)$. In the congested region, using the same arguments as in section \[subsubsec\_2WAR\_density\_constraint\_incompressibility\], we find that $(\rho^0_+, \rho^0_-)$ satisfies: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^0_+ + \partial_x (\rho^0_+ (V - \bar p_+) ) = 0 , \nonumber \\ & & \hspace{-1cm} \partial_t \rho^0_- - \partial_x (\rho^0_- (V - \bar p_-) ) = 0 , \nonumber \\ & & \hspace{-1cm} \rho^0_+ + \rho^0_- = \rho^*, \label{02ARPed_constant} \\ & & \hspace{-1cm} q(\rho^0_+) \, (\bar p_+ - P(\rho^*)) = q(\rho^0_-) \, (\bar p_- - P(\rho^*)) . \label{02ARPed_consistency}\end{aligned}$$ Again, this model gives rise to a system of first-order differential equations in which the fluxes are implicitly determined by the constraints (\[02ARPed\_constant\]), (\[02ARPed\_consistency\]). As a consequence of these constraints, the total particle flux $\rho^0_+ u^0_+ + \rho^0_- u^0_-$ (where $u^0_\pm = V - \bar p_\pm$) is constant within the congestion region. Two-way multi-lane traffic model {#mlane} ================================ A Two-way multi-lane Aw-Rascle model of pedestrians {#subsec_mlane_principles} --------------------------------------------------- We now consider a multi-lane model to describe the structure of the flow in the cross-sectional direction of the corridor. The models presented so far considered averaged quantities in the cross section of the corridor. However, it is a well-observed phenomenon that two-way pedestrian flow presents interesting spontaneous lane structures (see e.g. [@Burstedde_2001]), with a preferential side depending on sociological behavior: pedestrians show a preference for the right side in western countries, while the preference is to the left in Japan for instance. In order to allow for a description of the cross-section of the flow, we discretize space in this cross-sectional direction and suppose that pedestrians walk along discrete lanes, like cars on a freeway, with lane changing probabilities depending on the state of the downstream flow. In this way, we design a model which may, if the parameters are suitably chosen, exhibit the spontaneous emergence of a structuring of the flow into lanes. We stress, however, that the lanes in our model must be viewed as a mere spatial discretization and that spontaneously emerging pedestrian lanes may actually consist of several contiguous discrete lanes of our model. Let $k \in {\mathbb Z}$ be the lane index. For the time being, we consider an infinite number of lanes; of course, in practice there is a maximal number of $K$ lanes and $k \in \{1, \ldots, K\}$. The extra conditions due to the finiteness of the number of lanes are discarded here for simplicity. For each lane, we write a 2W-AR model in the form described in section \[1lane\], supplemented by lane-changing source terms.
The ML-AR model is given by: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_{k,+} + \partial_x (\rho_{k,+} \, u_{k,+}) = S_{k,+} , \label{kAR_n+} \\ & & \hspace{-1cm} \partial_t \rho_{k,-} + \partial_x (\rho_{k,-} \, u_{k,-}) = S_{k,-} , \label{kAR_n-} \\ & & \hspace{-1cm} \partial_t (\rho_{k,+} \, w_{k,+}) + \partial_x (\rho_{k,+} \, w_{k,+} \, u_{k,+}) = R_{k,+} , \label{kAR_u+} \\ & & \hspace{-1cm} \partial_t (\rho_{k,-} \, w_{k,-}) + \partial_x (\rho_{k,-} \, w_{k,-} \, u_{k,-}) = R_{k,-} , \label{kAR_u-} \\ & & \hspace{-1cm} w_{k,+}= u_{k,+} + p_k(\rho_{k,+}, \rho_{k,-}), \label{kAR_w+} \\ & & \hspace{-1cm} w_{k,-}= -u_{k,-} + p_k(\rho_{k,-}, \rho_{k,+}) . \label{kAR_w-} \end{aligned}$$ where $S_{k,\pm}$ and $R_{k,\pm}$ are source terms coming from the lane-changing transition rates. We allow for different pressure relations in the different lanes, to take into account for instance that the behavior of the pedestrians may be more aggressive in the fast lanes than in the slow ones, or to take into account that circulation along the walls may be different than in the middle of the corridor. This point must be assessed by comparisons with the experiments. We specify the pressure relation in each lane in the form of (\[2AR\_p\_blow\]), (\[2AR\_p\_blow3\]) with parameter values depending on $k$. We denote by $$\begin{aligned} & & \hspace{-1cm} \rho_{k} = \rho_{k,+} + \rho_{k,-} , \label{kAR_rho} \end{aligned}$$ the total density on the $k$-th lane. We assume that the congestion density $\rho^*$ is the same for all lanes (this assumption can obviously be relaxed). Interaction terms in the multi-lane model {#subsec_mlane_sources} ----------------------------------------- We assume that pedestrians prefer to change lane than to reduce their speed, i.e. they change lane if they feel that the offset velocity of their lane (i.e. $p_k(\rho_{k,+}, \rho_{k,-})$ in the case of right-going pedestrians on lane $k$) increases. If facing such an increase, right-going pedestrians change from lane $k$ to lanes $k \pm 1$ (not changing their direction of motion) with rates $\lambda_{k \to k\pm 1}^{+}$. Similarly, these rates are $\lambda_{k \to k\pm 1}^{-}$ for left-going pedestrians. These rates increase with the value of $(d/dt)_{k,+} (p_k(\rho_{k,+}, \rho_{k,-}))$ for $\lambda_{k \to k\pm 1}^{+}$ and with $(d/dt)_{k,-} (p_k(\rho_{k,-}, \rho_{k,+}))$ for $\lambda_{k \to k\pm 1}^{-}$ to indicate that the lane changing probability is increased when an increase of the downstream density is detected. We have denoted by $(d/dt)_{k,\pm}$ the material derivatives for particles moving on the $k$-th lane in the positive or negative direction: $(d/dt)_{k,\pm} = \partial_t + u_{k,\pm} \partial_x$. Strongly congested lanes do not attract new pedestrians. Therefore, $\lambda_{k \to k+ 1}^{+}$ is also a decreasing functions of $\rho_{k+1}$ which vanishes at congestion, when $\rho_{k+1} = \rho^*$. Similarly, $\lambda_{k \to k+ 1}^{-}$ is decreasing with $\rho_{k+1}$ and vanishes at congestion $\rho_{k+1} = \rho^*$ and $\lambda_{k \to k- 1}^{\pm}$ decreases with $\rho_{k-1}$ and vanishes at congestion $\rho_{k-1} = \rho^*$. 
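The model only prescribes these qualitative properties of the rates; one possible functional form satisfying them, given here purely as an illustrative assumption, is sketched below.

```python
# Assumed lane-changing rate: increases with the material derivative of the
# pressure felt in the current lane, decreases with the target-lane density,
# and vanishes at congestion. Purely illustrative functional form.
rho_star = 1.0
lam0     = 0.5      # overall rate scale [1/s] (assumed)

def lane_change_rate(dp_dt, rho_target):
    """Rate lambda_{k -> k'} for one species of pedestrians."""
    pressure_term  = max(dp_dt, 0.0)                         # react only to increases
    congestion_cut = max(1.0 - rho_target / rho_star, 0.0)   # vanishes at congestion
    return lam0 * pressure_term * congestion_cut

print(lane_change_rate(dp_dt=0.4, rho_target=0.3))   # moderately filled target lane
print(lane_change_rate(dp_dt=0.4, rho_target=1.0))   # congested target lane: 0.0
print(lane_change_rate(dp_dt=-0.2, rho_target=0.3))  # pressure decreasing: 0.0
```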
Given these assumptions on the transition rates, the lane-changing source terms for the density equations are written: $$\begin{aligned} & & \hspace{-1.2cm} S_{k,\alpha} = \lambda_{k+1 \to k}^{\alpha} \, \rho_{k+1,\alpha} + \lambda_{k-1 \to k}^{\alpha} \,\rho_{k-1,\alpha} - (\lambda_{k \to k + 1}^{\alpha} + \lambda_{k \to k - 1}^{\alpha} ) \rho_{k,\alpha}, \quad \alpha = \pm \label{S_k_al_1}.\end{aligned}$$ It is easy to see that this formulation gives: $$\begin{aligned} & & \hspace{-1cm} \sum_{k \in {\mathbb Z}} S_{k,\alpha} = 0, \quad \alpha = \pm\end{aligned}$$ which implies the balance equation of the total number of particles moving in a given direction: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_\alpha + \partial_x j_\alpha = 0, \quad \rho_\alpha = \sum_{k \in {\mathbb Z}} \rho_{k,\alpha}, \quad j_\alpha = \sum_{k \in {\mathbb Z}} \rho_{k,\alpha} u_{k,\alpha}, \quad \alpha = \pm. \label{cons_rho_mlane}\end{aligned}$$ Concerning the rates $R_{k,\pm}$, we consider that $w_{k,\pm}$ being a Lagrangian quantity, the quantities $\rho_{k,\pm} w_{k,\pm}$ vary according to the same rates as the densities themselves. Hence, we let: $$\begin{aligned} & & \hspace{-1cm} R_{k,\alpha} = \lambda_{k+1 \to k}^{\alpha} \, \rho_{k+1,\alpha} \, w_{k+1,\alpha}\,+\, \lambda_{k-1 \to k}^{\alpha} \,\rho_{k-1,\alpha} \, w_{k-1,\alpha} \, \nonumber \\ & & \hspace{3.5cm} - (\lambda_{k \to k + 1}^{\alpha} + \lambda_{k \to k - 1}^{\alpha} ) \rho_{k,\alpha} w_{k,\alpha}, \quad \alpha = \pm. \label{R_k_al_1} \end{aligned}$$ The material derivatives of $w_{k,\pm}$ satisfy: $$\begin{aligned} & & \hspace{-1cm} \left( \frac{d w_{k,+}}{dt} \right)_{k,+} \!\!\! : = \partial_t w_{k,+} + u_{k,+} \, \partial_x w_{k,+} = \frac{1}{\rho_{k,+}} (R_{k,+} - w_{k,+} \, S_{k,+}) = \nonumber \\ & & \hspace{-.2cm} = \lambda_{k+1 \to k}^{+} \, \frac{\rho_{k+1,+}}{\rho_{k,+}} ( w_{k+1,+} \!-\! w_{k,+} ) + \lambda_{k-1 \to k}^{+} \, \frac{\rho_{k-1,+}}{\rho_{k,+}} ( w_{k-1,+} \!-\! w_{k,+} ) , \label{kAR_w+_lag} \\ & & \hspace{-1cm} \left( \frac{d w_{k,-}}{dt} \right)_{k,-} \!\!\! : = \partial_t w_{k,-} + u_{k,-} \, \partial_x w_{k,-} = \frac{1}{\rho_{k,-}} (R_{k,-} - w_{k,-} \, S_{k,-}) = \nonumber \\ & & \hspace{-.2cm} = \lambda_{k+1 \to k}^{-} \, \frac{\rho_{k+1,-}}{\rho_{k,-}} ( w_{k+1,-} \!-\! w_{k,-} ) + \lambda_{k-1 \to k}^{-} \, \frac{\rho_{k-1,-}}{\rho_{k,-}} ( w_{k-1,-} \!-\! w_{k,-} ) . \label{kAR_w-_lag} \end{aligned}$$ The right-hand sides of these equations are not zero because the arrival of pedestrians from different lanes with a different preferred velocity modifies the average preferred velocity. The ’constant desired velocity version’ of the two-way multi-lane Aw-Rascle model of pedestrians {#subsec_mlane_toy} ------------------------------------------------------------------------------------------------ To construct the constant desired velocity Aw-Rascle model for two-way multi-lane pedestrian traffic (ML-CAR model), we must set $$\begin{aligned} & & \hspace{-1cm} w_{k,+} = w_{k,-}= V , \label{kARPed_w=v} \end{aligned}$$ and $$\begin{aligned} & & \hspace{-1cm} u_{k,+} = V - p(\rho_{k,+},\rho_{k,-}) , \quad u_{k,-} = - V + p(\rho_{k,-},\rho_{k,+}) . \label{kARPed_u} \end{aligned}$$ We can check in this case that $S_{k,\pm}$ and $R_{k,\pm}$ have been defined in a coherent way by (\[S\_k\_al\_1\]) and (\[R\_k\_al\_1\]), i.e. that they are such that equations (\[kAR\_n+\]-\[kAR\_n-\]) and (\[kAR\_u+\]-\[kAR\_u-\]) become equivalent. 
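As a quick illustration of the conservation property stated above, the following sketch evaluates the source terms (\[S\_k\_al\_1\]) on a finite strip of lanes (with the rates out of the first and last lanes set to zero, an assumption made for this example) and checks that they sum to zero.

```python
import numpy as np

K = 5                                      # number of lanes (illustrative)
rng = np.random.default_rng(0)
rho      = rng.uniform(0.1, 0.8, size=K)   # assumed densities of one species
lam_up   = rng.uniform(0.0, 0.5, size=K)   # rates lambda_{k -> k+1}
lam_down = rng.uniform(0.0, 0.5, size=K)   # rates lambda_{k -> k-1}
# no lane changes out of the strip, so no mass is lost at the edges
lam_up[-1] = 0.0
lam_down[0] = 0.0

S = np.zeros(K)
for k in range(K):
    gain = 0.0
    if k + 1 < K:
        gain += lam_down[k + 1] * rho[k + 1]   # arrivals from lane k+1
    if k - 1 >= 0:
        gain += lam_up[k - 1] * rho[k - 1]     # arrivals from lane k-1
    loss = (lam_up[k] + lam_down[k]) * rho[k]  # departures from lane k
    S[k] = gain - loss

print(f"sum_k S_k = {S.sum():+.2e}")           # ~ 0 up to round-off
```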
The corresponding model is written: [**(ML-CAR model)**]{} Let $\rho_{k,\pm}$ the density of pedestrians in the $k$-th lane, $V$ the constant desired velocity of pedestrian and $p_k$ the pressure term. The ML-CAR model is given by: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_{k,+} + \partial_x \Big(\rho_{k,+} (V - p_k(\rho_{k,+},\rho_{k,-})) \Big) = S_{k,+} , \label{kARPed_n+} \\ & & \hspace{-1cm} \partial_t \rho_{k,-} - \partial_x \Big(\rho_{k,-} (V - p_k(\rho_{k,-},\rho_{k,+})) \Big) = S_{k,-} , \label{kARPed_n-} \end{aligned}$$ where $S_{k,\pm}$ is given by (\[S\_k\_al\_1\]). The features of this model are those of the two-way, one-lane CAR model of section \[subsec\_2TAR\_model\], combined with the features of the source terms $S_{k,\pm}$ as outlined in section \[subsec\_mlane\_sources\]. Introduction of the congestion constraint in the multi-lane ML-AR model {#subsec_kWAR_density_constraint} ----------------------------------------------------------------------- ### Congestion model with smooth transitions {#subsubsec_kWAR_density_constraint_smooth} The prescription for the pressure functions $p_k$ are the same as in section \[subsubsec\_2WAR\_density\_constraint\_smooth\], except for a possible $k$-dependence of the constants, namely: $$\begin{aligned} & & \hspace{-1cm} p_k(\rho_{k,+}, \rho_{k,-}) = p_k^\varepsilon (\rho_{k,+},\rho_{k,-}) = P_k(\rho_k) + Q_k^\varepsilon(\rho_{k,+},\rho_{k,-}) , \label{kAR_p_blow} \\ & & \hspace{-1cm} P_k(\rho_k) = M_k \rho_k^{m_k}, \quad m_k \geq 1, \label{kAR_p_blow2} \\ & & \hspace{-1cm} Q_k^\varepsilon(\rho_{k,+},\rho_{k,-}) = \frac{\varepsilon}{q_k(\rho_{k,+}) \left( \frac{1}{\rho_k} - \frac{1}{\rho^*} \right)^{\gamma_k} } , \quad \gamma_k > 1. \label{kAR_p_blow3}\end{aligned}$$ With this pressure law, the ML-AR model becomes a perturbation problem. This is indicated by equipping all unknowns with an exponent $\varepsilon$. This pressure relation can be used in the constant desired velocity model of section \[subsec\_mlane\_toy\] where all particles move with the same speed $V$. ### Congestion model with abrupt transitions {#subsubsec_kWAR_density_constraint_incompressibility} This case corresponds to the formal limit $\varepsilon \to 0$ of the previous model. Suppose that $\rho_k^\varepsilon \to \rho_k < \rho^*$. In this case, $Q_k^\varepsilon (\rho_{k,+}^\varepsilon, \rho_{k,-}^\varepsilon) \to 0$ and we recover a ML-AR model associated to the pressure $P_k(\rho_k)$: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^0_{k,+} + \partial_x (\rho^0_{k,+} \, u^0_{k,+}) = S^0_{k,+} , \label{k0AR_n+_NC} \\ & & \hspace{-1cm} \partial_t \rho^0_{k,-} + \partial_x (\rho^0_{k,-} \, u^0_{k,-}) = S^0_{k,-} , \label{k0AR_n-_NC} \\ & & \hspace{-1cm} \partial_t (\rho^0_{k,+} \, w^0_{k,+}) + \partial_x (\rho^0_{k,+} \, w^0_{k,+} \, u^0_{k,+}) = R^0_{k,+} , \label{k0AR_u+_NC} \\ & & \hspace{-1cm} \partial_t (\rho^0_{k,-} \, w^0_{k,-}) + \partial_x (\rho^0_{k,-} \, w^0_{k,-} \, u^0_{k,-}) = R^0_{k,-} , \label{k0AR_u-_NC} \\ & & \hspace{-1cm} w^0_{k,+}= u^0_{k,+} + P_k(\rho^0_k), \label{k0AR_w+_NC} \\ & & \hspace{-1cm} w^0_{k,-}= -u^0_{k,-} + P_k(\rho^0_k). 
\label{k0AR_w-_NC} \end{aligned}$$ If on the other hand, $\rho_k^\varepsilon \to \rho^*$, the model becomes: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho^0_{k,+} + \partial_x (\rho^0_{k,+} \, u^0_{k,+}) = S^0_{k,+} , \label{k0AR_n+_C} \\ & & \hspace{-1cm} \partial_t \rho^0_{k,-} + \partial_x (\rho^0_{k,-} \, u^0_{k,-}) = S^0_{k,-} , \label{k0AR_n-_C} \\ & & \hspace{-1cm} \partial_t (\rho^0_{k,+} \, w^0_{k,+}) + \partial_x (\rho^0_{k,+} \, w^0_{k,+} \, u^0_{k,+}) = R^0_{k,+} , \label{k0AR_u+_C} \\ & & \hspace{-1cm} \partial_t (\rho^0_{k,-} \, w^0_{k,-}) + \partial_x (\rho^0_{k,-} \, w^0_{k,-} \, u^0_{k,-}) = R^0_{k,-} , \label{k0AR_u-_C} \\ & & \hspace{-1cm} w^0_{k,+}= u^0_{k,+} + \bar p_{k,+} \quad \mbox{ with } \quad P(\rho^*) \leq \bar p_{k,+} ,\label{k0AR_w+_C} \\ & & \hspace{-1cm} w^0_{k,-}= - u^0_{k,-} + \bar p_{k,-} \quad \mbox{ with } \quad P(\rho^*) \leq \bar p_{k,-} , \label{k0AR_w-_C} \\ & & \hspace{-1cm} \rho^0_{k,+} + \rho^0_{k,-} = \rho^*, \label{0kAR_constant} \\ & & \hspace{-1cm} q_k(\rho^0_{k,+}) \, (\bar p_{k,+} - P_k(\rho^*)) = q_k(\rho^0_{k,-}) \, (\bar p_{k,-} - P_k(\rho^*)) . \label{0kAR_consistency}\end{aligned}$$ The source terms are unchanged compared to the $\varepsilon >0$ case, and the interpretation of the model is the same as in section \[subsubsec\_2WAR\_density\_constraint\_incompressibility\]. Performing the limit $\varepsilon \to 0$ in the constant desired velocity model of section \[subsec\_mlane\_toy\] follows a similar procedure and is left to the reader. Study of the diffusive two-way, one-lane CAR models {#sec_math} =================================================== In this section, we restrict ourselves to the 2W-CAR model presented in section \[subsec\_2TAR\_model\] (i.e. without the introduction of the maximal density constraint), and we investigate the stability of a diffusive perturbation of this model. The goal of this section is to show that the addition of a small diffusivity stabilizes the large wave-numbers in the region of state space where hyperbolicity is lacking. The threshold value of the wave-number below which the instability grows can be related to the size of macroscopic structures observed in real crowd flows. Theoretical analysis -------------------- We consider the following model which is a slight generalization of the 2W-CAR model: $$\begin{aligned} & & \hspace{-1cm} \partial_t \rho_+ + \partial_x f(\rho_+,\rho_-) = \delta \, \partial_x^2 \rho_+ , \label{eq:diff_+} \\ & & \hspace{-1cm} \partial_t \rho_- - \partial_x f(\rho_-,\rho_+) = \delta \, \partial_x^2 \rho_- . \label{eq:diff_-}\end{aligned}$$ Typically, for the 2W-CAR model, $f(\rho_+,\rho_-) = \rho_+ (V - p(\rho_+,\rho_-))$ but we do not restrict ourselves to this simple flux prescription. The assumptions on $f$ are that for fixed $\rho_-$, the function $\rho_+ \to f(\rho_+, \rho_-)$ has the bell-shaped curve of figure \[Fig\_LWR\], which is characteristic of the LWR flux. For fixed $\rho_+$, the function $\rho_- \to f(\rho_+, \rho_-)$ is just assumed decreasing, meaning that the flux of right-going pedestrians is further reduced as the density of left-going pedestrians increases. By symmetry, the diffusivities $\delta$ are assumed to be the same for the two species of particles. Of course, the diffusivities may depend on the densities themselves, in which case they may be different. But we will discard this possibility here. 
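Although the numerical method is not discussed here, a very simple explicit sketch of (\[eq:diff\_+\]), (\[eq:diff\_-\]), based on first-order Lax-Friedrichs transport and centered diffusion, is given below for illustration; the flux and all parameter values are assumptions, and this is not necessarily the scheme used for the simulations discussed below.

```python
import numpy as np

# Assumed flux and parameters (illustrative only)
V, M, m, delta = 1.4, 1.0, 2.0, 0.01

def f(rho_a, rho_b):
    """Assumed 2W-CAR-type flux f(rho_a, rho_b) = rho_a * (V - p(rho_a + rho_b))."""
    return rho_a * (V - M * (rho_a + rho_b)**m)

L, N = 10.0, 200
dx = L / N
x = np.arange(N) * dx
rho_p = 0.3 + 0.05 * np.sin(2 * np.pi * x / L)   # perturbed uniform states
rho_m = 0.3 + 0.05 * np.cos(2 * np.pi * x / L)

dt, T = 0.4 * dx, 2.0                            # crude time step (assumed stable)

def lax_friedrichs(rho, flux, sign):
    """One LF step for d_t rho + sign * d_x flux = 0 on a periodic grid."""
    rho_avg = 0.5 * (np.roll(rho, -1) + np.roll(rho, 1))
    dflux   = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    return rho_avg - sign * dt * dflux

t = 0.0
while t < T:
    lap_p = (np.roll(rho_p, -1) - 2 * rho_p + np.roll(rho_p, 1)) / dx**2
    lap_m = (np.roll(rho_m, -1) - 2 * rho_m + np.roll(rho_m, 1)) / dx**2
    new_p = lax_friedrichs(rho_p, f(rho_p, rho_m), +1.0) + dt * delta * lap_p
    new_m = lax_friedrichs(rho_m, f(rho_m, rho_p), -1.0) + dt * delta * lap_m
    rho_p, rho_m, t = new_p, new_m, t + dt

print(f"total mass: {rho_p.sum() * dx:.4f} (+), {rho_m.sum() * dx:.4f} (-)")
```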
We denote by $$\begin{aligned} & & \hspace{-1cm} \tilde c_{++} = \partial_1 f(\rho_+,\rho_-), \quad \quad \tilde c_{+-} = \partial_2 f(\rho_+,\rho_-), \label{eq:speedu_def1} \\ & & \hspace{-1cm} \tilde c_{-+} = \partial_2 f(\rho_-,\rho_+), \quad \quad \tilde c_{--} = \partial_1 f(\rho_-,\rho_+). \label{eq:speedu_def2}\end{aligned}$$ These quantities are related to those defined in section \[subsec\_2AR\_model\] for the 2W-AR model by $$\tilde c_{++} = c_{u_+}, \quad \tilde c_{+-} = - \rho_+ c_{+-}, \quad \tilde c_{--} = - c_{u_-}, \quad \tilde c_{-+} = - \rho_- c_{-+}. \label{eq:constants}$$ With the assumptions on $f$, we have that $\tilde c_{+-} \leq 0$, $\tilde c_{-+} \leq 0$, while $\tilde c_{++}$ (resp. $\tilde c_{--}$) decreases from positive to negative values when $\rho_+$ (resp. $\rho_-$) increases. Any state such that $(\rho_+,\rho_-)$ is independent of $x$ is a stationary solution. We study the linearized stability of the system about these uniform steady states. Denoting by $(r_+,r_-)$ its unknowns, the linearized system is written (note the minus signs in the second equation, inherited from (\[eq:diff\_-\])): $$\begin{aligned} & & \hspace{-1cm} \partial_t r_+ + \tilde c_{++} \partial_x r_+ + \tilde c_{+-} \partial_x r_- = \delta \, \partial_x^2 r_+ , \label{eq:lin_diff_+} \\ & & \hspace{-1cm} \partial_t r_- - \tilde c_{-+} \partial_x r_+ - \tilde c_{--} \partial_x r_- = \delta \, \partial_x^2 r_- . \label{eq:lin_diff_-}\end{aligned}$$ We look for solutions which are pure Fourier modes of the form $r_\pm = \bar r_\pm \exp i (\xi x - st)$ where $\bar r_\pm$ is the amplitude of the mode, and $\xi$ and $s$ are its wave number and frequency. Inserting the Fourier Ansatz into (\[eq:lin\_diff\_+\]), (\[eq:lin\_diff\_-\]) leads to a homogeneous linear system for $(\bar r_+,\bar r_-)$. This system has non-trivial solutions if and only if the determinant of the linear system vanishes. This results in a relation between $s$ and $\xi$ (the dispersion relation). In this analysis, we restrict ourselves to $\xi \in {\mathbb R}$ and investigate the stability of the model in time. We denote by $\lambda = s / \xi$ the phase velocity of the mode. A given mode remains bounded in time, and is therefore stable, if and only if the imaginary part of $s$ is non-positive. In the converse situation, the mode is unstable. The system is said to be linearly stable about the uniform state $(\rho_+,\rho_-)$ if and only if all the modes are stable for all $\xi \in {\mathbb R}$. In the converse situation, the system is unstable, and it is then interesting to look at the range of wave numbers $\xi \in {\mathbb R}$ which generate unstable modes. The following result follows easily from simple calculations: \(i) Suppose $(\rho_+,\rho_-)$ is such that the following condition: $$\Delta := (\tilde c_{++} + \tilde c_{--})^2 - 4 \tilde c_{+-} \tilde c_{-+} \geq 0 , \label{eq:hyp_cond2}$$ is satisfied. Then the system is linearly stable about the uniform steady state $(\rho_+,\rho_-)$. For any given $\xi \in{\mathbb R}$, there exist two modes whose phase velocities $\lambda_\pm(\xi)$ are given by $$\lambda_{\pm} (\xi) = \frac{1}{2} \left[ \tilde c_{++} - \tilde c_{--} - 2 i \delta \xi \pm \sqrt \Delta \right]. \label{eq:phase_vel}$$ \(ii) Suppose that $(\rho_+,\rho_-)$ is such that (\[eq:hyp\_cond2\]) does not hold. Then the system is linearly unstable about the uniform steady state $(\rho_+,\rho_-)$. 
Moreover, we have $$|\xi| \leq \frac{\sqrt{|\Delta|}}{2\delta} \quad \Longleftrightarrow \quad \exists \mbox{ a mode such that }\mbox{Im} \, \, s > 0 \quad \mbox{(unstable mode)}. \label{eq:unstable}$$ The phase velocity is given by $$\lambda_{\pm} (\xi) = \frac{1}{2} \left[ \tilde c_{++} - \tilde c_{--} - 2 i \delta \xi \pm i \sqrt {|\Delta|} \right]. \label{eq:phase_vel_2}$$ \[prop:stab\_diffus\] We note that if (\[eq:constants\]) is inserted in (\[eq:hyp\_cond2\]), we recover (\[eq:hyp\_cond\]). Therefore, the addition of diffusion does not change the criterion for stability or instability. However, in the unstable case, all modes are unstable for the diffusion-free model (this would correspond to $\delta = 0$ in (\[eq:unstable\])). The addition of a non-zero diffusivity stabilizes the modes corresponding to the small scales (large $\xi$). However, the large scale modes (small $\xi$) remain unstable. We also note that, in the stable case, letting the diffusivity go to zero allows us to recover the characteristic speed of the diffusion-free model (\[eq:char\_vel\_2\]). For unstable modes, (\[eq:phase\_vel\_2\]) provides the typical growth rate $\nu_g$: it is equal to the positive imaginary part of $|\xi| \lambda_+$, and is given by $$\nu_g = \frac{\sqrt{|\Delta|}}{2} |\xi| - \delta \xi^2.$$ It is maximal for $$|\xi| = \frac{\sqrt{|\Delta|}}{4\delta}.$$ Therefore, the typical length scale $L_s$ of the unstable structures is given by the inverse of this wave-number: $$L_s = \frac{4 \delta}{\sqrt{|\Delta|}} ,$$ because the other modes, having smaller growth rates, will eventually become negligible compared to the amplitude of the dominant one. This length scale $L_s$ and the time scale $1/\nu_g$ may be related to observations and provide a way to assess the model and calibrate it against empirical data. Numerical simulations --------------------- In this part, we investigate numerically the system (\[eq:diff\_+\]),(\[eq:diff\_-\]); in particular, we are interested in the profile of the solutions depending on whether the system is in a hyperbolic region or not. With this aim, we first fix a flux function $f(\rho_+,\rho_-)$ defined as: $$\label{eq:flux_f_simu} f(\rho_+,\rho_-) = \rho_+\,\frac{g(\rho_++\rho_-)}{\rho_+ + \rho_-},$$ where $g$ is a flux depending on the total density $\rho=\rho_+ + \rho_-$. We choose for $g$ a simple function increasing on $[0,a]$ and decreasing on $[a,1]$: $$ g(x) = \left\{ \begin{array}{ll} \displaystyle x - \frac{x^2}{2a} & \text{for } 0 \leq x \leq a \\ \displaystyle \frac{a}{2} - \frac{a(a-x)^2}{2(1-a)^2} & \text{for } a \leq x \leq 1 \\ 0 & \text{otherwise} \end{array} \right.$$ Note that here, in order to keep the simulations simple, we choose a much smoother expression for $f$ than the one that was proposed in section \[subsec\_2WAR\_density\_constraint\] to enforce the density constraint. As a result, the density here can become larger than $\rho^*=1$. ![Left figure: the flux function $f(\rho_+,\rho_-)$ (\[eq:flux\_f\_simu\]) used in our simulations. Right figure: the region of non-hyperbolicity (\[eq:hyp\_cond2\]) of the model, i.e. $\Delta<0$ in this region.[]{data-label="fig:function_fg"}](figures/function_fg.eps "fig:") ![Left figure: the flux function $f(\rho_+,\rho_-)$ (\[eq:flux\_f\_simu\]) used in our simulations. Right figure: the region of non-hyperbolicity (\[eq:hyp\_cond2\]) of the model, i.e. $\Delta<0$ in this region.[]{data-label="fig:function_fg"}](figures/Delta_f.eps "fig:") In the following, we take the maximum of $g$ to be at $.7$, i.e. $a=.7$. 
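Before turning to the properties of $f$, the following short Python sketch (ours, not part of the original text) evaluates the flux (\[eq:flux\_f\_simu\]) with the piecewise-quadratic $g$ above ($a=.7$) and the hyperbolicity indicator $\Delta$ of (\[eq:hyp\_cond2\]) by centered finite differences; this is one simple way of reproducing the non-hyperbolic region displayed in figure \[fig:function\_fg\] (right). The grid resolution and the finite-difference step are arbitrary choices.

```python
# Sketch: evaluate f of eq. (flux_f_simu) with the piecewise-quadratic g (a = 0.7)
# and the hyperbolicity indicator Delta of eq. (hyp_cond2) on a grid of states.
import numpy as np

a = 0.7  # location of the maximum of g, as in the text

def g(x):
    out = np.zeros_like(x)
    m1 = (x >= 0) & (x <= a)
    m2 = (x > a) & (x <= 1)
    out[m1] = x[m1] - x[m1]**2 / (2 * a)
    out[m2] = a / 2 - a * (a - x[m2])**2 / (2 * (1 - a)**2)
    return out                      # g vanishes outside [0, 1]

def f(rp, rm):
    rho = rp + rm
    return np.where(rho > 0, rp * g(rho) / np.maximum(rho, 1e-12), 0.0)

def Delta(rp, rm, h=1e-6):
    # c_++ = d1 f(rho_+, rho_-), c_+- = d2 f(rho_+, rho_-), and symmetrically
    # for the left-going density, approximated by centered differences.
    cpp = (f(rp + h, rm) - f(rp - h, rm)) / (2 * h)
    cpm = (f(rp, rm + h) - f(rp, rm - h)) / (2 * h)
    cmm = (f(rm + h, rp) - f(rm - h, rp)) / (2 * h)
    cmp_ = (f(rm, rp + h) - f(rm, rp - h)) / (2 * h)
    return (cpp + cmm)**2 - 4 * cpm * cmp_

rp, rm = np.meshgrid(np.linspace(0.01, 0.99, 200), np.linspace(0.01, 0.99, 200))
print("fraction of non-hyperbolic states on the grid:", (Delta(rp, rm) < 0).mean())
states = np.array([[0.35, 0.3], [0.5, 0.3], [0.4, 0.3]])   # the three states used below
print("Delta at these states:", Delta(states[:, 0], states[:, 1]))
```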
The function $f$ is a decreasing function of $\rho_-$ since $g$ is concave with $g(0)=0$ (so that $g(x)/x\geq g'(x)$), and $f$ is zero when the total mass is at least $1$, i.e. $f(\rho_+,\rho_-)=0$ if $\rho_+ +\rho_-\geq 1$. We plot the graph of the function $f$ in figure \[fig:function\_fg\] (left). Then, we numerically compute $\Delta$ to determine the region where the system is non-hyperbolic (see figure \[fig:function\_fg\], right). To solve numerically the system (\[eq:diff\_+\]),(\[eq:diff\_-\]), we use a central scheme [@kurganov_new_2000]. With this aim, we consider a uniform grid in space $\{x_i\}_i$ ($\Delta x=x_{i+1}-x_i$) on a fixed interval $[0,L]$ along with a fixed time step $\Delta t$. We denote by $U_i^n$ the approximation of $(\rho_+,\rho_-)$ on the cell $[x_{i-1/2},x_{i+1/2}]$ (with $x_{i+1/2} = x_i+\Delta x/2$) at the time $n\Delta t$. The numerical scheme consists of the following algorithm: $$\frac{U_i^{n+1}-U_i^n}{\Delta t} + \frac{1}{\Delta x}\left(F_{i+1/2}-F_{i-1/2}\right) = \delta\, \frac{U_{i-1}^n-2U_i^n+U_{i+1}^n}{\Delta x^2}.$$ Here, $F_{i+1/2}$ denotes the numerical flux at $x_{i+1/2}$ defined as: $$F_{i+1/2} = \frac{F(U_{i+1/2}^L)+F(U_{i+1/2}^R)}{2} - a_{i+1/2} \frac{U_{i+1/2}^R-U_{i+1/2}^L}{2},$$ where $F$ is the flux of the system $F(\rho_+,\rho_-)=(f(\rho_+,\rho_-),-f(\rho_-,\rho_+))^T$, the vectors $U_{i+1/2}^L$ and $U_{i+1/2}^R$ are respectively the left and right values of $(\rho_+,\rho_-)$ at $x_{i+1/2}$ computed using a MUSCL scheme [@leveque2002fvm], and $a_{i+1/2}$ is the maximum of the absolute values of the eigenvalues (\[eq:char\_vel\_2\]) of the system at $x_i$ and $x_{i+1}$: $$ a_{i+1/2} = \max(|\lambda_i^{\pm}|,|\lambda_{i+1}^{\pm}|).$$ As initial condition, we use a uniform stationary state $(\rho_+,\rho_-)$ perturbed by stochastic noise: $$\rho_+(0,x) = \rho_+ + \sigma\epsilon_+(x) \quad,\quad \rho_-(0,x) = \rho_- + \sigma\epsilon_-(x),$$ with $\epsilon_+(x)$ and $\epsilon_-(x)$ two independent white noises and $\sigma$ the standard deviation of the noise. We use periodic boundary conditions for our simulations. The parameters of our simulations are the following: space mesh $\Delta x=1$, time step $\Delta t=.2$ (CFL$=.406$), diffusion coefficient $\delta=.4$ and standard deviation of the noise $\sigma=10^{-2}$. To illustrate our numerical scheme, we use three different initial conditions. First, we pick two values for $(\rho_+,\rho_-)$ in the hyperbolic region: $$ \rho_+ = .35 \quad,\quad \rho_-=.3.$$ The initial datum is plotted in figure \[fig:equil\_03503\] (left). As we can see in figure \[fig:equil\_03503\] (right), the solution stabilizes around the stationary state $(.35,.3)$. For our second simulation, we take $(\rho_+,\rho_-)$ in a non-hyperbolic region: $$ \rho_+ = .5 \quad,\quad \rho_-=.3.$$ The solution no longer stabilizes around the stationary state $(.5,.3)$. In figure \[fig:equil\_0503\] (left), we observe the appearance of clusters of high density. Each cluster for $\rho_+$ faces a cluster for $\rho_-$. Moreover, in each cluster, the total mass $\rho_++\rho_-$ is greater than or equal to $1$. Therefore the flux in this region is zero. However, due to the diffusion, the solution is not in a stationary state. There is an exchange of mass between the clusters. If we run the solution for a long time, only one cluster remains (see figure \[fig:equil\_0503\] (right)). In this cluster, we observe that the profile of $\rho_+$ is concave-down whereas the profile of $\rho_-$ is concave-up. 
Consequently, the diffusivity makes $\rho_+$ move backward and $\rho_-$ move forward. As a result, all the clusters are moving to the left. However, the concavity of the solution is puzzling. Numerically, it appears that the concavity of $\rho_+$ and $\rho_-$ depends on the total mass: the density with higher mass is concave-down and the density with lower mass is concave-up. This property remains to be understood analytically. For the third simulation, we take an initial datum $(\rho_+,\rho_-)$ close to the non-hyperbolic region: $$ \rho_+ = .4 \quad,\quad \rho_-=.3.$$ Indeed, we can see in figure \[fig:function\_fg\] (right) that the point $(0.4,0.3)$ almost lies at the border of the non-hyperbolic region. The oscillations amplify and clusters of high densities emerge (figure \[fig:equil\_0403\], left). However, if we increase the diffusion coefficient, taking $\delta=2$ instead of $\delta=.4$, then the solution stabilizes around the stationary state $(.4,.3)$, as we observe in figure \[fig:equil\_0403\], right. Therefore, a large enough diffusion prevents cluster formation. ![The initial condition (left figure) and the solution at $t=500$ unit times. The solution stabilizes around the stationary state $(.35,.3)$.[]{data-label="fig:equil_03503"}](figures/simulations/equil_init_03503.eps "fig:") ![The initial condition (left figure) and the solution at $t=500$ unit times. The solution stabilizes around the stationary state $(.35,.3)$.[]{data-label="fig:equil_03503"}](figures/simulations/equil_03503.eps "fig:") ![Starting from the initial state $(.5,.3)$, the initial oscillations amplify to create clusters (left figure). After a longer time ($t=10^4$ unit times), only one cluster remains (right figure).[]{data-label="fig:equil_0503"}](figures/simulations/equil_0503.eps "fig:") ![Starting from the initial state $(.5,.3)$, the initial oscillations amplify to create clusters (left figure). After a longer time ($t=10^4$ unit times), only one cluster remains (right figure).[]{data-label="fig:equil_0503"}](figures/simulations/equil_0503_t105.eps "fig:") ![Starting from the initial state $(.4,.3)$, clusters appear once again (left figure). However, if we increase the diffusion coefficient ($\delta=2$ instead of $\delta=.4$), the solution stabilizes (right figure).[]{data-label="fig:equil_0403"}](figures/simulations/equil_0403_diff04.eps "fig:") ![Starting from the initial state $(.4,.3)$, clusters appear once again (left figure). However, if we increase the diffusion coefficient ($\delta=2$ instead of $\delta=.4$), the solution stabilizes (right figure).[]{data-label="fig:equil_0403"}](figures/simulations/equil_0403_diff2.eps "fig:") The simulations are provided for illustration purposes only, and explore what kind of structures the lack of hyperbolicity of the model leads to. Experimental evidence of the appearance of clusters is difficult to provide since they must occur (if they occur) at very high densities. Experiments in such high density conditions are not possible for obvious safety reasons. The observation of real crowds shows that pedestrians can still move even at very high densities thanks to the spontaneous organization of the flow into lanes. The simple one-dimensional model that is simulated here cannot account for this feature. However, we conjecture that cluster formation can be impeded in the multi-lane model through the introduction of adequate lane-changing probabilities. 
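The following Python fragment (again ours, for illustration; it is not the code used to produce the figures above) implements a simplified, first-order local Lax–Friedrichs version of the central scheme described above, i.e. without the MUSCL reconstruction of [@leveque2002fvm], with periodic boundary conditions and a noise-perturbed uniform state. The parameters $\delta$, $\Delta x$, $\Delta t$ and $\sigma$ follow the text; the number of cells, the final time and the random seed are arbitrary choices, and the local speed $a_{i+1/2}$ is replaced by a crude upper bound rather than the exact eigenvalues.

```python
# Sketch: first-order central (local Lax-Friedrichs) scheme for the diffusive
# system (eq:diff_+)-(eq:diff_-), periodic BC, noise-perturbed uniform state.
import numpy as np

a, delta, dx, dt, sigma = 0.7, 0.4, 1.0, 0.2, 1e-2
ncell, tfinal = 400, 500.0
rng = np.random.default_rng(0)

def g(x):
    out = np.zeros_like(x)
    m1 = (x >= 0) & (x <= a); m2 = (x > a) & (x <= 1)
    out[m1] = x[m1] - x[m1]**2 / (2 * a)
    out[m2] = a / 2 - a * (a - x[m2])**2 / (2 * (1 - a)**2)
    return out

def f(rp, rm):
    rho = rp + rm
    return np.where(rho > 0, rp * g(rho) / np.maximum(rho, 1e-12), 0.0)

def speed(rp, rm, h=1e-6):
    # crude bound on the spectral radius of the flux Jacobian (finite differences)
    cpp = (f(rp + h, rm) - f(rp - h, rm)) / (2 * h)
    cpm = (f(rp, rm + h) - f(rp, rm - h)) / (2 * h)
    cmm = (f(rm + h, rp) - f(rm - h, rp)) / (2 * h)
    cmp_ = (f(rm, rp + h) - f(rm, rp - h)) / (2 * h)
    return np.abs(cpp) + np.abs(cpm) + np.abs(cmm) + np.abs(cmp_)

def step(rp, rm):
    U = np.stack([rp, rm])                       # 2 x N conserved variables
    F = np.stack([f(rp, rm), -f(rm, rp)])        # system flux F(U)
    Ur, Fr = np.roll(U, -1, axis=1), np.roll(F, -1, axis=1)
    amax = np.maximum(speed(rp, rm), speed(Ur[0], Ur[1]))
    Fhalf = 0.5 * (F + Fr) - 0.5 * amax * (Ur - U)   # numerical flux at i+1/2
    lap = np.roll(U, 1, axis=1) - 2 * U + np.roll(U, -1, axis=1)
    Unew = U - dt / dx * (Fhalf - np.roll(Fhalf, 1, axis=1)) + delta * dt / dx**2 * lap
    return Unew[0], Unew[1]

# uniform state (0.5, 0.3), reported above as non-hyperbolic, plus white noise
rp = 0.5 + sigma * rng.standard_normal(ncell)
rm = 0.3 + sigma * rng.standard_normal(ncell)
for _ in range(int(tfinal / dt)):
    rp, rm = step(rp, rm)
print("rho_+ after t=%g: min %.3f, max %.3f" % (tfinal, rp.min(), rp.max()))
```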
Conclusion {#sec_conclu} ========== In this work, we have presented extensions of the Aw-Rascle macroscopic model of traffic flow to two-way multi-lane pedestrian traffic, with a particular emphasis on the study of the hyperbolicity of the model and the treatment of congestions. A first important contribution of the present work is that two-way models may lose their hyperbolicity in certain conditions and that this may be linked to the generation of large scale structures in crowd flows. Adding diffusion helps stabilize the small scale structures and favors the development of large scale structures which may be related to observations. We have shown numerical simulations which support this interpretation. A second contribution of this work is to provide a methodology to handle the congestion constraint in pedestrian traffic models. Congestion effects reflect the fact that the density cannot exceed a limit density corresponding to contact between pedestrians. We have proposed to treat them by a modification of the pressure relation which reduces the pedestrian velocities when the density reaches this maximal density. If this modification occurs on a very small range of densities, then, the model exhibits abrupt transitions between compressible flow (in the uncongested region) and incompressible flow (in the congested region). Mathematically rigorous proofs that these models respect the upper-bound on the total density are left to future works. Their numerical resolution will require the development specific techniques such as Asymptotic-Preserving methodologies in order to treat the occurrence of congestions. Data learning techniques will then be applied to fit the parameters of the model to experimental data. Other possible extensions of this work are the development of more complex models such as two-dimensional models, kinetic models allowing for a statistical distribution of velocities or crowd turbulence models with weak compressibility near congestion. Finally, the derivation of approximate equations describing the geometric evolution of the transition interface between the uncongested and congested regions would help understanding the dynamics of these interfaces. [99]{} S. Al-nasur & P. Kachroo, [*A Microscopic-to-Macroscopic Crowd Dynamic model*]{}, Proceedings of the IEEE ITSC 2006, 2006 IEEE Intelligent Transportation Systems Conference Toronto, Canada, September 17-20 (2006). A. Aw, A. Klar, A. Materne, M. Rascle, [*Derivation of continuum traffic flow models from microscopic follow-the-leader models*]{}, SIAM J. Appl. Math., [**63**]{} (2002), 259–278. MR1952895 (2003m:35148) A. Aw, M. Rascle, [*Resurrection of second order models of traffic flow*]{}, SIAM J. Appl. Math., [**60**]{} (2000), 916–938. MR1750085 (2001a:35111) N. Bellomo, C. Dogbe, [*On the modelling crowd dynamics: from scaling to second order hyperbolic macroscopic models*]{}, Math. Models Methods Appl. Sci., [**18**]{} (2008), 1317–1345. MR2438218 (2009g:92074) S. Benzoni-Gavage, R. M. Colombo, [*An $n$-populations model for traffic flow*]{}, European J. Appl. Math., [**14**]{} (2003), 587–612. MR2020123 (2004k:90018) F. Berthelin, P. Degond, M. Delitala, M. Rascle, [*A model for the formation and evolution of traffic jams*]{}, Arch. Rat. Mech. Anal., [**187**]{} (2008), 185–220. MR2366138 (2008h:90022) F. Berthelin, P. Degond, V. Le Blanc, S. Moutari, J. Royer, M. Rascle, [*A Traffic-Flow Model with Constraints for the Modeling of Traffic Jams*]{}, Math. Models Methods Appl. 
Sci., [**18, Suppl.**]{} (2008), 1269-1298. MR2438216 (2010f:35233) F. Bouchut, Y. Brenier, J. Cortes, J. F. Ripoll, [*A hierachy of models for two-phase flows*]{}, J. Nonlinear Sci., [**10**]{} (2000), 639–660. MR1799394 (2001j:76109) C. Burstedde, K. Klauck, A. Schadschneider, J. Zittarz, [*Simulation of pedestrian dynamics using a 2-dimensional cellular automaton*]{}, Physica A, [**295**]{} (2001), 507–525. arXiv:cond-mat/0102397 C. Chalons, [*Numerical approximation of a macroscopic model of pedestrian flows*]{}, SIAM J. Sci. Comput., [**29**]{} (2007), 539–555. MR2306257 (2008a:35188) R. M. Colombo, M. D. Rosini, [*Pedestrian flows and nonclassical shocks*]{}, Math. Methods Appl. Sci., [**28**]{} (2005), 1553–1567. MR2158218 (2006b:90009) C. Daganzo, [*Requiem for second order fluid approximations of traffic flow*]{}, Transp. Res. B, [**29**]{} (1995), 277–286. P. Degond, M. Delitala, [*Modelling and simulation of vehicular traffic jam formation*]{}, Kinet. Relat. Models, [**1**]{} (2008), 279–293. MR2393278 (2009a:90022) P. Degond, J. Hua, L. Navoret, [*Numerical simulations of the Euler system with congestion constraint*]{}, preprint. arXiv:1008.4045 P. Degond, M. Tang, [*All speed scheme for the low Mach number limit of the Isentropic Euler equations*]{}, Commun. Comput. Phys., [**10**]{} (2011), 1-31. arXiv:0908.1929 R. Y. Guo, H. J. Huang, [*A mobile lattice gas model for simulating pedestrian evacuation*]{}, Physica A, [**387**]{} (2008), 580–586. S. J. Guy, J. Chhugani, C. Kim, N. Satish, M. C. Lin, D. Manocha, P. Dubey, [*Clearpath: Highly parallel collision avoidance for multi-agent simulation*]{}, in ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), pp. 177–187 (2009). D. Helbing, [*A mathematical model for the behavior of pedestrians*]{}, Behavioral Science, [**36**]{} (1991), 298–310. D. Helbing, [*A fluid dynamic model for the movement of pedestrians*]{}, Complex Systems, [**6**]{} (1992), 391–415. D. Helbing, P. Molnàr, [*Social force model for pedestrian dynamics*]{}, Physical Review E, [**51**]{} (1995), 4282–4286. D. Helbing, P. Molnàr, [*Self-organization phenomena in pedestrian crowds*]{} in: F. Schweitzer (ed.) Self-Organization of Complex Structures: From Individual to Collective Dynamics, pp. 569–577, Gordon and Breach, London (1997). L. F. Henderson, [*On the fluid mechanics of human crowd motion*]{}, Transportation Research, [**8**]{} (1974), 509–515. S. Hoogendoorn, P. H. L. Bovy, [*Simulation of pedestrian flows by optimal control and differential games*]{}, Optimal Control Appl. Methods, [**24**]{} (2003), 153–172. MR1988582 (2004d:93088) R. L. Hughes, [*A continuum theory for the flow of pedestrians*]{}, Transportation Research B, [**36**]{} (2002), 507–535. R. L. Hughes, [*The flow of human crowds*]{}, Ann. Rev. Fluid Mech., [**35**]{} (2003), 169–182. A. Kurganov, E. Tadmor, [*New high-resolution central schemes for nonlinear conservation laws and convection-diffusion equations*]{}, J. Comput. Phys., [**160**]{} (2000), 240–282. MR1763829 (2001c:65102) R. J. LeVeque, “Finite volume methods for hyperbolic problems”, Cambridge University Press (2002). M. J. Lighthill, J. B. Whitham, [*On kinematic waves. I: flow movement in long rivers. II: A theory of traffic flow on long crowded roads*]{}, Proc. Roy. Soc., [**A229**]{} (1955), 281–345. MR0072605 (17,309e) and MR0072606 (17,310a). B. Maury, A. Roudneff-Chupin, F. Santambrogio, [*A macroscopic crowd motion model of gradient flow type*]{}, Math. Models Methods Appl. 
Sci., [**20**]{} (2010), 1787–1821. MR2735914 B. Maury, J. Venel, [*A mathematical framework for a crowd motion model*]{}, C. R. Acad. Sci. Paris, Ser. I, [**346**]{} (2008), 1245–1250. MR2473301 (2009m:91150) K. Nishinari, A. Kirchner, A. Namazi, A. Schadschneider, [*Extended floor field CA model for evacuation dynamics*]{}, IEICE Transp. Inf. & Syst., [**E87-D**]{} (2004), 726–732. arXiv:cond-mat/0306262 J. Ond[ř]{}ej, J. Pettré, A-H. Olivier, S. Donikian, [*A synthetic-vision based steering approach for crowd simulation*]{}, in SIGGRAPH ’10 (2010). S. Paris, J. Pettré, S. Donikian, [*Pedestrian reactive navigation for crowd simulation: a predictive approach*]{}, Eurographics, [**26**]{} (2007), 665–674. Pedigree team, [*Pedestrian flow measurements and analysis in an annular setup*]{}, in preparation. J. Pettré, J. Ond[ř]{}ej, A-H. Olivier, A. Cretual, S. Donikian, [*Experiment-based modeling, simulation and validation of interactions between virtual walkers*]{}, in SCA ’09: Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp.189-198 (2009). B. Piccoli, A. Tosin, [*Pedestrian flows in bounded domains with obstacles*]{}, Contin. Mech. Thermodyn., [**21**]{} (2009), 85-107. arXiv:0812.4390 B. Piccoli, A. Tosin, [*Time-evolving measures and macroscopic modeling of pedestrian flow*]{}, Arch. Ration. Mech. Anal., [**199**]{} (2010), 707–738. arXiv:0811.3383 C. W. Reynolds, [*Steering behaviors for autonomous characters*]{}, in Proceedings of Game Developers Conference 1999, San Jose, California, pp. 763-782 (1999). V. Shvetsov, D. Helbing, [*Macroscopic dynamics of multi-lane traffic*]{}, Phys. Rev. E [**59**]{} (1999), 6328-6339. arXiv:cond-mat/9906430 J. van den Berg, H. Overmars, [*Planning time-minimal safe paths amidst unpredictably moving obstacles*]{}, Int. Journal on Robotics Research [**27**]{} (2008), 1274–1294. W. G. Weng, S. F. Shena, H. Y. Yuana, W. C. Fana, [*A behavior-based model for pedestrian counter flow*]{}, Physica A [**375**]{} (2007), 668–678. M. Zhang, [*A non-equilibrium traffic model devoid of gas-like behavior*]{}, Transportation Res. B [**36**]{} (2002), 275–290.
--- abstract: 'Let $I_A$ be a toric ideal. We prove that the degrees of the elements of the Graver basis of $I_A$ are not polynomially bounded by the true degrees of the circuits of $I_A$.' address: - 'Mitilini, P.O. Box 13, Mitilini (Lesvos) 81100, Greece' - 'Department of Mathematics, University of Ioannina, Ioannina 45110, Greece ' author: - Christos Tatakis - Apostolos Thoma title: Graver degrees are not polynomially bounded by true circuit degrees --- \[section\] \[thm1\][Lemma]{} \[thm1\][Remark]{} \[thm1\][Definition]{} \[thm1\][Corollary]{} \[thm1\][Definition]{} \[thm1\][Proposition]{} \[thm1\][Example]{} \[thm1\][Algorithm]{} [^1] Introduction ============ Let $A=\{\textbf{a}_1,\ldots,\textbf{a}_m\}\subseteq \mathbb{N}^n$ be a vector configuration in $\mathbb{Q}^n$ and $\mathbb{N}A:=\{l_1\textbf{a}_1+\cdots+l_m\textbf{a}_m \ | \ l_i \in \mathbb{N}\}$ the corresponding affine semigroup. We grade the polynomial ring $\mathbb{K}[x_1,\ldots,x_m]$ over an arbitrary field $\mathbb{K}$ by the semigroup $\mathbb{N}A$ setting $\deg_{A}(x_i)=\textbf{a}_i$ for $i=1,\ldots,m$. For $\textbf{u}=(u_1,\ldots,u_m) \in \mathbb{N}^m$, we define the $A$-*degree* of the monomial $\textbf{x}^{\textbf{u}}:=x_1^{u_1} \cdots x_m^{u_m}$ to be $$u_1\textbf{a}_1+\cdots+u_m\textbf{a}_m \in \mathbb{N}A.$$ We denoted by $\deg_{A}(\textbf{x}^{\textbf{u}})$, while the usual degree $u_1+\cdots +u_m$ of $\textbf{x}^{\textbf{u}}$ we denoted by $\deg(\textbf{x}^{\textbf{u}})$. The *toric ideal* $I_{A}$ associated to $A$ is the prime ideal generated by all the binomials $\textbf{x}^{\textbf{u}}- \textbf{x}^{\textbf{v}}$ such that $\deg_{A}(\textbf{x}^{\textbf{u}})=\deg_{A}(\textbf{x}^{\textbf{v}})$, see [@St]. For such binomials, we set $\deg_A(\textbf{x}^{\textbf{u}}- \textbf{x}^{\textbf{v}}):=\deg_{A}(\textbf{x}^{\textbf{u}})$. A nonzero binomial $\textbf{x}^{\textbf{u}}- \textbf{x}^{\textbf{v}}$ in $I_A$ is called *primitive* if there exists no other binomial $\textbf{x}^{\textbf{w}}- \textbf{x}^{\textbf{z}}$ in $I_A$ such that $\textbf{x}^{\textbf{w}}$ divides $ \textbf{x}^{\textbf{u}}$ and $\textbf{x}^{\textbf{z}}$ divides $ \textbf{x}^{\textbf{v}}$. The set of the primitive binomials forms the Graver basis of $I_A$ and is denoted by $Gr_A$. An irreducible binomial is called a *circuit* if it has minimal support. The set of the circuits is denoted by $\mathcal{ C}_A$ and it is a subset of the Graver basis, see [@St]. One of the fundamental problems in toric algebra is to give good upper bounds on the degrees of the elements of the Graver basis, see [@H; @St; @St1]. It was conjectured that the degree of any element in the Graver basis $Gr_A$ of a toric ideal $I_A$ is bounded above by the maximal true degree of any circuit in $\mathcal{ C}_A$, [@St1 Conjecture 4.8], [@H Conjecture 2.2.10]. Following [@St1] we define the true degree of a circuit as follows: Consider any circuit $C\in \mathcal{ C}_A$ and regard its support supp($C$) as a subset of $A$. The lattice $\mathbb{Z}$(supp($C$)) has finite index in the lattice $\mathbb{R}$(supp($C$))$\cap \mathbb{Z}A$, which is called the index of the circuit $C$ and denoted by index($C$). The *true degree* of the circuit $C$ is the product $\deg(C)\cdot $index($C$). The crucial role of the true circuit degrees was first highlighted in Hosten’s dissertation [@H]. Let us call $t_A$ the maximal true degree of any circuit in $\mathcal { C}_A$. The true circuit conjecture says that $$\deg(B)\leq t_A,$$ for every $B\in Gr_A$. 
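As a small illustration of these notions (a standard example, not one of the configurations constructed in this article), consider $A=\{(3,0),(2,1),(1,2),(0,3)\}\subset \mathbb{N}^2$, the vector configuration of the twisted cubic. The binomial $x_1x_3-x_2^2$ is a circuit, since $\deg_{A}(x_1x_3)=\deg_{A}(x_2^2)=(4,2)$ and no binomial of $I_A$ has support properly contained in $\{x_1,x_2,x_3\}$. The binomial $x_1x_4-x_2x_3$ is primitive, hence belongs to $Gr_A$, but it is not a circuit, because its support properly contains the support of $x_1x_3-x_2^2$. The circuits of this configuration are $x_1x_3-x_2^2$, $x_2x_4-x_3^2$, $x_1^2x_4-x_2^3$ and $x_1x_4^2-x_3^3$, all of index one, so $t_A=3$; the degree-$2$ primitive binomial above comfortably respects the conjectured bound, and the question is how badly this bound can fail in general.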
There are several examples of families of toric ideals for which the true circuit conjecture is true, see for example [@Pe]. The true circuit conjecture is also true for some families of toric ideals of graphs, see [@TT Section 4]. However, the true circuit conjecture is not true in general. In [@TT] we gave an infinite family of counterexamples to the true circuit conjecture by providing toric ideals and elements of their Graver bases whose degrees are not bounded above by $t_A$. We note that in the counterexamples of [@TT] the degrees of the elements of the Graver basis were bounded by $t_A^2$. In this article we consider the following question: **Question:** [*Is the degree of any element in the Graver basis $Gr_A$ of a toric ideal $I_A$ bounded above by a constant times $(t_A)^2$, or a constant times $(t_A)^{2014}$?*]{} To disprove such a statement, one needs to compute the Graver basis and the set of circuits of toric ideals $I_A$ in polynomial rings with a huge number of variables, in order to produce examples of toric ideals for which there exist elements of very high degree in the Graver basis while, at the same time, the true degrees of the circuits remain relatively low. This procedure is computationally demanding, if not impossible. An alternative approach is given by the class of toric ideals of graphs, where we explicitly know the form of the elements of their Graver basis, see [@RTT], and of their circuits, see [@Vi]. The main result of the article is Theorem \[main\], which says that [*there is no polynomial in $t_A$ that bounds the degree of any element in the Graver basis $Gr_A$ of a toric ideal $I_A$*]{}. To prove the theorem we construct a family of examples of graphs $G_r^n$. For the toric ideals of these graphs and for a fixed $n$ we prove that there are elements in the Graver basis whose degrees are exponential in $r$, see Proposition \[degprim\], while the true degrees of their circuits are linear in $r$, see Theorem \[truedeg\] and Proposition \[degcircuit\]. Toric ideals of graphs {#section 2} ====================== Let $G$ be a finite simple connected graph with vertices $V(G)=\{v_{1},\ldots,v_{n}\}$ and edges $E(G)=\{e_{1},\ldots,e_{m}\}$. Let $\mathbb{K}[e_{1},\ldots,e_{m}]$ be the polynomial ring in the $m$ variables $e_{1},\ldots,e_{m}$ over a field $\mathbb{K}$. We will associate each edge $e=\{v_{i},v_{j}\}\in E(G)$ with the element $a_{e}=v_{i}+v_{j}$ in the free abelian group $ \mathbb{Z}^n $ with basis the set of vertices of $G$. Each vertex $v_j\in V(G)$ is associated with the vector $(0,\ldots,0,1,0,\ldots,0)$, where the nonzero component is in the $j$-th position. We denote by $I_G$ the toric ideal $I_{A_{G}}$ in $\mathbb{K}[e_{1},\ldots,e_{m}]$, where $A_{G}=\{a_{e}\ | \ e\in E(G)\}\subset \mathbb{Z}^n $. A *walk* connecting $v_{i_{1}}\in V(G)$ and $v_{i_{s+1}}\in V(G)$ is a finite sequence of the form $$w=(\{v_{i_1},v_{i_2}\},\{v_{i_2},v_{i_3}\},\ldots,\{v_{i_s},v_{i_{s+1}}\})$$ with each $e_{i_j}=\{v_{i_j},v_{i_{j+1}}\}\in E(G)$. A trail is a walk in which all edges are distinct. The *length* of the walk $w$ is the number $s$ of its edges. An even (respectively odd) walk is a walk of *even* (respectively odd) length. A walk $w=(\{v_{i_1},v_{i_2}\},\{v_{i_2},v_{i_3}\},\ldots,\{v_{i_s},v_{i_{s+1}}\})$ is called *closed* if $v_{i_{s+1}}=v_{i_1}$. 
A *cycle* is a closed walk $$(\{v_{i_1},v_{i_2}\},\{v_{i_2},v_{i_3}\},\ldots,\{v_{i_s},v_{i_{1}}\})$$ with $v_{i_k}\neq v_{i_j},$ for every $ 1\leq k < j \leq s$. Given an even closed walk $w$ of the graph $G$; where $$w =(e_{i_1}, e_{i_2},\dots, e_{i_{2q}}),$$ we define $$E^+(w)=\prod _{k=1}^{q} e_{i_{2k-1}},\ E^-(w)=\prod _{k=1}^{q} e_{i_{2k}}$$ and we denote by $B_w$ the binomial $$B_w=\prod _{k=1}^{q} e_{i_{2k-1}}-\prod _{k=1}^{q} e_{i_{2k}}.$$ It is easy to see that $B_w\in I_G$. Moreover, it is known that the toric ideal $I_G$ is generated by binomials of this form, see [@Vi]. Note that the binomials $B_w$ are homogeneous and the degree of $B_w$ is $q$, the half of the number of edges of the walk. For convenience, we denote by $\textbf{w}$ the subgraph of $G$ with vertices the vertices of the walk and edges the edges of the walk $w$. We call a walk $w'=(e_{j_{1}},\dots,e_{j_{t}})$ a *subwalk* of $w$ if $e_{j_1}\cdots e_{j_t}| e_{i_1}\cdots e_{i_{2q}}.$ An even closed walk $w$ is said to be primitive if there exists no even closed subwalk $\xi$ of $w$ of smaller length such that $E^+(\xi)| E^+(w)$ and $E^-(\xi)| E^-(w)$. The walk $w$ is primitive if and only if the binomial $B_w$ is primitive. A *cut edge* (respectively *cut vertex*) is an edge (respectively vertex) of the graph whose removal increases the number of connected components of the remaining subgraph. A graph is called *biconnected* if it is connected and does not contain a cut vertex. A *block* is a maximal biconnected subgraph of a given graph $G$. The following theorems determine the form of the circuits and the primitive binomials of a toric ideal of a graph $G$. R. Villarreal in [@Vi Proposition 4.2] gave a necessary and sufficient characterization of circuits: \[circuit\] Let $G$ be a graph and let $W$ be a connected subgraph of $G$. The subgraph $W$ is the graph ${\bf w}$ of a walk $w$ such that $B_w$ is a circuit if and only if 1. $W$ is an even cycle or 2. $W$ consists of two odd cycles intersecting in exactly one vertex or 3. $W$ consists of two vertex-disjoint odd cycles joined by a path. Primitive walks were first studied by T. Hibi and H. Ohsugi, see [@OH]. The next Theorem by E. Reyes, Ch. Tatakis and A. Thoma [@RTT] describes the form of the underlying graph of a primitive walk. \[primitive-graph\] Let $G$ be a graph and let $W$ be a connected subgraph of $G$. The subgraph $W$ is the graph ${\bf w}$ of a primitive walk $w$ if and only if 1. $W$ is an even cycle or 2. $W$ is not biconnected and 1. every block of $W$ is a cycle or a cut edge and 2. every cut vertex of $W$ belongs to exactly two blocks and separates the graph in two parts, the total number of edges of the cyclic blocks in each part is odd. Observe that if $W'$ is the graph taken from $W$ by replacing every cut edge with two edges, then $W'$ is an Eulerian graph since it is connected, every cut vertex has degree four and the others have degree two. An Eulerian trail is a trail in a graph which visits every edge of the graph exactly once. Any closed Eulerian trail $w'$ of $W'$ gives rise to an even closed walk $w$ of $W$ for which every single edge of the graph $W'$ is a single edge of the walk $w$ and every multiple edge of the graph $W'$ is a double edge of the walk $w$ and a cut edge of $W={\bf w}$. Different closed Eulerian trails may give different walks, but all the corresponding binomials $B_w$ are equal or opposite. 
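The correspondence between walks and binomials can be checked on a small example. The following Python fragment (ours, not part of the paper) takes the even closed walk traversing two triangles joined by a single cut edge — the circuit of type (3) in Theorem \[circuit\] with a path of length one — and verifies that $E^+(w)$ and $E^-(w)$ have the same $A_G$-degree, so that $B_w\in I_G$; note that the cut edge appears squared in $E^-(w)$, as described above.

```python
# Sanity check: two triangles (v1 v2 v3) and (u1 u2 u3) joined by the cut edge
# {v1, u1}; the associated even closed walk gives B_w = e1 e3 f1 f3 - e2 f2 xi^2.
import numpy as np

vertices = ["v1", "v2", "v3", "u1", "u2", "u3"]
idx = {v: i for i, v in enumerate(vertices)}

def a(edge):
    """A_G-vector of an edge: the sum of its two vertex basis vectors."""
    vec = np.zeros(len(vertices), dtype=int)
    for v in edge:
        vec[idx[v]] += 1
    return vec

e1, e2, e3 = ("v1", "v2"), ("v2", "v3"), ("v3", "v1")
xi = ("v1", "u1")                                   # the cut edge
f1, f2, f3 = ("u1", "u2"), ("u2", "u3"), ("u3", "u1")
walk = [e1, e2, e3, xi, f1, f2, f3, xi]             # even closed walk of length 8

E_plus = walk[0::2]                                 # odd-position edges of the walk
E_minus = walk[1::2]                                # even-position edges of the walk
deg_plus = sum(a(e) for e in E_plus)
deg_minus = sum(a(e) for e in E_minus)

print("E^+ =", E_plus)                              # e1, e3, f1, f3
print("E^- =", E_minus)                             # e2, xi, f2, xi  (cut edge squared)
print("equal A_G-degrees:", np.array_equal(deg_plus, deg_minus))
print("deg(B_w) =", len(walk) // 2)
```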
On the True circuit degree of toric ideals of graphs {#Section 3} ==================================================== In the next Theorem we prove that the index of any circuit $C$ in the toric ideal of a graph $G$ is equal to 1 and therefore the true degree of a circuit $C$ is equal to its degree. \[truedeg\] Let $G$ be a graph and let $C$ be a circuit in $\mathcal{C}_{A_G}$. Then $$true \deg(C)=\deg(C).$$ **Proof.** By definition $true \deg\ (C)=\deg(C)\cdot$ index($C$). We will prove that the index($C$) is equal to one for every circuit $C$ in a toric ideal of a graph $I_G$. It is enough to prove that $\mathbb{Z}$(supp($C$))=$\mathbb{R}$(supp($C$))$\cap \mathbb{Z}A_G$. Obviously $\mathbb{Z}$(supp($C$))$\subseteq \mathbb{R}$(supp($C$))$\cap \mathbb{Z}A_G$. For the converse consider a circuit $C$ in $\mathcal{C}_{A_G}$. By Theorem \[circuit\] there are two cases. First case: $C=B_w$ where ${\bf w}$ is an even cycle and let it be $$C=(e_1=\{v_{2k},v_1\},e_2=\{v_1,v_2\},\ldots,e_{2k}=\{v_{2k-1},v_{2k}\}).$$ Therefore supp($C$)$=\{ a_{e_1}, a_{e_2}, a_{e_3}, \ldots, a_{e_{2k}}\}$. Since $C$ is a cycle we know that $$a_{e_1}-a_{e_2}+a_{e_3}-\ldots-a_{e_{2k}}=0.$$ Let ${\bf x}\in \mathbb{R}$(supp($C$))$\cap \mathbb{Z}A_G$, where $A_G=\{a_e|e\in E(G)\}$. Therefore ${\bf x}=r_1a_{e_1}+\ldots+r_{2k}a_{e_{2k}}$, where $r_1,\ldots,r_{2k}\in\mathbb{R}$, and also ${\bf x}\in \mathbb{Z}A_G\subset \mathbb{Z}^n$. By ${\bf x}_{v}$ we denote the $v$ coordinate of ${\bf x}$ in $\mathbb{Z}^n$ with the canonical basis denoted by the vertices of $G$. Then ${\bf x}_{v_1}=r_1+r_2\in \mathbb{Z}$, ${\bf x}_{v_2}=r_2+r_3\in \mathbb{Z}, \ldots, {\bf x}_{v_{2k}}=r_{2k}+r_1\in \mathbb{Z}$. It follows that $$r_{2l}\equiv-r_1\mod\mathbb{Z},\ \ r_{2l-1}\equiv r_1\mod\mathbb{Z},$$ for $1\leq l\leq k$. Therefore there exist integers $z_1=0, z_2,\dots ,z_{2k}$ such that $r_{2l}=z_{2l}-r_1$ and $r_{2l-1}=z_{2l-1}+r_1$. Then ${\bf x}=r_1a_{e_1}+\ldots+r_{2k}a_{e_{2k}}= r_1a_{e_1}+(z_2a_{e_2}-r_1a_{e_2})+(z_3a_{e_3}+r_1a_{e_3})+\ldots+(z_{2k}a_{e_{2k}}-r_1a_{e_{2k}})= z_2a_{e_2}+\ldots+z_{2k}a_{e_{2k}}\in \mathbb{Z}$(supp($C$)). Second case: $C=B_w$ where ${\bf w}$ consists of two vertex disjoint odd cycles joined by a path or two odd cycles intersecting in exactly one vertex, see Theorem \[circuit\]. Let $(e_1=\{v_{1},v_2\},e_2=\{v_2,v_3\},\ldots,e_{2l+1}=\{v_{2l+1},v_{1}\})$ be the one odd cycle, let $(\xi_1=\{v_1, w_1\}, \xi_2=\{w_1, w_2\}, \ldots, \xi_{t}=\{w_{t-1}, u_1\} )$ be the path of length $t$ and $(\varepsilon_1=\{u_{1},u_2\},\varepsilon_2=\{u_2,u_3\},\ldots,\varepsilon_{2s+1}=\{u_{2s+1},u_{1}\})$ the second odd cycle. In the case that the length $t$ of the path is zero, $v_1=u_1$. Therefore supp($C$)$=\{ a_{e_1}, a_{e_2}, \ldots, a_{e_{2l+1}}, a_{\xi_1}, \ldots, a_{\xi_t}, a_{\varepsilon_1}, a_{\varepsilon_2}, \ldots, a_{\varepsilon_{2s+1}} \}$. Since $C$ is a circuit we have that $$a_{e_1}-a_{e_2} \ldots +a_{e_{2l+1}}-2a_{\xi_1}+ \ldots +2(-1)^ta_{\xi_t}+(-1)^{t+1} (a_{\varepsilon_1}-a_{\varepsilon_2}+ \ldots + a_{\varepsilon_{2s+1}})=0.$$ Let ${\bf x}\in \mathbb{R}$(supp($C$))$\cap \mathbb{Z}A_G$ then ${\bf x}=r_1a_{e_1}+\ldots+r_{2l+1}a_{e_{2l+1}}+q_1a_{\xi_1}+ \ldots +q_ta_{\xi_t}+ \varrho_1a_{\varepsilon_1}+ \varrho_2a_{\varepsilon_2}+ \ldots+ \varrho_{2s+1}a_{\varepsilon_{2s+1}} $, where $r_1,\ldots,r_{2l+1}, q_1,\ldots, q_t, \varrho_1,\ldots, \varrho_{2s+1}\in\mathbb{R}$, and also ${\bf x}\in \mathbb{Z}A_G\subset \mathbb{Z}^n$. 
By looking at the coordinates of ${\bf x}$ it follows that $$r_{2i}\equiv-r_1\mod\mathbb{Z},\ \ r_{2i+1}\equiv r_1\mod\mathbb{Z},$$ $$q_{m}\equiv(-1)^m2r_1\mod\mathbb{Z},$$ $$\varrho_{2j}\equiv(-1)^tr_1\mod\mathbb{Z},\ \ \varrho_{2j+1}\equiv (-1)^{t+1}r_1\mod\mathbb{Z},$$ for $1\leq i\leq l$, $1\leq m\leq t$ and $1\leq j\leq s$. Therefore there exist integers $x_2,\dots , x_{2l+1}, z_1, \dots , z_t, w_1, \dots ,w_{2s+1}$ such that $r_j=x_j+(-1)^{j+1}r_1$, $q_j=z_j+2(-1)^{t+j}r_1$ and $\varrho_j=w_j+ (-1)^{t+j}r_1.$ Then ${\bf x}=r_1a_{e_1}+\ldots+r_{2l+1}a_{e_{2l+1}}+q_1a_{\xi_1}+ \ldots +q_ta_{\xi_t}+ \varrho_1a_{\varepsilon_1}+ \varrho_2a_{\varepsilon_2}+ \ldots+ \varrho_{2s+1}a_{\varepsilon_{2s+1}} =x_2a_{e_2}+\ldots+x_{2l+1}a_{e_{2l+1}}+z_1a_{\xi_1}+ \ldots +z_ta_{\xi_t}+ w_1a_{\varepsilon_1}+ w_2a_{\varepsilon_2}+ \ldots+ w_{2s+1}a_{\varepsilon_{2s+1}} \in \mathbb{Z}$(supp($C$)).\ Therefore in all cases $\mathbb{R}$(supp($C$))$\cap \mathbb{Z}A_G\subset \mathbb{Z}$(supp($C$)) and thus index$(C)=1$ for all circuits $C$ in $I_{A_G}$. $\square$ Bounds of Graver and True Circuit degrees ========================================= The aim of this section is to provide examples of toric ideals such that there are elements in their Graver bases that have very high degree while the true degrees of their circuits remain relatively low. We will do this for toric ideals of certain graphs, since the full power of Theorem \[truedeg\] will come to use, and true degrees are equal to usual degrees.\ Let $G_1$, $G_2$ be two vertex disjoint graphs, on the vertices sets $V(G_1)=\{v_1,\ldots,v_s\}$, $V(G_2)=\{u_1,\ldots,u_k\}$ and on the edges sets $E(G_1), E(G_2)$ correspondingly. We define the [*sum of the graphs*]{} $G_1,G_2$ on the vertices $v_i,u_j$ as a new graph $G$ formed from their union by identifying the pair of vertices $v_i,u_j$ to form a single vertex $u$. The new vertex $u$ is a cut vertex in the new graph $G$ if both $G_1$, $G_2$ are not trivial. We say that we [*add*]{} to a vertex $v$ of a graph $G_1$ a cycle $S$, to get a graph $G$ if $G$ is the sum of $G_1, S$ on the vertices $v\in V(G_1)$ and any vertex $u\in S$. Let $n$ be an odd integer greater than or equal to three. Let $G_0^n$ be a cycle of length $n$. For $r\ge 0$ we define the graph $G_{r}^n$ inductively on $r$. $G_{r}^n$ is the graph taken from $G_{r-1}^n$ by adding to each vertex of degree two of the graph $G_{r-1}^n$ a cycle of length $n$. Figure \[Figure 1\] shows the graph $G_3^3$. ![The graph $G_3^3$[]{data-label="Figure 1"}](G4.eps) We consider the graphs $G_0^n$ up to $G_{r-1}^n$ as subgraphs of $G_r^n$. We note that the graph $G_r^n$ is Eulerian since by construction it is connected and every vertex has even degree, four if it is also a vertex of $G_{r-1}^n$ and two if it is not. In the next Proposition we prove that the binomial $B_{w_r^n}$ belongs to the Graver basis of $I_{G_r^n}$ and compute its degree. \[degprim\] Let $w_r^n$ be any closed Eulerian trail of the graph $G_r^n$. The binomial $B_{w_r^n}$ is an element of the Graver basis of $I_{G_r^n}$ and $$\deg(B_{w_r^n})=\frac{1}{2}(n+n^2(\frac{(n-1)^r-1}{n-2})).$$ **Proof.** We will prove the theorem by induction. We claim that the binomial $B_{w_s^n}$ belongs to the Graver basis of $I_{G_r^n}$, has degree $\frac{n+n^2(\frac{(n-1)^s-1}{n-2})}{2}$ and the graph $G_{s}^n={\bf w}_s^n$ has $n(n-1)^s$ vertices of degree 2, for $1\leq s\leq r$. For $s=1$ we consider the subgraph $G_1^n={\bf w}_1^n$ of $G_r^n$. 
The graph is not biconnected, every block of the graph is a cycle and there are no cut edges. Also every cut vertex of $G_1^n$ belongs to exactly two blocks and separates the graph in two parts. One of them is a cycle of length $n$ and the other consists of $n$ cyclic blocks of $n^2$ total number of edges. Thus the total number of edges of the cyclic blocks in each of the two parts is odd. Theorem \[primitive-graph\] implies that $B_{w_1^n}$ is primitive. The total number of edges of $G_1^n$ is $n^2+n$, therefore the degree of the binomial $B_{w_1^n}$ is $\frac{n+n^2}{2}$ and the graph $G_{1}^n={\bf w}_1^n$ has $n(n-1)$ vertices of degree 2.\ Suppose that $B_{w_s^n}$ is primitive, $\deg(B_{w_s^n})=\frac{n+n^2(\frac{(n-1)^s-1}{n-2})}{2}$ and the graph $G_{s}^n={\bf w}_s^n$ has $n(n-1)^s$ vertices of degree 2. By the construction of the graph $G_{s+1}^n$, in every vertex of degree two of the graph $G_s^n$ we add an odd cycle of length $n$. Since there are $n(n-1)^s$ vertices of degree two in $G_{s}^n$, the graph $G_{s+1}^n$ has $ n(n-1)^s$ new cycles, $n(n-1)^{s+1}$ vertices of degree 2 and $n\cdot n(n-1)^s$ new edges. Therefore the binomial $B_{w_{s+1}^n}$ has degree $$\begin{aligned} \deg(B_{w_{s+1}^n}) &=& \frac{n+n^2(\frac{(n-1)^s-1}{n-2})}{2}+\frac{n^2(n-1)^s}{2}\nonumber\\ &=& \frac{n+n^2(\frac{(n-1)^{s+1}-1}{n-2})}{2}.\nonumber\end{aligned}$$ The graph $G_{s+1}^n={\bf w}_{s+1}^n$ is not biconnected and every block of the graph is a cycle, since the graph $G_{s+1}^n$ is constructed by adding cycles on the vertices of degree two of the graph $G_{s}^n$. Let $v$ be a cut vertex of the graph $G_{s+1}^n$. The vertex $v$ is also a vertex of the subgraph $G_s^n$. There are two cases. Either the vertex $v$ is a cut vertex of the subgraph $G_s^n$ or it has degree two in $G_s^n$.\ First case, the vertex $v$ is a cut vertex in the graph $G_s^n$. By the hypothesis $B_{w_s^n}$ is primitive, therefore the vertex $v$ separates the graph $G_s^n={\bf w}_{s}^n$ in two parts. The total number of edges of the cyclic blocks in each of the two parts is odd by Theorem \[primitive-graph\]. The graph $G_{s+1}^n$ is taken from the graph $G_s^n$ by adding in every vertex of degree two of $G_s^n$ a cycle of length $n$. Thus in each cycle of the graph $G_s^n$ that has $n-1$ vertices of degree two we add $(n-1)n$ new edges, i.e. even number of edges and therefore the vertex $v$ separates also the graph $G_{s+1}^n$ in two parts, the total number of edges of the cyclic blocks in each part is odd.\ In the second case, the vertex $v$ has degree two in the graph $G_s^n$. The vertex $v$ separates the graph $G_{s+1}^n$ in two parts. One of them is a cycle of length $n$ and the other one has $2\deg(B_{w_{s+1}^n})-n$ edges. Thus the total number of edges of the cyclic blocks in each part is odd.\ From Theorem \[primitive-graph\] we conclude that the binomial $B_{w_{s+1}^n}$ is primitive. $\square$ Let $B(G_r^n)$ be the [*block tree*]{} of $G_r^n$, the bipartite graph with bipartition $(\mathbb{B},\mathbb{S})$ where $\mathbb{B}$ is the set of blocks of $G_r^n$ and $\mathbb{S}$ is the set of cut vertices of $G_r^n$, $\{{\mathcal B}, v\}$ is an edge if and only if $v\in {\mathcal B}$. The leaves of the block tree are the vertices of the block tree which have degree one. Let ${\mathcal B}_k, {\mathcal B}_i, {\mathcal B}_l$ be blocks of a graph $G_r^n$. 
We call the block ${\mathcal B}_i$ an *internal block* of ${\mathcal B}_k,{\mathcal B}_l$, if ${\mathcal B}_i$ is an internal vertex in the unique path defined by ${\mathcal B}_k,{\mathcal B}_l$ in the block tree $B(G_r^n)$. Every path of the graph $G_r^n$ from the block ${\mathcal B}_k$ to the block ${\mathcal B}_l$ passes through every internal block of ${\mathcal B}_k,{\mathcal B}_l$. Such a path contains at least the cut vertices which appear in the path $({\mathcal B}_k,\ldots,{\mathcal B}_l)$ in $B(G_r^n)$, and it has from one to at most $n-1$ common edges with the cycle that forms each internal block. We denote by $\d({\mathcal B}_1,{\mathcal B}_2)$ the block distance between two vertices ${\mathcal B}_1, {\mathcal B}_2\in \mathbb{B}$ of the block tree $B(G_r^n)$, which we define as the number of the internal vertices belonging to $\mathbb{B}$ in the unique path defined by the blocks ${\mathcal B}_1,{\mathcal B}_2$ in the block tree $B(G_r^n)$. The next lemma will be used to prove Proposition \[degcircuit\]. \[leaves\] Let ${\mathcal B}_1,{\mathcal B}_2$ be two blocks of the graph $G_r^n$. Then $$\d({\mathcal B}_1,{\mathcal B}_2)\leq 2r-1.$$ **Proof.** We will prove it by induction. We claim that for any two blocks ${\mathcal B}_1,{\mathcal B}_2$ of the graph $G_s^n$ we have $\d({\mathcal B}_1,{\mathcal B}_2)\leq 2s-1,$ for $1\leq s\leq r$.\ We consider the block tree $B(G_1^n)$. Let ${\mathcal B}_1,{\mathcal B}_2$ be two blocks of the graph $G_1^n$. If both of them are leaves of the block tree $B(G_1^n)$ then $\d({\mathcal B}_1,{\mathcal B}_2)=1$ since there is exactly one internal block, which corresponds to the graph $G_0^n$. Otherwise, the distance is equal to 0. In every case $\d({\mathcal B}_1,{\mathcal B}_2)\leq 1=2\cdot1-1.$\ Suppose that the claim is true for $G_s^n$. We consider the graph $G_{s+1}^n$ and let ${\mathcal B}_1,{\mathcal B}_2$ be two of its blocks. Each of the blocks ${\mathcal B}_1,{\mathcal B}_2$ is either a block of the graph $G_s^n$ or has a common cut vertex with a block of the graph $G_s^n$. It follows from the induction hypothesis that $\d({\mathcal B}_1,{\mathcal B}_2)\leq (2s-1)+2=2(s+1)-1$. $\square$ We denote by $t_{A_{G_r^n}}$ the maximum degree of a circuit in the graph $G_r^n$. In the following proposition we provide a bound for $t_{A_{G_r^n}}$. \[degcircuit\] Let $t_{A_{G_r^n}}$ be the maximum degree of a circuit in the graph $G_r^n$. Then $t_{A_{G_r^n}}\leq n+(2r-1)(n-1)$. **Proof.** The graph $G_r^n$ has no even cycles and therefore the subgraph corresponding to a circuit consists of two different odd cycles joined by a path, see Theorem \[circuit\]. We remark that every cycle of the graph has length $n$ and is a block. Therefore it is enough to prove that a path between two blocks ${\mathcal B}_1,{\mathcal B}_2$ of $G_r^n$ has length at most $(2r-1)(n-1)$. Each such path passes through all internal blocks of ${\mathcal B}_1,{\mathcal B}_2$ and through no others, and has at most $n-1$ common edges with every one of them. Therefore the path has length at most $\d({\mathcal B}_1,{\mathcal B}_2)\cdot (n-1)\leq (2r-1)(n-1).$ Thus the corresponding circuit has degree at most $ n+(2r-1)(n-1)$. $\square$ [It is not difficult to see that the bound given in Proposition \[degcircuit\] is sharp. 
In fact, there are several appropriate choices for the two blocks ${\mathcal B}_1,{\mathcal B}_2$ of $G_r^n$ and a unique choice of the path between them such that the $t_{A_{G_r^n}}=n+(2r-1)(n-1)$.]{} There are several bounds on the degrees of the elements of the Graver basis of a toric ideal, see for example [@H; @St; @TT]. The following theorem is the main result of the paper. It shows that for a general toric ideal $I_A$ a bound given by a polynomial in $t_A$ for the degrees of the elements of the Graver basis does not exist. Recall that $t_A$ is the maximal true degree of a circuit in $I_A$. \[main\] The degrees of the elements in the Graver basis of a toric ideal $I_A$ cannot be bounded polynomially above by the maximal true degree of a circuit. **Proof.** Let $G$ be the graph $G_r^n$. It follows from Theorem \[truedeg\] and Proposition \[degcircuit\] that the maximal true degree of a circuit is linear on $r$, while from Proposition \[degprim\] there exists an element in the Graver basis whose degree is exponential in $r$. Therefore the degree of an element in the Graver basis $Gr_{A_G}$ of a toric ideal $I_{A_G}$ cannot be bounded polynomially above by the maximal true degree of a circuit in $\mathcal{ C}_{A_G}$. The proof of the theorem follows. $\square$ [00]{} S. Hosten, Degrees of Gröbner bases of integer programs, Ph.D Thesis Cornell University (1997). H. Ohsugi and T. Hibi, Toric ideals generated by quadratic binomials, J. Algebra 218 (1999) 509–527. S. Petrović, *On the universal Gröbner bases of varieties of minimal degree*, Math. Res. Lett. **15** (2008) 1211–1223. E. Reyes, Ch. Tatakis and A. Thoma, Minimal generators of toric ideals of graphs, Adv. Appl. Math. 48 (2012) 64-78. B. Sturmfels, Gr[ö]{}bner Bases and Convex Polytopes. University Lecture Series, No. 8 American Mathematical Society. B. Sturmfels, Equations defining toric varieties, Algebraic Geometry, Santa Cruz 1995, American Mathematical Society, Providence, RI, (1997) 437–449. Ch. Tatakis and A. Thoma, On the Universal Gröbner bases of toric ideals of graphs, J. Combin. Theory Ser. A 118 (2011) 1540-1548. R. Villarreal, Rees algebras of edge ideals, Comm. Algebra 23 (1995) 3513–3524. [^1]:
--- abstract: 'The Brazilian spherical antenna (Schenberg) is planned to detect high frequency gravitational waves (GWs) ranging from 3.0 kHz to 3.4 kHz. There is a host of astrophysical sources capable of being detected by the Brazilian antenna, namely: core collapse in supernova events; (proto)neutron stars undergoing hydrodynamical instability; f-mode unstable neutron stars, caused by quakes and oscillations; excitation of the first quadrupole normal mode of 4–9 solar mass black holes; coalescence of neutron stars and/or black holes; exotic sources such as bosonic or strange matter stars rotating at 1.6 kHz; and inspiralling of mini black hole binaries. Here we address in particular the neutron stars, which could well become f-mode unstable and therefore produce GWs. We estimate, for this particular source of GWs, the event rates that in principle can be detected by Schenberg and by the Dutch Mini-Grail antenna.' address: - | $^{1}$Instituto Nacional de Pesquisas Espaciais - Divisão de Astrofísica\ Av. dos Astronautas 1758, São José dos Campos, 12227-010 SP, Brazil - | $^{2}$Instituto Tecnológico de Aeronáutica - Departamento de Física\ Praça Marechal Eduardo Gomes 50, São José dos Campos, 12228-900 SP, Brazil author: - 'José Carlos N de Araujo$^{1}$, Oswaldo D Miranda$^{1,2}$ and Odylio D Aguiar$^{1}$' title: 'Detectability of f-mode unstable Neutron Stars by the Schenberg Spherical Antenna' --- Introduction ============ The Brazilian antenna is made of Cu-Al (94$\%$-6$\%$); it has a diameter of 65 cm and covers the 3.0-3.4 kHz bandwidth using a two-mode parametric transducer (see Aguiar 2002, 2004 for details). The initial target is a sensitivity of $\tilde{h}\sim 2\times 10^{-21}\, {\rm Hz^{-1/2}}$ in a 50 Hz bandwidth, which we believe is reachable at 4.2 K with conservative parameters. An advanced sensitivity will be pursued later, cooling the antenna to 15 mK and improving all parameters (see Frajuca 2004). The Brazilian antenna will operate in conjunction with the Dutch Mini-GRAIL antenna, and the laser interferometer detectors, which can also cover such high frequencies with similar sensitivities. It is worth mentioning that the event rates for the two spherical antennas will be the same, as long as they have the same sensitivity. So, throughout the paper, when we refer to the sources and the event rates for the Schenberg antenna, the reader has to bear in mind that we are also referring to the Dutch Mini-Grail antenna. In a previous work by an M.Sc. student of our group (Castro 2003), a preliminary study was conducted of the most probable sources of gravitational waves (GWs) that could be detected by Schenberg, namely: core collapse in supernova events; (proto)neutron stars undergoing hydrodynamical instability; f-mode unstable neutron stars, caused by quakes and oscillations; excitation of the first quadrupole normal mode of $4-9 M_{\odot}$ black holes; coalescence of neutron stars and/or black holes; exotic sources such as bosonic or strange matter stars rotating at 1.6 kHz; and inspiralling of mini black hole binaries. Our aim here is to present the results of this study and to revisit the $f-$mode unstable neutron star source of GWs. This specific pulsation mode is one of the most important channels for the emission of GWs (Kokkotas and Andersson 2001) by neutron stars. 
In particular, we use recent results for the radial distribution of pulsars in the Galaxy (Yusifov and Küçük 2004) in order to determine the event rate detectable by Schenberg for a given efficiency of generation of GWs. The paper is organized as follows. In section 2 we briefly consider the sources detectable by the Schenberg antenna, in section 3 we revisit the f-mode neutron star GW detectability by the Schenberg antenna, and finally in section 4 we present our conclusions. The Sources to Schenberg ======================== First of all, it is worth mentioning that in the present estimates of event rates we are assuming that Schenberg’s sensitivity to burst sources is $h \sim 10^{-20}$, which seems reasonable from our projected $\tilde{h}$ and bandwidth. It is important to bear in mind that such a sensitivity is not the quantum limit one, which could be a factor of around 5 better. Also, it is worth remembering that using the “squeezing technique” the quantum limit could in principle be surpassed. All this would imply that Schenberg, which will be using parametric transducers, could in principle present event rates significantly greater than those presented here; in some cases, the rates could well be higher by a factor of up to $\sim 10^{2}$. It is worth stressing that the Brazilian detector will be sensitive to sources of the local group of galaxies ($r\sim 1.5{\rm Mpc}$). In practice, however, only the Galaxy, ${\rm M}31$ and ${\rm M}33$ can give a significant contribution to the event rates, because these three galaxies account for more than $90\%$ of the mass of the local group. Our estimates show that, except for the mini black hole binaries and the f-mode unstable neutron stars, the other putative sources for Schenberg present event rates of at most one event every $\sim$ 10 years, at a signal-to-noise ratio (SNR) equal to unity. Thus, the prospect for the detection of these sources is not very promising. Because of this we do not enter into the details of such estimates. For the mini black hole binaries we refer the reader to the paper by de Araujo (2004) for details. They show that the event rate in this case may be one event every 5 years, at SNR equal to unity. Our main aim in the present paper is to pay attention to the f-mode unstable neutron star, which can in principle be an important source for the Schenberg antenna. In our previous study we showed that one such event per year would be detectable by Schenberg at a SNR equal to unity. In the next section we revisit the f-mode unstable neutron star study concerning its detectability, in particular by the Schenberg antenna. GWs from f-Mode Unstable Neutron Stars ====================================== Before studying the f-mode unstable neutron star as a source of GWs for the Brazilian antenna, which is of major interest here, it is worth considering its relevance as compared to the other pulsation modes such stars could have in what concerns the generation of GWs. Relativistic stars are known to have a host of pulsation modes. Only a few of them, however, are of relevance for GW detection. From the GW point of view the most important modes are the fundamental (f) mode of fluid oscillation, the first few pressure (p) modes and the first GW (w) mode (Kokkotas and Schutz 1992). Among these three modes the pulsation energy is mostly stored in the f-mode, in which the fluid parameters undergo the largest changes. 
It is worth mentioning that the r-mode can also be, under certain circumstances, a very important source of GWs (Andersson 2001). An important question is how the modes are excited in the neutron stars, which are of our concern here. There are many scenarios that could lead to significant asymmetries. A supernova explosion is expected to form a wildly pulsating neutron star that emits GWs. A pessimistic estimate for the energy radiated as GWs indicates a total release equivalent to $< 10^{-6}M_{\odot}$. An optimistic estimate, where the neutron star is formed, for example, from a strongly non-spherical collapse, suggests a release equivalent to $10^{-2}M_{\odot}$. Another possible excitation mechanism for neutron star pulsation is a starquake, which can be associated with a pulsar glitch. The energy released in this process may be of the order of the maximum mechanical energy stored in the crust of the neutron stars, which is estimated to be $10^{-9} - 10^{-7}M_{\odot}$ (Blaes 1989; Mock and Joss 1998). During the coalescence of two neutron stars several oscillation modes could in principle be generated. Stellar oscillations can be excited by the tidal fields of the two stars, for example. The neutron star may undergo a phase transition leading to a mini-collapse, which could lead to a sudden contraction during which part of the gravitational binding energy of the star would be released, and, as a result, it could occur that part of this energy would be channelled into pulsations of the remnant. Similarly, the transformation of a neutron star into a strange star is likely to induce pulsations. In our previous study we have found that Schenberg could in principle detect at least one such source per year at SNR equal to unity. The basic assumptions in this study are the following. Firstly, we have taken into account in our estimate only the known pulsars. Secondly, we have assumed that the energy released in GWs is of the order of $10^{-6} M_{\odot}$ (see, e.g., Andersson and Kokkotas 1996). Thirdly, we have associated the f-mode excitation with the same mechanism responsible for the glitch phenomenon, which is related to some neutron star internal structure rearrangement (see, e.g., Horvath 2004). Last but not least, we have assumed that the f-mode instability may produce GWs in the frequency band of $3.0-3.4$ kHz, that of Schenberg’s. We refer the reader to the paper by Kokkotas and Andersson (2001), in particular its figure 2, where it is shown clearly that for a family of equations of state (EOSs) GWs of $\sim 3$ kHz may be produced by f-mode unstable neutron stars. It is worth recalling that the GWs produced in the f-mode excitation depend on the EOS of neutron star matter, which, as is well known, is not completely established. Before considering the improvements we intend to take into account in revisiting this study, it is worth recalling that the characteristic amplitude of GWs related to the f-mode instability is given by $$\label{h} h \simeq 2.2\times 10^{-21}\left(\frac{\varepsilon_{GW}}{10^{-6} }\right)^{1/2}\left(\frac{2\, kHz}{f_{GW}}\right)^{1/2}\left(\frac{50\, kpc}{r} \right),$$ (see, e.g., Andersson and Kokkotas 1998) where $\varepsilon_{GW}$ is the efficiency of generation of GWs, $f_{GW}$ is the GW frequency, and $r$ is the distance to the source. It is worth mentioning that the f-mode is a burst source of GWs with a duration of tenths of a second (see, e.g., Andersson and Kokkotas 1998). 
This signal is concentrated in a bandwidth which lies completely within Schenberg’s bandwidth. As a result, in the calculation of the SNR it is a good approximation to compare the detector pulse sensitivity directly with the predicted wave amplitude of the f-mode GW. Schenberg’s sensitivity for burst sources can be of the order of $10^{-20}$. This implies that, for $\varepsilon_{GW} \sim 10^{-6}$ and $f_{GW}=$3 kHz, Schenberg can in principle detect f-mode unstable neutron star sources at distances of up to $r \sim 10$ kpc at $SNR\sim 1$. Certainly, the number of neutron stars within the volume [*seen*]{} by Schenberg could in principle be enormous. Unless the efficiency of generation of GWs through the f-mode instability is $ \ll 10^{-6}$ or such a mode is not excited at all, Schenberg could in principle detect f-mode unstable neutron stars with a considerable event rate. In this study the main ingredient we have taken into account is the distribution function of pulsars in the Galaxy. One could consider, however, that it would be desirable to take into account, instead, a distribution function for the neutron stars in the Galaxy, because the pulsar population is a tiny part (say 0.1-0.01%; later on we explain how these figures are obtained) of the neutron star population. But one has to bear in mind that most of the observed neutron stars are in fact seen in the form of pulsars. It is worth mentioning that if the f-mode instability occurs in any neutron star, whether it is a pulsar or not, the event rate seen by any GW detector sensitive to its frequency could be strongly enhanced. The distribution function for the pulsars in the Galaxy has been studied in many papers (see, e.g., Narayan 1987; Paczynski 1990; Hartman 1997; Lyne 1998; Schwarz and Seidel 2002, among others). Here we adopt the distribution function recently obtained by Yusifov and Küçük (2004), namely, $$\rho(R)=A\biggl({R \over R_\odot} \biggr)^a \exp{\biggl[-b\biggl({R-R_\odot \over R_\odot }\biggr) \biggr] } \label{Gam14} ,$$ where $\rho(R)$ is the surface density of pulsars, $R$ is the Galactocentric distance, and $R_\odot=8.5$ kpc is the Sun$-$Galactic center (GC) distance. Note that equation (\[Gam14\]) implies $\rho(0)=0$, which is inconsistent with observations. To avoid this problem the authors introduce an additional parameter $R_1$ and use a shifted Gamma function, replacing $R$ and $R_\odot$ in equation (\[Gam14\]) by $X=R+R_1$ and $X_\odot =R_\odot +R_1$, respectively. The best fit, using the LMS method, gives $A=37.6\pm1.9\ {\rm kpc}^{-2}$, $a=1.64\pm0.11$, $b=4.01\pm0.24$ and $R_1=0.55\pm0.10$ kpc. We refer the reader to the paper by Yusifov and Küçük (2004) for further details. In figure 1 we present the number of pulsars as a function of the distance from the Sun, which has been obtained through integration of equation (\[Gam14\]). Also shown is the number of pulsars corrected for the beaming factor, which multiplies that number by a factor of approximately 10 (see, e.g., Tauris and Manchester 1998). The number of pulsars that could be seen by Schenberg amounts to $\sim 10^{5}$ (taking into account the beaming correction). In the whole Galaxy the number of pulsars with luminosity greater than 0.1 mJy ${\rm kpc^{2}}$ at 1400 MHz is predicted to be $\sim 2.4\times 10^{5}$, taking into account the beaming factor. Again, we refer the reader to the paper by Yusifov and Küçük (2004) for further details.
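The heliocentric counts shown in figure 1 can be approximated with a few lines of numerical integration. The sketch below integrates the shifted-Gamma surface density with the best-fit parameters quoted above over the Galactic plane; the purely two-dimensional (thin-disk) treatment, the finite grid, and the constant beaming factor of 10 are simplifying assumptions made only for illustration and are not part of the original analysis.

```python
import numpy as np

# Sketch: count pulsars within a heliocentric distance d by integrating the
# Yusifov & Kucuk (2004) shifted-Gamma surface density over the Galactic plane.
# Parameters are the best-fit values quoted in the text; the thin-disk (2-D)
# approximation and the grid resolution are simplifying assumptions.
A, a, b, R1, Rsun = 37.6, 1.64, 4.01, 0.55, 8.5   # kpc^-2, -, -, kpc, kpc

def surface_density(R):
    """Pulsar surface density rho(R) in kpc^-2 (shifted-Gamma form)."""
    X, Xsun = R + R1, Rsun + R1
    return A * (X / Xsun) ** a * np.exp(-b * (X - Xsun) / Xsun)

def n_within(d_kpc, ngrid=2001, beaming=1.0):
    """Number of pulsars within heliocentric distance d (optionally beaming-corrected)."""
    # Cartesian grid centred on the Sun at (Rsun, 0), with the GC at the origin.
    x = np.linspace(-d_kpc, d_kpc, ngrid)
    xx, yy = np.meshgrid(Rsun + x, x)
    R = np.hypot(xx, yy)                              # Galactocentric radius
    inside = (xx - Rsun) ** 2 + yy ** 2 <= d_kpc ** 2  # heliocentric disk
    cell = (x[1] - x[0]) ** 2
    return beaming * np.sum(surface_density(R)[inside]) * cell

if __name__ == "__main__":
    for d in (2.0, 5.0, 10.0):
        print(d, n_within(d), n_within(d, beaming=10.0))
```

Under these assumptions the Galaxy-wide total comes out near the $\sim 2.4\times10^{4}$ pulsars ($\sim 2.4\times10^{5}$ with beaming) quoted above, and the beaming-corrected count within $d\sim 10$ kpc is of the order of $10^{5}$, consistent with figure 1.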
![Number of pulsars as a function of the distance from the Sun without (solid line) and with (dashed line) the beaming factor correction.](dearaujorv01.eps){width="6cm"}

Now, from the catalogs available at that time, Castro (2003) obtained that only $3\%$ of the known pulsars present the glitch phenomenon, with the number of glitches in 25 years amounting to 45. From Castro we obtain that the number of glitches per year per pulsar amounts to $2.6\times 10^{-3}$. Since we are considering that the f-mode is capable of being excited during the glitch phenomenon, we have: $$2.6\times 10^{-3}\ {\rm events/yr/pulsar}.$$ Finally, to obtain the number of events per year detectable by Schenberg, we use the results presented in figure 1. Since the efficiency of generation of GWs through the f-mode channel is not known, we present in figure 2 the event rate detectable by Schenberg as a function of $\varepsilon_{GW}$. Note that for $\varepsilon_{GW} > 10^{-8}$ ($10^{-7}$) we predict that one f-mode source could in principle be detected at $SNR=1$ ($SNR=3$) every year (see also the illustrative sketch below). It is worth noting that the number of pulsars used in the calculation of figure 2 takes into account the beaming correction.

![Number of events per year, i.e., number of f-mode unstable pulsars per year, as a function of $\varepsilon_{GW}$, the efficiency of generation of GWs, detectable by Schenberg at $SNR=1$ and $3$, assuming that Schenberg’s sensitivity to burst sources is $h \sim 10^{-20}$.](dearaujorv02.eps){width="6cm"}

Note that the event rate is critically dependent on the value of $\varepsilon_{GW}$. In figure 2 one sees that if this parameter is too small the event rate is very small too. However, the prediction appearing in figure 2 takes into account only the neutron stars in the form of pulsars with luminosity greater than 0.1 mJy ${\rm kpc^{2}}$ at 1400 MHz. The number of neutron stars in the Galaxy, pulsars or not, could well be a factor of a thousand, or even tens of thousands, greater. Paczynski (1990), for example, estimates that there may exist $\sim 10^{9}$ neutron stars in the Galaxy. Other authors find similar figures, namely: Nelson (1995) and Walter (2001) argue that there may exist $\sim 10^{8} - 10^{9}$ neutron stars in the Galaxy; whereas Timmes (1996), using models for massive stellar evolution coupled to a model for Galactic chemical evolution, obtained $\sim 10^{9}$ neutron stars in the Galaxy. If non-pulsar neutron stars can also be f-mode unstable, the event rate detectable by Schenberg could be greatly enhanced. In particular, if the fraction of neutron stars which are f-mode unstable is similar to the corresponding fraction of pulsars, the event rate that could be detected by Schenberg would be a thousand, or even a few thousand, times greater. If this is the case, even for $\varepsilon_{GW} \sim 10^{-10}$, Schenberg could detect $\sim 10-100$ events every year at $SNR=3$.

Final Remarks
=============

Particular attention has been given here to the f-mode sources, because they can in principle be among the most important candidates for detection by the Schenberg antenna, with an event rate that could amount to several sources every year. Since the interferometers are also sensitive to the GWs generated by f-mode neutron stars, it would be of interest to search for these sources with such detectors.
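As a closing illustration, the estimate behind figure 2 can be reproduced to order of magnitude by combining the glitch-based excitation rate, the horizon distance from equation (\[h\]), and the beaming-corrected pulsar counts. The sketch below does this under the same simplifying assumptions as the earlier sketches (thin disk, constant beaming factor of 10, sensitivity $h\sim10^{-20}$); it is an illustrative reconstruction, not the calculation actually used for figure 2.

```python
import numpy as np

# Sketch: event rate detectable at a given SNR as a function of eps_GW, combining
# (i) the glitch-based excitation rate 2.6e-3 events/yr/pulsar quoted in the text,
# (ii) the horizon distance implied by the amplitude formula (equation [h]), and
# (iii) a beaming-corrected pulsar count within that horizon.
# The surface-density integral and parameter values repeat the earlier sketches;
# this is an order-of-magnitude reconstruction of figure 2, not the authors' code.
GLITCH_RATE = 2.6e-3                 # events / yr / pulsar
H_SENS, F_GW_KHZ, BEAMING = 1e-20, 3.0, 10.0
A, a, b, R1, Rsun = 37.6, 1.64, 4.01, 0.55, 8.5

def n_within(d_kpc, ngrid=1501):
    x = np.linspace(-d_kpc, d_kpc, ngrid)
    xx, yy = np.meshgrid(Rsun + x, x)
    R = np.hypot(xx, yy)
    X, Xsun = R + R1, Rsun + R1
    rho = A * (X / Xsun) ** a * np.exp(-b * (X - Xsun) / Xsun)
    inside = (xx - Rsun) ** 2 + yy ** 2 <= d_kpc ** 2
    return BEAMING * np.sum(rho[inside]) * (x[1] - x[0]) ** 2

def event_rate(eps_gw, snr=1.0):
    # SNR >= snr requires h >= snr * H_SENS, i.e. r <= r_max / snr.
    r_max = 2.2e-21 * (eps_gw / 1e-6) ** 0.5 * (2.0 / F_GW_KHZ) ** 0.5 * 50.0 / (snr * H_SENS)
    return GLITCH_RATE * n_within(min(r_max, 30.0))   # cap at roughly the Galactic scale

if __name__ == "__main__":
    for eps in (1e-9, 1e-8, 1e-7, 1e-6):
        print(eps, event_rate(eps, snr=1.0), event_rate(eps, snr=3.0))
```

Under these assumptions the rate reaches about one event per year near $\varepsilon_{GW}\sim10^{-8}$ at $SNR=1$ and near $\varepsilon_{GW}\sim10^{-7}$ at $SNR=3$, consistent with the trend read off figure 2.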
Since the sensitivity of the interferometers at 3 kHz (see, e.g., Shoemaker 2005) could well be similar to that of the Schenberg antenna, the event rates of both detectors could also be similar. Also, it is worth mentioning that the interferometers could probe a wider range of EOSs, as compared to the Schenberg antenna, since they are sensitive to a broader GW frequency band. Finally, it is worth mentioning that Kokkotas et al (2001) show that, by detecting the f-mode, the EOS, the mass and the radius of the neutron stars can be strongly constrained. The reader is encouraged to read this paper by Kokkotas et al, who show in detail how the above-mentioned astrophysical information is obtained from the GW data.

Aguiar O D 2002 1949
Aguiar O D 2004 S457
Andersson N and Kokkotas K D 1996 4134
Andersson N and Kokkotas K D 1998 [*Mon. Not. R. Astron. Soc.*]{} [**299**]{} 1059
Blaes O, Blandford R, Goldreich P and Madau P 1989 [*Astrophys. J.*]{} [**343**]{} 839
Castro C S 2003 [*Master thesis*]{} (S.J. Campos: INPE-10118-TDI/896)
de Araujo J C N, Miranda O D, Castro C S, Paleo B W and Aguiar O D 2004 S521
Frajuca C, Ribeiro K L, Andrade L A, Aguiar O D, Magalhães N S and de Melo Marinho Junior R 2004 S1107
Hartman J W 1997 [*Astron. Astr.*]{} [**322**]{} 477
Horvath J E 2004 [*Int. J. Mod. Phys.*]{} [**D 13**]{} 1327
Kokkotas K D and Andersson N 2001 [*Preprint gr-qc/0109054*]{}
Kokkotas K D, Apostolatos A T and Andersson N 2001 [*Mon. Not. R. Astron. Soc.*]{} [**320**]{} 307
Kokkotas K D and Schutz B F 1992 [*Mon. Not. R. Astron. Soc.*]{} [**255**]{} 119
Lyne A G 1998 [*Mon. Not. R. Astron. Soc.*]{} [**295**]{} 743
Mock P C and Joss P C 1998 [*Astrophys. J.*]{} [**500**]{} 374
Narayan R 1987 [*Astrophys. J.*]{} [**319**]{} 162
Nelson R W, Wang J C L, Salpeter E E and Wasserman I 1995 [*Astrophys. J.*]{} [**438**]{} L99
Paczynski B 1990 [*Astrophys. J.*]{} [**348**]{} 485
Schwarz D J and Seidel D 2002 [*Astron. Astr.*]{} [**388**]{} 483
Shoemaker D 2005 in press
Timmes F X, Woosley S E and Weaver T A 1996 [*Astrophys. J.*]{} [**457**]{} 83
Tauris T M and Manchester R N 1998 [*Mon. Not. R. Astron. Soc.*]{} [**298**]{} 625
Walter F M 2001 [*Astrophys. J.*]{} [**549**]{} 433
Yusifov I and Küçük I 2004 [*Astron. Astr.*]{} [**422**]{} 545
--- abstract: 'Different notions of amenability of hypergroups and the relations between them are studied. Developing Leptin’s theorem for hypergroups, we characterize the existence of a bounded approximate identity for the Fourier algebra of hypergroups. We study the Leptin condition for some classes of hypergroups derived from representation theory of compact groups. Studying amenability of the hypergroup algebra of discrete commutative hypergroups, we apply these hypergroup tools to derive some results on amenability properties of some Banach algebras on locally compact groups. We prove some results concerning amenability of $ZA(G)=A(G) \cap ZL^1(G)$ for compact groups $G$ and amenability of $Z\ell^1(G)$ for FC groups $G$. Also we show that proper Segal algebras of compact connected simply connected real Lie groups are not approximately amenable. 1.0em [**Keyword:**]{} hypergroups; Fourier algebra; amenability; compact groups. 1.0em [**AMS codes:**]{} 43A62, 46H20.' author: - bibliography: - 'Bibliography.bib' --- 2.5em Amenability notion of locally compact groups has different characterizations which lead to different definitions. For example, amenability of locally compact groups is equivalent to a family of structural properties called . Also subsequently, the amenability leads to the existence of a bounded approximate identity of the Fourier algebra of the group. And as Johnson, [@jo2], proved they are equivalent to the existence of a virtual diagonal for the group algebra. Leptin, [@lep], showed that the Følner type conditions on a group admit a bounded approximate identity of the Fourier algebra. Indeed, the existence of a bounded approximate identity is equivalent to the amenability of the underlying group. This result is known as . So, although we experience a variety in the definitions, eventually a unity in the notion emerges (unlike hypergroups as we observe in the following). Similar to the amenability of groups, different notions of amenability have been defined for hypergroups. Skantharajah, [@sk], defined the actual concept of in the sense of the existence of a left invariant mean. According to this definition of amenability, large classes of hypergroups are amenable including all commutative hypergroups and compact hypergroups. But unlike groups, the existence of a left invariant mean does not imply the amenability of the corresponding hypergroup algebra, [@sk], while the other side of the Johnson’s theorem is still true for hypergroups that is, the amenability of the hypergroup algebra implies the existence of a left invariant mean. Skantharajah also defined (approximately invariant) in the hypergroup algebra, denoted by $(P_1)$, and the Banach space of all square integrable functions on a hypergroup, denoted by $(P_2)$. He showed that $(P_2)$ implies $(P_1)$ and $(P_1)$ is equivalent to the amenability. These notions were also defined and studied for which form discrete hypergroups, [@izu]. The (as a specific case of the Følner conditions) for hypergroups first was defined by Singh in [@singh-mem] to study the norm of positive integrable functions as operators over the $L^2$-space of a hypergroup. After a long gap, the author, in [@ma], independently defined the [Leptin condition]{} on hypergroups and studied its application to the hypergroup structure defined on duals of compact groups. In this paper, we investigate different amenability notions of hypergroups and their relations. First, we look closer at a generalization of Følner type conditions over hypergroups. 
This generalization not only investigates some properties of particular classes of hypergroups, but it also improves our knowledge about the Fourier algebra of hypergroups. We should note that the Fourier space is in general just a Banach space. But for wide classes of hypergroups –specially the hypergroup structures which are admitted by groups– this Banach space actually forms a Banach algebra, [@mu1; @mu2]. This class of hypergroups are called . We study the existence of a bounded approximate identity of the Fourier algebra for the class of regular Fourier hypergroups. The outcome is Leptin’s theorem for hypergroups which shows that a bounded approximate identity can always be normalized to $1$ and its existence is equivalent to $(P_2)$. We also study amenability of hypergroup algebras for discrete commutative hypergroups satisfying $(P_2)$. We show that for this class of hypergroups, the hypergroup algebra cannot be amenable (as a Banach algebra) if the Haar measure goes to infinity (as a weight on the hypergroup). Some of the main examples of hypergroup structures are mathematical objects derived from locally compact groups. These hypergroup instances highlight a hypergroup perspective towards some Banach algebras on locally compact groups. For example, the center of group algebras for locally compact groups with relatively compact conjugacy classes are hypergroup algebras. Also, the subalgebra of the Fourier algebra on compact groups which consists of all class functions, denoted by $ZA(G)$, is a hypergroup algebra. These close ties not only emphasize the applications that this hypergroup study has for locally compact groups but also give us a rich class of examples on which we observe our theory. In this paper, we apply hypergroup machinery to study some amenability properties of these algebras. This paper is organized as follows. In Section \[s:hypergroups\], we introduce the notations and present some facts regarding hypergroups and their Fourier spaces that will be used later. One may note that the Fourier space of hypergroups, even as a Banach space, carries interesting properties, [@samea1; @vr]. Here we show that similar to locally compact groups, every element of the Fourier space is the convolution of two square integrable functions. Section \[ss:Folner-condition-on-\^G\] introduces and studies Følner type conditions, in particular the Leptin condition for hypergroups. First in Subsection \[ss:definition-of-Leptins\], we define different Følner type conditions on hypergroups and consider their co-relations. In an attempt to answer a question about Segal algebras on compact groups, the author, in [@ma], defined the Leptin condition for hypergroups. Here, we expand that study by defining more conditions. These conditions on one hand help us to study the behaviour of hypergroup Haar measures analogous to groups. On the other hand, they develop a criterion to vaguely measure the growth rate of the hypergroup action between subsets of a hypergroup. The question of approximate amenability of Segal algebras of compact groups highlights the importance of the Leptin condition for the class of discrete hypergroups defined on dual of compact groups, [@ma]. Hence, in Subsection \[ss:Leptin-number\], we apply some studies on the irreducible decomposition of tensor products on representations of compact groups to consider the Leptin condition for the dual hypergroup structure of some classes of compact groups including $\operatorname{SU}(n)$ for $n\geq 2$. 
The equivalence of the Leptin condition and the existence of a bounded approximate identity of Fourier algebras is a crucial part of the theory of amenable groups. In Section \[ss:bai-of-A(H)-Leptin-condition\], we apply the notion of the hypergroup Leptin condition to study the existence of a bounded approximate identity for the Fourier algebra of regular Fourier hypergroups. (Following [@mu1], we call a hypergroup when its Fourier space forms a Banach algebra.) In Subsection \[ss:Reiter-condition\], we characterize the existence of a bounded approximate identity in the Fourier algebra of these hypergroups with respect to the amenability notion $(P_2)$. This characterization is an analog of . Although $(P_2)$ implies the amenability of a hypergroup, it is strictly stronger. We apply the hypergroup Leptin’s theorem to double coset hypergroups and conclude $(P_2)$ for them when they are admitted by amenable locally compact groups. We close the section by a brief study on the existence of bounded approximate identities in ideals of Fourier algebras for regular Fourier hypergroups in Subsection \[ss:bai-of-ideals\]. Johnson, in [@jo2], proved that the amenability of groups is equivalent to the amenability of their group algebras. For hypergroups, it is known that the amenability of a hypergroup algebra implies the existence of a left invariant mean but not necessarily vice versa, [@sk]. The question of amenability of hypergroup algebras is a challenging question and so far most of the partial answers are concerning specific classes of hypergroups, especially polynomial hypergroups, (see [@la2; @la6]). In Section \[s:AM-L1(H)\], we study the amenability of the hypergroup algebra of discrete commutative hypergroups with property $(P_2)$. As we mentioned before, this result and the appearance of $(P_2)$ in hypergroup Leptin’s condition emphasize the importance of this amenability notion of $(P_2)$. Interestingly, $(P_2)$ is even equivalent to a type condition for fusion algebras, see [@izu] where $(P_2)$ is even called the of a fusion algebra while the actual concept of amenability is called the . We close this section by studying the amenability of a class of multivariable polynomial hypergroups known as . Here we apply a significantly shorter proof to extend a result of [@la2] which was for one variable version of these hypergroups. Due to the fact that locally compact groups have strong ties to hypergroups, our results in the preceding sections have applications to groups; Section \[s:Applications\] presents some of these applications. First in Subsection \[ss:AA-Segals\], applying the information obtained from the Leptin condition on duals of compact groups, we extend the main result of [@ma]; every proper Segal algebra on compact connected simply connected real Lie groups is not approximately amenable. Subsection \[ss:AM-ZA(G)\] studies the amenability of $ZA(G)(=A(G) \cap ZL^1(G))$ for a compact group $G$ with respect to some conditions on the dimensions of the irreducible unitary representations of $G$. The main result of this subsection is an analog of a result by Johnson in [@jo2] which shows that $ZA(G)$ is not amenable for . Eventually, Subsection \[ss:amenability-of-zl1(G)\], studies the amenability of the center of the group algebra for (infinite discrete) . A discrete group $G$ is called FC or if every conjugacy class of $G$ is finite. 
Here we observe that if for every integer $n$ there are just finitely many conjugacy classes with the cardinality $n$, then the center of the group algebra is not amenable. This subsections follows a previous paper of the author with Yemon Choi and Ebrahim Samei, [@ma2], which concerned the amenability of a specific class of FC groups. Some results of this paper are based on work from the author’s Ph.D. thesis, [@ma-the], under the supervision of Yemon Choi and Ebrahim Samei. We warn the reader that many facts which are either immediate or well known for Følner conditions and other amenability notions of (amenable) groups and their Fourier algebras are unknown for regular Fourier hypergroups. One may look at [@ma-the; @sk] for some counterexamples. Here we also highlight some critical differences that one should consider when one generalizes group ideas to hypergroups in some remarks in the manuscript. 1.5em [Preliminaries and notation]{}\[s:hypergroups\] Since some results in this paper may target people who are not deeply engaged in the theory of hypergroups, we present a rather detailed background here. [Hypergroups]{} For notations, definitions, and properties of hypergroups, we mainly cite [@bl]. As a short summary for hypergroups, we may present the following. \[d:discrete-hypergroups\][@bl 1.1.2]\ We call a locally compact space $H$ a if the following conditions hold. - [There exists an associative binary operation $*$ called the on $M(H)$ under which $M(H)$ is an algebra. Moreover, for every $x$, $y$ in $H$, $\delta_x * \delta_y$ is a positive measure with compact support and $\norm{\delta_x*\delta_y}_{M(H)}=1$.]{} - [ The mapping $(x,y)\mapsto \delta_x*\delta_y$ is a continuous map from $H\times H$ into $M(H)$ equipped with the weak$^*$ topology that is $\sigma(M(H), C_c(H))$.]{} - [The mapping $(x,y)\rightarrow \supp(\delta_x*\delta_y)$ is a continuous mapping from $H\times H$ into ${\textfrak C}(H)$ equipped with the Michael topology (see [@bl 1.1.1]).]{} - [ There exists an element (necessarily unique) $e$ in $H$ such that for all $x$ in $H$, $\delta_e * \delta_x = \delta_x * \delta_e = \delta_x$.]{} - There exists a (necessarily unique) homeomorphism $x \rightarrow \check{x}$ of $H$ called satisfying the following: - [$(\check{x}\check{)} = x$ for all $x \in H$.]{} - [If $\check{f}$ is defined by $\check{f}(t) := f(\check{t})$ [for all]{} $f\in C_c(H)$ and $t\in H$, one may define $\check{\mu}(f):=\mu(\check{f})$ for all $\mu\in M(H)$. Then $(\delta_x * \delta_y\check{)} = \delta_{\check{y}} * \delta_{\check{x}}$ [ for all ]{} $x, y \in H$. ]{} - [$e$ belongs to $\supp (\delta_x * \delta_y)$ if and only if $y = \check{x}$.]{} Let $(H,*,\tilde{ }\;)$ be a (locally compact) . The notation $A*B$ stands for $\cup\{\supp(\delta_x*\delta_y):\; \text{for all }\;x\in A, y\in B\}$ for $A,B$ subsets of the hypergroup $H$. By abuse of notation, we use $x*A$ to denote $\{x\}*A$. For each $f\in C_c(H)$ and $x,y\in H$, one may define $L_xf(y):=\delta_x*\delta_y(f)$. A Borel measure $h$ on $H$ is called a (left) if $h(L_xf)=h(f)$ for all $f\in C_c(H)$ and $x\in H$. Also $$f*_hg(x):=\int_{H} f(t) L_{\tilde{t}}g(x) dh(t),\ \ \text{and} \ \ \norm{f}_{1}:=\int_{H} |f(t)| dh(t)$$ for all $f,g\in C_c(H)$ and $x\in H$. Then the set of all $h$-integrable functions on $H$ forms a Banach algebra, denoted by $(L^1(H,h),*_h,\norm{\cdot}_1)$; it is called the of $H$. Let $H$ be a hypergroup with a Haar measure $h$. 
Then the $\Delta$ is defined on $H$ by the identity $h*\delta_{\tilde{x}}=\Delta(x)h$ for every $x\in H$. $\Delta$ is continuous, and over every set $\{x\}*\{y\}$, for all $x,y\in H$, $\Delta$ is constantly equal to $\Delta(x)\Delta(y)$. In particular, if $\Delta(x)\Delta(\tilde{x})=\Delta(\delta_x*\delta_{\tilde{x}})=\Delta(e)=1$. Also similar to locally compact groups, $$\label{eq:modular-function} \int_{H} f(\tilde{y}) dh(y)=\int_H \frac{1}{\Delta(y)} f(y) dh(y).$$ Hypergroups which are from either of compact, discrete, or commutative type, are unimodular i.e. $\Delta\equiv 1$ on $H$. For some classes of hypergroups including discrete and/or commutative and/or compact hypergroups, the existence of a Haar measure can be proved. So from now on, we assume that every hypergroup $H$ possesses a Haar measure. Note that unlike groups, the Haar measure on discrete hypergroups is not necessarily a fixed multiplier of the counting measure. Let $H$ be a discrete hypergroup equipped with a Haar measure $h$. Then, $\ell^1(H)=M(H)$ is a Banach algebra. Also the mapping $f\mapsto fh$, $L^1(H,h)\rightarrow \ell^1(H)$ is an isometric algebra isomorphism from the Banach algebra $L^1(H,h)$ onto the Banach algebra $\ell^1(H)$. The following proposition is a discrete version of [@bl Proposition 1.2.16]. Here we present a short proof applying the definition of discrete hypergroups. \[p:L\^1-C\_0\] Let $H$ be a discrete hypergroup. Then for every $\phi \in c_0(H)$ and $f \in L^1(H,h)$, the function $f*\phi$ belongs to $c_0(H)$. Let $\epsilon>0$ be fixed. Therefore there is some $K\subset H$ finite such that for every $x\in H\setminus K$, $|\phi(x)|<\epsilon \norm{f}_1^{-1}$. Also there is some $F\subseteq H$ finite such that $$\sum_{x\in H\setminus F} |f(x)| h(x) <\epsilon \norm{\phi}_\infty^{-1}.$$ Based on the definition of convolution between sets and (H1) in Definition \[d:discrete-hypergroups\], it is obvious that $C:=F*K$ is a finite subset of $H$. Let $x\in H\setminus C$, $t\in F$, and $s\in K$. If $\delta_{\tilde{t}}*\delta_x(s)\neq 0$, $s\in \tilde{t}*x$. Therefore, by (H6), $e\in \tilde{s}* \tilde{t}*x$. Again (H6) implies that $\tilde{x}\in \tilde{s}*\tilde{t}$ or equivalently $x\in t*s \subseteq F*K$ which is a contradiction. Hence, for $x\in H\setminus C$, $t\in F$, and $s\in K$, $\delta_{\tilde{t}}*\delta_x(s)= 0$. Consequently, $$\sum_{t\in F } |f(t)| \sum_{s\in K} |\phi(s)| \delta_{\tilde{t}}* \delta_x(s)h(t)=0.$$ Therefore for $x\in H\setminus C$, one gets $$\begin{aligned} \left| \sum_{t\in H} f(t) \phi(\delta_{\tilde{t}}* \delta_x) h(t)\right| &\leq& \left| \sum_{t\in F } f(t) \phi(\delta_{\tilde{t}}* \delta_x) h(t)\right| + \left| \sum_{t\in H \setminus F } f(t) \phi(\delta_{\tilde{t}}* \delta_x) h(t)\right|\\ &\leq& \sum_{t\in F } |f(t)| |\phi(\delta_{\tilde{t}}* \delta_x)|h(t) + \sum_{t\in H \setminus F } |f(t)| h(t) \norm{\phi}_\infty \\ & \leq& \epsilon + \sum_{t\in F } |f(t)| \sum_{s\in H} |\phi(s)| \delta_{\tilde{t}}* \delta_x(s) h(t)\\ & =& \epsilon + \sum_{t\in F } |f(t)| \sum_{s\in H\setminus K} |\phi(s)| \delta_{\tilde{t}}* \delta_x(s) h(t) \\ &+& \sum_{t\in F } |f(t)| \sum_{s\in K} |\phi(s)| \delta_{\tilde{t}}* \delta_x(s) h(t) \\ & \leq& \epsilon + \sup_{s\in H\setminus K} |\phi(s)| \norm{f}_1 + \sum_{t\in F } |f(t)| \sum_{s\in K} |\phi(s)| \delta_{\tilde{t}}* \delta_x(s) h(t) = 2\epsilon.\end{aligned}$$ And this finishes the proof. Let $C(H)$ denote the set of all continuous bounded functions on $H$. 
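For readers less used to hypergroups, a very small concrete instance of the above definitions may help. The Python sketch below builds the conjugacy-class hypergroup of the symmetric group $S_3$, a standard example of the kind of class hypergroup attached to FC groups mentioned elsewhere in this paper, chosen here purely for illustration: the convolution $\delta_{C_i}*\delta_{C_j}$ is the distribution of the conjugacy class of $xy$ for $x,y$ drawn uniformly from $C_i$ and $C_j$, and the Haar weight is $h(C)=|C|$. The script checks that every $\delta_{C_i}*\delta_{C_j}$ is a probability measure and that $h$ is left invariant; note that $h$ takes the values $1,2,3$, illustrating the remark above that the Haar measure of a discrete hypergroup need not be a multiple of the counting measure.

```python
from itertools import permutations
from collections import defaultdict
from fractions import Fraction

# Illustrative sketch (not from the paper): the conjugacy-class hypergroup of S_3.
# Points are conjugacy classes; delta_Ci * delta_Cj is the distribution of the
# class of x*y for x, y uniform on Ci, Cj; the Haar weight is h(C) = |C|.

group = list(permutations(range(3)))            # S_3 as tuples

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj_class(g):
    return frozenset(compose(compose(x, g), inverse(x)) for x in group)

classes = sorted({conj_class(g) for g in group}, key=len)
names = dict(zip(classes, ["{e}", "3-cycles", "transpositions"]))

def convolve(Ci, Cj):
    """delta_Ci * delta_Cj as a dict {class: probability}."""
    out = defaultdict(Fraction)
    for x in Ci:
        for y in Cj:
            out[conj_class(compose(x, y))] += Fraction(1, len(Ci) * len(Cj))
    return dict(out)

# (H1): each delta_Ci * delta_Cj is a probability measure with finite support.
assert all(sum(convolve(Ci, Cj).values()) == 1 for Ci in classes for Cj in classes)

# Left invariance of h(C) = |C|:  sum_Cy h(Cy) (delta_Cx * delta_Cy)(Cs) = h(Cs).
for Cx in classes:
    for Cs in classes:
        total = sum(len(Cy) * convolve(Cx, Cy).get(Cs, Fraction(0)) for Cy in classes)
        assert total == len(Cs)

for Ci in classes:
    for Cj in classes:
        table = {names[Ck]: str(p) for Ck, p in convolve(Ci, Cj).items()}
        print(names[Ci], "*", names[Cj], "=", table)
```

For instance, the convolution of the transposition class with itself assigns mass $1/3$ to $\{e\}$ and $2/3$ to the class of $3$-cycles, so the identity element lies in the support exactly when the second factor is the involution of the first, as required by (H6).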
If $H$ is a commutative hypergroup, the , denoted by $\widehat{H}$, is defined to be the set $$\{\alpha\in C(H)\;|\; \alpha(\delta_x*\delta_y)=\alpha(x)\alpha(y), \alpha(\tilde{x})=\overline{\alpha(x)} \}.$$ One can show that $\wH$ is the set of all $*$-multiplicative functionals of $L^1(H,h)$. Therefore, $\wH$ can be equipped with the Gelfand spectrum topology as the maximal ideal space of the hypergroup algebra which forms a locally compact space. For each $f\in L^1(H)$, the Gelfand transform $f\mapsto \mathcal{F}{f}$ is defined by $$\mathcal{F}{f}(\alpha)=\int_{H} f(x) \overline{\alpha(x)} dh(x)\ \ \ (\alpha \in \wH)$$ and is called which is a norm decreasing injection from $L^1(H,h)$ into $C_0(\wH)$. Also one may define the $\mathcal{FS}: M(H) \rightarrow C(\wH)$ as another norm decreasing injection by $$\mathcal{FS}(\mu)(\alpha):=\int_H \overline{\alpha(x)} d\mu(x)\ \ \ (\alpha \in \wH).$$ There exists a measure $\varpi$ on $\wH$ such that for every $f\in L^1(H,h) \cap L^2(H,h)$, $$\int_H |f(x)|^2 dh(x) = \int_{\wH} |{\mathcal{F}(f)}(\alpha)|^2 d\varpi(\alpha).$$ The measure $\varpi$ is called the . [Fourier algebra of hypergroups]{} For a compact hypergroup $H$, Vrem in [@vr] defined the similar to the Fourier algebra of a compact group. Subsequently, Muruganandam, [@mu1], defined the on an arbitrary (not necessary compact) hypergroup $H$ using irreducible representations of $H$ as analogous to the Fourier-Stieltjes algebra on locally compact groups. Subsequently, he defined the [ Fourier space]{} of a hypergroup $H$, as a closed subspace of the Fourier-Stieltjes algebra, generated by $\{f*_h\tilde{f}:\; f\in L^2(H,h)\}$ or equivalently generated by $\{f*_h \tilde{f}: f\in C_c(H)\}$; hence, $A(H) \cap C_c(H)$ is dense in $A(H)$. Further, $A(H) \subseteq C_0(H)$, $\norm{\cdot}_\infty \leq \norm{\cdot}_{A(H)}$, and for every $u\in A(H)$, $L_x u$, $\check{u}$, and $\overline{u}$ belong to $A(H)$. 1.0em For a hypergroup $H$, it is known that for every $x \in H$ and $f\in L^2(H)$, $L_x f \in L^2(H)$ while $\norm{L_xf}_2=\norm{L_x}_2$ (see [@bl (1.3.18)]). Therefore, $L_x$ is an operator in ${\cal B}(L^2(H))$ which is denoted by $\lambda(x)$. The von Neumann sub-algebra of ${\cal B}(L^2(H))$ generated by $(\lambda(x))_{x\in H}$ is called the of $H$ and denoted by $VN(H)$. By [@mu1 Theorem 2.19], for every $T\in VN(H)$ there exists a unique continuous linear functional $\phi_T$ on $A(H)$ satisfying $\phi_T(u)=\langle T(f), g\rangle_{L^2(H)}$ where $\check{u}=f*\tilde{g}$. The mapping $T \mapsto \phi_T$ is a Banach space isomorphism between $VN(H)$ and $A(H)^*$. Moreover, the above mapping is also a homeomorphism when $VN(H)$ is given the $\sigma$-weak topology and $A(H)^*$ is given weak$^*$ topology. On the other hand, for each $f\in L^1(H)$, $f*_hg\in L^2(H)$ for $g\in L^2(H)$ while $\norm{f*_hg}_2 \leq \norm{f}_1 \norm{g}_2$. So the operator $\lambda(f)$ which carries $g$ to $f*_hg$ belongs to ${\cal B}(L^2(H))$. The $C^*$-algebra generated by $(\lambda(f))_{f\in L^1(H)}$ in ${\cal B}(L^2(H))$ is called of $H$ and denoted by $C^*_\lambda(H)$. It is proven in [@mu1] that $C^*_\lambda(H)$ is actually a $C^*$-subalgebra of $VN(H)$. Moreover, $A(H)$ can be considered as a subalgebra of $B_\lambda(H)$ where $B_\lambda(H)$ is the dual of $C^*_\lambda(H)$. In this paper we rely on the following lemma which we present from [@ma] without its proof. \[l:A(H)-properties\][@ma Lemma 3.4]\ Let $H$ be a hypergroup, $K$ a compact subset of $H$ and $U$ an open subset of $H$ such that $K\subset U$. 
Then for each relatively compact open set $V$ such that $\overline{K *V*\check{V}} \subseteq U$, then $u_V:={h_H(V)}^{-1} 1_{K*V} *_h \tilde{1}_{V}$ belongs to $A(H) \cap C_c(H)$. Also $u_V(H)\geq 0$, $u_V|_K=1$, $\supp(u_V) \subseteq U$, and $$\norm{u_V}_{A(H)} \leq \left(\frac{ h_H(K*V)}{{h_H(V)}}\right)^{\frac{1}{2}}.$$ \[r:existence-of-the-V\] For each pair $K,U$ such that $K \subset U$, we can always find a relatively compact neighborhood $V$ of $e_H$ that satisfies the conditions in Lemma \[l:A(H)-properties\]. The existence is a result of continuity of the mapping $(x,y)\mapsto x*y$ with respect to the locally compact topology of $H\times H$ into the Michael topology on ${\textfrak C}(H)$ (see [@bl]). Since $H$ is locally compact, there exists some relatively compact open set $W$ such that $K\subseteq W \subseteq \overline{W} \subseteq U$; $K\in {\textfrak C}_{H\setminus \overline{W}}(W)$ as an open set in the Michael topology and consequently for each $x\in K$, $x*e\in {\textfrak C}_{H\setminus \overline{W}}(W)$. Since, the mapping $e \rightarrow x*e$ is continuous, there is some neighborhood $V_1^x$ of $e$ such that for each $y\in V_1^x$, $x*y\in {\textfrak C}_{H\setminus \overline{W}}(W)$ i.e. $x*y \subseteq W$ and $x*y \cap H\setminus \overline{W}=\emptyset$. Let us define $V^{(1)}=\cup_{x\in K} (V_1^x \cap \check{V}_1^x)$. Clearly, $\check{V}^{(1)}=V^{(1)}$. Moreover, $ K*V^{(1)}=\cup_{y\in V^{(1)}}\cup_{x\in K} x*y \subseteq \cup_{x\in K} x*V_1^x \subseteq W $ and $K*V^{(1)} \cap H\setminus \overline{W}=\emptyset$ since $(x*y) \cap (H\setminus \overline{W})=\emptyset$ for all $x\in K$ and $y\in V^{(1)}$. Now let us replace $K$ by the compact set $\overline{K*V^{(1)}}$. Therefore, similar to the previous argument, for some relatively compact open set $W'$ such that $\overline{K*V^{(1)}} \subseteq W'\subseteq \overline{W'} \subseteq U$, one may find some $V^{(2)}$ a neighborhood of $e$ such that $V^{(2)}=\check{V}^{(2)}$, $\overline{K*V^{(1)}}*V^{(2)} \subseteq W'$, and $( \overline{K * V^{(1)}} * V^{(2)}) \cap (H\setminus \overline{W'} )=\emptyset$. Hence, for the relatively compact open set $V:=V^{(1)}\cap V^{(2)}$, one gets that $V=\check{V}$ and $ {K*V*\check{V}} \subseteq \overline{K*V^{(1)}} * V^{(2)} \subseteq \overline{W'}$. So $\overline{K * V* \check{V}} \subseteq U$. 1.0em In [@mu1], Muruganandam showed that when $H$ is commutative, $A(H)$ can be characterized as $\{f*_h\tilde{g}:\; f,g\in L^2(H,h)\}$ and $\norm{u}_{A(H)}=\inf \norm{f}_2 \norm{g}_2$ for all $f,g\in L^2(H,h)$ such that $u=f*_h\tilde{g}$. The key point of his proof for commutative hypergroups, as it was shown in [@mu1 Proposition 4.2] and [@chil Section 2], is this fact that ${\cal F}(A(H))$, where ${\cal F}$ is the (extension of the) Fourier transform, is $L^1(S,\varpi)$ where $S$, as a subset of $\widehat{H}$, is the support of the Plancherel measure $\varpi$. A similar characterization is known for the Fourier space of compact hypergroups, [@vr]. The following implies that this fact is true for all hypergroups. \[p:Fourier-of-hypergroups\] Let $H$ be a hypergroup. Then $A(H)=\{f*\tilde{g}: f,g\in L^2(H)\}$ and $\norm{u}=\inf\{\norm{f}_2\norm{g}_2\}$ over all $f,g\in L^2(G)$ where $u=f*\tilde{g}$. This infimum is actually attained for some $f,g\in L^2(G)$. As a great reference for this important characterization for locally compact groups is the Master’s thesis of Zwarich, [@zw-the]. 
Chapter 4 in this long thesis is dedicated to this proof for locally compact groups based on an observation by Haagerup in [@haa] and the von Neumann theory developed in [@dix2].The proof, in [@zw-the], is written based on the properties of von Neumann algebras which are in as defined in [@haa], so it can be adapted easily for hypergroups as well. Before we present the proof, let us recall that every element $u\in A(H)$ acts $\sigma$-weak continuously on $VN(H)$; hence, $u$ is actually a linear functional on $VN(H)$, [@zw-the Section 3.3]. 0.75em For $f,g\in C_c(H)$, define $$\langle f,g\rangle := \int_H f(x) \overline{g(x)} dx, \ \ f^*(x):=\frac{\overline{f(\tilde{x})}}{\sqrt{\Delta(x)}},\ \ f^{\sharp}(x):=\frac{1}{\sqrt{\Delta(x)} } f(x).$$ Then $C_c(H)$ forms a as defined in [@zw-the Definition 4.3.3]. To show that, one can re-write the proof of [@zw-the Proposition 4.3.4] with appropriate modifications and applying (\[eq:modular-function\]). Obviously the Hilbert space generated by $C_c(H)$ would be $L^2(H)$. Further, the second commutant of $\lambda(C_c(H))$ is $VN(H)$, the von Neumann algebra of $H$ (see [@mu1 Remark 2.18]). Therefore, by [@zw-the Theorem 4.3.9], $VN(H)$ is in standard form. Hence, one may apply [@zw-the Theorem 4.3.16] to the elements of $A(H)$ as normal linear functionals on $VN(H)$ and conclude that $A(H)=\{f*\tilde{g}: f,g\in L^2(H)\}$ and the condition of the norm. 1.0em In [@mu1], Muruganandam calls a hypergroup $H$ a , if the Banach space $({A}(H),\norm{~\cdot~}_{{A}(H)})$ equipped with pointwise product is a Banach algebra. He studied this property for a variety of commutative hypergroups in [@mu1]. He showed that some polynomial hypergroups including Jacobi polynomial hypergroups and Chebyshev polynomial hypergroups are regular Fourier hypergroups. Furthermore, in [@mu2], he pursued this study for (which are not necessarily commutative). He showed that the , including the double coset hypergroups, are regular Fourier. For a compact group $G$, the equivalent classes of irreducible unitary representations forms a discrete commutative hypergroup denoted by $\wG$ and called , (see Subsection \[ss:Leptin-number\]). In [@ma], it was shown dual of every compact group is a regular Fourier hypergroup. Moreover, the Fourier algebra $A(\wG)$ as a Banach algebra is isometrically isomorphic to the center of the group algebra usually denoted by $ZL^1(G)$. 2.0em [Følner type conditions on Hypergroups]{}\[ss:Folner-condition-on-\^G\] Amenable locally compact groups are characterized by a variety of properties including Følner type conditions. As we mentioned before, these conditions relate the concept of “amenability" (which is an algebraic notion on the group algebra) to some structural properties of the group or semigroup. In this section, we look at a generalization of Følner type conditions over hypergroups. [Definitions and relations]{}\[ss:definition-of-Leptins\] In [@ma], the author introduced the Leptin condition for hypergroups. Here, we define more Følner type conditions for hypergroups and we study their relations. To recall, for each two subsets $A$ and $B$ of some set $X$, we denote their symmetric difference, $(A\setminus B) \cup (B\setminus A)$, by $A\triangle B$. \[d:Folner-Leptin-condition\] Let $H$ be a hypergroup and $D\geq 1$ an integer. 
We define the following properties: - [We say that $H$ satisfies the if for every compact subset $K$ of $H$ and $\epsilon>0$, there exists a measurable set $V$ in $H$ such that $0<h(V)<\infty$ and $h(K*V)/h(V) < D + \epsilon$.]{} - [ We say that $H$ satisfies the if for every compact subset $K$ of $H$ and $\epsilon>0$, there exists a measurable set $V$ in $H$ such that $0<h(V)<\infty$ and $h(x*V\triangle V)/h(V) < \epsilon$ for every $x\in K$.]{} - [ We say that $H$ satisfies the if for every compact subset $K$ of $H$ and $\epsilon>0$, there exists a measurable set $V$ in $H$ such that $0<h(V)<\infty$ and $h(K*V\triangle V)/h(V) < \epsilon$.]{} \[r:Leptin-is-1-Leptin\] If a hypergroup $H$ satisfies the $1$-Leptin condition, $H$ satisfies the as defined in [@ma Definition 4.1]. From now on, we may use the Leptin condition instead of the $1$-Leptin condition and we denote it by $(L)$. \[p:compact-hypergroups\] For every compact hypergroup $H$, $H$ satisfies all conditions $(SF)$, $(F)$, and $(L)$. The proof is a direct result of finiteness of the Haar measure on compact hypergroups, [@bl], by replacing $V=H$ for all conditions in Definition \[d:Folner-Leptin-condition\]. \[r:relatively-compact\] In Definition \[d:Folner-Leptin-condition\] of the Leptin condition, $(L_D)$, we can suppose that $V$ is compact. To show this fact suppose that $H$ satisfies the $D$-Leptin condition. For compact subset $K$ of $H$ and $\epsilon>0$, there exists a measurable set $V$ such that $h(K*V)/h(V) < D + \epsilon$. Using regularity of $h$, as a measure, for each positive integer $n$, we can find compact set $V_1\subseteq V$ such that $h(V\setminus V_1) < h(V)/n$. This implies that $0< h(V_1)$ and $ h(V)/h(V_1) < {n}/{(n-1)}$. Therefore $$\frac{h(K*V_1)}{h(V_1)} \leq \frac{h(V)}{h(V_1)} \left( \frac{h(K*V_1)}{h(V)}\right)< \frac{n}{n-1}(D+\epsilon).$$ So we can add compactness of $V$ to the definition of the Leptin condition. \[p:Strong-Folner-implies-Folner-&-Leptin\] For every hypergroup $H$, $(SF)$ implies $(L)$. For a compact set $K$ and $\epsilon>0$, let $V$ be a measurable set such that $h(K*V \triangle V) < \epsilon h(V)$. Hence $$\begin{aligned} \frac{h(K*V)}{h(V)} -1 &\leq& \frac{h(K*V) - h(V)}{h(V)}\\ &\leq& \frac{h(K*V) + h(V) -2 h((K*V) \cap V)}{h(V)}\\ &=& \frac{h((K*V)\triangle V)}{h(V)} <\epsilon .\end{aligned}$$ \[p:Strong-Folner-Folner-&-Leptin-equal\] For every discrete hypergroup $H$, $(F)$ implies $(SF)$. And consequently, $(F)$ implies $(L)$. We should just show that $(F) \Rightarrow (SF)$ the rest is obtained by Proposition \[p:Strong-Folner-implies-Folner-&-Leptin\]. Let $K$ be a compact subset of $H$. Since for discrete hypergroups, each compact set is finite, we may suppose that $K= \{x_i\}_{i=1}^n$. 
Therefore, for each $\epsilon>0$ there is a finite set $V$ such that $0<h(V)$ and $$\frac{h((x*V)\triangle V) }{h(V)}<\frac{\epsilon}{|K|}\ \ \ (x\in K).$$ So $$\begin{aligned} \frac{h( (\bigcup_{i=1}^n x_i) *V \triangle V)}{h(V)} &=& \frac{h(\bigcup_{i=1}^n ( x_i *V) \triangle V)}{h(V)}\\ &\leq & \sum_{i=1}^n \frac{h(x_i*V \triangle V)}{h(V )} \ \ \ < \epsilon.\end{aligned}$$ The inequality in the second line is a consequence of the following inclusion, valid for arbitrary sets $B_1, B_2, C$: $$((B_1\cup B_2)\triangle C) \subseteq \big(B_1 \triangle C\big) \cup \big(B_2\triangle C\big).$$ \[r:why-not- the-rest-of-implicarions\] If $H$ is a locally compact group, all the conditions $(F)$, $(SF)$, and $(L)$ are equivalent and they are equivalent to the amenability of the group $H$. If one tries to adapt the rest of the relations between $(F)$, $(SF)$, and $(L)$ from the group case, [@pi], one may notice that in almost all of the arguments, the inclusion $x(A\setminus B)\subseteq xA \setminus xB$ is crucially applied, where $A,B$ are subsets of the group $H$ and $x$ is an arbitrary element.[^1] This inclusion is not necessarily true for a general hypergroup though. In [@la4] the authors studied the notion of Følner conditions on polynomial hypergroups. To do so, summing sequences in the context of polynomial hypergroups are defined as follows. \[d:summing-sets\][@la4 Definition 2.1]\ Let $\Bbb{N}_0$ denote a polynomial hypergroup with the Haar measure $h$. A sequence $(A_n)_{n\in \Bbb{N}_0}$ where $A_n\subseteq \Bbb{N}_0$ for all $n\in \Bbb{N}_0$ is called a *summing sequence* on the polynomial hypergroup $\Bbb{N}_0$ if it satisfies - [$A_n \subseteq A_{n+1}$ for every $n\in \Bbb{N}_0$,]{} - [ $\Bbb{N}_0=\bigcup_{n\in \Bbb{N}_0} A_n$,]{} - [ $h(A_n) <\infty$ for every $n\in \Bbb{N}_0$,]{} - [$\displaystyle \lim_{n\rightarrow \infty} \frac{h((k*A_n) \Delta A_n)}{h(A_n)}=0$ for all $k\in \Bbb{N}$.]{} \[eg:polynomials\] Let $\Nat_0$ be a polynomial hypergroup which has a summing sequence $(A_n)_{n\in \Bbb{N}_0}$. Then it satisfies all of the [ Leptin]{}, [ Strong Følner]{}, and Følner conditions. To prove this, note that the existence of a summing sequence immediately implies the Følner condition; the rest follows from Proposition \[p:Strong-Folner-Folner-&-Leptin-equal\], since $\Bbb{N}_0$ is a discrete commutative hypergroup. As an example, in [@la4] it was shown that Jacobi polynomial hypergroups admit summing sequences. [$D$-Leptin condition on dual of compact groups]{}\[ss:Leptin-number\] Let $G$ be a compact group and $\widehat{G}$ the set of all equivalence classes of irreducible unitary representations of $G$. When ${\cal H}_\pi$ is the finite dimensional Hilbert space related to a representation $\pi\in\widehat{G}$, $d_\pi$ denotes the dimension of ${\cal H}_\pi$. For each pair of irreducible representations $\pi_{1},\pi_{2}\in\widehat{G}$, $\pi_1 \otimes \pi_2$ can be decomposed into elements $\pi'_1,\cdots,\pi'_n$ of $\widehat{G}$ with respective multiplicities $m_1,\cdots,m_n$, i.e. $ \pi_1\otimes \pi_2 \cong \oplus_{i=1}^n m_i \pi'_i.
$ According to this decomposition, one may define a convolution and involution on $\widehat{G}$ to $\ell^1(\widehat{G})$ by $$\label{eq:hypergroup-convolution-on-^G} \delta_{\pi_1}* \delta_{\pi_2}:=\sum_{i=1}^n \frac{m_i d_{\pi'_i}}{d_{\pi_1}d_{\pi_2}}\delta_{\pi'_i} \ \ \ \text{and}\ \ \ \tilde{\pi}=\overline{\pi}$$ for all $\pi,\pi_1,\pi_2\in \widehat{G}$ where $\overline{\pi}$ is the complex conjugate of the representation $\pi$. Then $(\widehat{G}, * ,\tilde{ }\;)$ forms a discrete commutative hypergroup such that $\pi_0$, the trivial representation of $G$, is the identity element of $\widehat{G}$ and $h(\pi)=d_\pi^2$ is the Haar measure of $\widehat{G}$. \[c:SU(2),product-of-finite-groups\][@ma Section 4]\ The hypergroups $\widehat{G}$ satisfies the Følner, strong Følner, and Leptin conditions for $G=\operatorname{SU}(2)$ or $G=\prod_{i\in\Ind}G_i$ the product equipped with product topology for $\{G_i\}_{i\in \Ind}$ a family of finite groups. In [@ma], it was implied that the duals of compact groups, as discrete commutative hypergroups, are regular Fourier hypergroups. This fact was applied to study some properties of compact groups, using the Fourier algebra of the dual of compact groups. That study mainly was based on the satisfaction of Leptin condition by the dual hypergroups. 2.0em Let $\bG$ be a connected simply connected compact real Lie group, (e.g. $\operatorname{SU}(n)$). Then, $\widehat{\bG}$, as the dual object of a compact Lie groups, forms a finitely generated hypergroup. Suppose that $F$ is a finite generator of $\widehat{\bG}$; therefore, by [@ve Theorem 2.1], there exists positive integers $0<\alpha,\beta <\infty$ such that $$\label{eq:growth-in-dual-of-Lie-groups} \alpha \leq \frac{h_{\widehat{\bG}}(F^k)}{k^{d_\bG}} \leq \beta$$ for all $k\in\Bbb{N}$ where $d_\bG$ is the dimension of the group $\bG$ as a Lie group over $\Bbb{R}$. According to the following theorem, this estimation for the growth rate of $\widehat{\bG}$ results in the satisfaction of $D$-Leptin condition for $\widehat{\bG}$. \[t:D-Leptin-of-simply-connected-Lie\] Let $\bG$ be a connected simply connected compact real Lie group. Then $\widehat{\bG}$, as a hypergroup, satisfies the $D$-Leptin condition for some $D\geq 1$. Given finite set $K\subseteq \widehat{\bG}$. Suppose that $F$ is a finite generator of $\widehat{\bG}$. For some $k\in \Bbb{N}$, $K\subseteq F^k$. Moreover, for each $\ell\in\Bbb{N}$, $F^\ell* F^k\subseteq F^{\ell+k}$. By applying (\[eq:growth-in-dual-of-Lie-groups\]), $$\begin{aligned} \limsup_{\ell\rightarrow \infty} \frac{h_{\widehat{\bG}}(K*F^\ell)}{h_{\widehat{\bG}}(F^\ell)} &\leq& \limsup_{\ell\rightarrow\infty}\frac{h_{\widehat{\bG}}(F^{\ell+k})}{h_{\widehat{\bG}}(F^\ell)} = \limsup_{\ell \rightarrow \infty} \frac{h_{{\bG}}(F^{\ell+k})}{(\ell+k)^{d_{{\bG}}}} \; \frac{\ell^{d_{{\bG}}}}{h_{\widehat{\bG}}(F^\ell)}\; \frac{(\ell+k)^{d_\bG}}{\ell^{d_\bG}}\leq \beta/\alpha.\end{aligned}$$ Therefore, $\widehat{\bG}$ satisfies the $D$-Leptin condition for some $1 \leq D <\infty$. Let $\operatorname{SU}(3)$ denote the special group of $3\times 3$ unitary matrices which is a connected simply connected compact real Lie group. Here we apply some studies on the representation theory of real connected Lie groups to find a concrete answer for $D$ for this special hypergroup. \[p:Leptin-number-of-SU(3)\] The hypergroup $\widehat{\operatorname{SU}(3)}$ satisfies the $18240$-Leptin condition. 
Let us follow [@bour9] in notations and basic facts about a compact Lie group $\bG$, and in particular $\bG=\operatorname{SU}(3)$. Let the set of all fundamental weights $\beta$ be denoted by $B$. Then we have $(\beta| {\beta'}) = 0$ for $\beta\neq \beta'$ and $ (\beta|\beta) > 0$ for all $\beta, \beta'\in B$. Taking highest weights induces an identification between the set $\widehat{\bG}$ of classes of irreducible unitary representations and the set of dominant weights $X_{++}$, which are all $( p_\beta \beta)_{\beta}$ for $p_\beta \in \Bbb{N}_0=\{0,1,2,\ldots\}$. From now on, without loss of generality, we denote the element $\pi$ of $\widehat{\bG}$ by its corresponding coefficients in $X_{++}$, that is, $\pi=( p_\beta)_{\beta}$. As mentioned in [@ve], in the case of connected simply connected compact real Lie groups, the set $F$ of representations $\delta_{\beta_0}$, where $\delta_{\beta_0}=( p_\beta)_{\beta}$ with $p_\beta=0$ for all $\beta\neq \beta_0$ and $p_{\beta_0}=1$, forms a generator of $\widehat{\bG}$. Further, one may define a mapping $\tau : \widehat{\bG}\rightarrow \Bbb{N}_0$ by $\tau( \pi)=\sum_\beta p_\beta$, such that each $\pi=( p_\beta)_{\beta}$ belongs to $F^{\tau(\pi)} \setminus F^{\tau(\pi)-1}$. The dimension of $\pi=(p_\beta)_\beta$ is given by Weyl’s formula $$d_\pi = \prod_{\alpha \in R^+} \left( 1 +\frac{\sum_{\beta} p_\beta (\beta|\alpha)}{(\rho | \alpha)}\right)$$ where $\rho$ is the sum of the fundamental dominant weights, so that $(\rho|\beta)=1$ for each $\beta$. If we restrict to the case of $\operatorname{SU}(3)$, one gets that $\widehat{\operatorname{SU}(3)}$ is nothing but the set of all $\pi=(p,q)$ where $p,q\in \Bbb{N}_0$, while $d_\pi=(p+1)(q+1)(2+p+q)/2$. Also, for the finite generator $F=\{(0,1),(1,0)\}$, one gets that $S_k:=F^k\setminus F^{k-1}$ is nothing but the set $\{(j,k-j)\}_{j=0}^k$. Hence, based on some computations, one gets that $$h(S_k)=\sum_{j=0}^k \frac{(j+1)^2 (k-j+1)^2 (k+2)^2}{4}.$$ One may use the fact that $F^k=\bigcup_{j=0}^k S_j$ to compute $$\frac{h(F^n)}{n^8}= \frac{1}{n^8}+\frac{3 n^7+60 n^6+518 n^5+2520 n^4+7547 n^3+14220 n^2+16412 n+10560}{2880 n^7}.$$ Therefore, $$\label{eq:alpha-beta-for-SU(3)} \frac{1}{960} < \frac{h(F^n)}{n^8} \leq 19.$$ Now the argument mentioned in the proof of Theorem \[t:D-Leptin-of-simply-connected-Lie\] implies that $\widehat{\operatorname{SU}(3)}$ satisfies the $18240$-Leptin condition. In [@ma-the], the author applied a study on the tensor decomposition of irreducible representations of $\operatorname{SU}(3)$, [@we], to compute the $D$-Leptin condition of the dual of ${\operatorname{SU}(3)}$. The outcome was the $3^8$-Leptin condition, which is significantly smaller than the constant found in Proposition \[p:Leptin-number-of-SU(3)\]. Still, the advantage of the proof of Proposition \[p:Leptin-number-of-SU(3)\] is that the structural details in its first half allow similar computations for other $\operatorname{SU}(n)$’s and yield upper and lower bounds $\beta$ and $\alpha$, although these computations become very long for large $n$ (as they already are for $n=3$). It would be interesting to apply this theory to obtain a formula for ${\operatorname{SU}(n)}$, or even for other connected simply connected compact real Lie groups. To do so, one may find the computations in the proof of [@ve Theorem 2.1] helpful.
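As a quick numerical sanity check of the bounds in (\[eq:alpha-beta-for-SU(3)\]) (an illustrative script, not part of the original argument), the following Python sketch evaluates $h(F^n)/n^8$ directly from the dimension formula $d_{(p,q)}=(p+1)(q+1)(p+q+2)/2$ over $F^n=\{(p,q):p+q\leq n\}$ and verifies that $1/960 < h(F^n)/n^8 \leq 19$ for a sample of values of $n$.

```python
from fractions import Fraction

# Sketch: numerical check of the growth estimate for the dual hypergroup of SU(3).
# F^n consists of the highest weights (p, q) with p + q <= n, the Haar weight of
# (p, q) is d_{(p,q)}^2 with d_{(p,q)} = (p+1)(q+1)(p+q+2)/2, and we verify that
# 1/960 < h(F^n)/n^8 <= 19 on a finite range of n (illustrative only).

def dim(p, q):
    return Fraction((p + 1) * (q + 1) * (p + q + 2), 2)

def h_ball(n):
    """Haar measure of F^n = {(p, q) : p + q <= n}."""
    return sum(dim(p, q) ** 2 for p in range(n + 1) for q in range(n + 1 - p))

for n in (1, 2, 5, 10, 50, 200):
    ratio = h_ball(n) / Fraction(n ** 8)
    assert Fraction(1, 960) < ratio <= 19
    print(n, float(ratio))
```

At $n=1$ the ratio equals $19$ exactly, while for large $n$ it decreases towards $3/2880=1/960$, which is where the constant $18240=19\times 960$ comes from.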
Note that the real dimension of the group $\operatorname{SU}(3)$ is $8$; hence, $\alpha=1/960$ and $\beta=19$ are actually the bounds which are mentioned in (\[eq:growth-in-dual-of-Lie-groups\]). Suppose that $\{G_i\}_{i\in\Ind}$ is a non-empty family of compact groups for an arbitrary indexing set $\Ind$. Let $G:=\prod_{i\in\Ind}G_i$ be the product of $\{G_i\}_{i\in \Ind}$ equipped with the product topology. Then $G$ is a compact group and, by [@he2 Theorem 27.43], $\widehat{G}$ is the discrete space of all $\pi=\otimes_{i\in\Ind}\pi_i$ [such that]{} every $\pi_i$ belongs to $\widehat{G}_i$ [and]{} $\pi_i$ is the trivial representation $1$ on $G_i$ for all but finitely many $i\in\Ind$. Moreover, for each $\pi=\otimes_{i\in\Ind}\pi_i \in \widehat{G}$, $d_\pi=\prod_{i\in\Ind} d_{\pi_i}$. \[t:Leptin-of-product-groups\] Let $G=\prod_{i\in\Ind} G_i$ for a family of compact groups $(G_i)_{i\in \Ind}$ such that for each $i\in\Ind$, $\widehat{G}_i$ satisfies the $D_i$-Leptin condition. Then, if $D:=\prod_{i\in \Ind} D_i$ exists, $\widehat{G}$ satisfies the $D$-Leptin condition. Let a compact subset $K$ of $\widehat{G}$ and $\epsilon>0$ be given. There exists some finite set $F\subseteq \Ind$ such that $K\subseteq \bigotimes_{i\in F} K_i \otimes E_F^c$, where $K_i$ is a compact subset of $\widehat{G}_i$ and $E_F^c=\bigotimes_{i\in\Ind\setminus F} \pi_0$, where the $\pi_{0}$’s are the identities of the corresponding hypergroups $\widehat{G}_i$. Since $D:=\prod_{i\in \Ind} D_i <\infty$, one may find an $\epsilon'>0$ such that $\prod_{i\in F} (D_i + \epsilon') < D +\epsilon$. Using the $D_i$-Leptin condition for each $\widehat{G}_i$, there exists some finite set $V_i$ such that $h_{\widehat{G}_i}(K_i*V_i)/ h_{\widehat{G}_i}(V_i) <D_i+ \epsilon' $. Therefore, for the finite set $V=(\bigotimes_{i\in F} V_i )\otimes E_F^c$, $$\frac{h(K*V)}{h(V)} \leq \prod_{i\in F} \frac{ h_{\widehat{G}_i}(K_i*V_i)}{h_{\widehat{G}_i}(V_i)} < \prod_{i\in F} (D_i+\epsilon') < D +\epsilon.$$ [Bounded approximate identity of Fourier algebra]{}\[ss:bai-of-A(H)-Leptin-condition\] Let $H$ be a regular Fourier hypergroup. We denote by $(B_D)$ the existence of an approximate identity of $A(H)$ whose $\norm{\cdot}_{A(H)}$-norm is bounded by some $D\geq 1$, and we call such a bounded approximate identity a $D$-bounded approximate identity. If $H$ is a locally compact group, it is known that $A(H)$ has a $1$-bounded approximate identity if and only if $H$ is amenable. Here we study the existence of a bounded approximate identity of $A(H)$ with respect to the properties of the hypergroup $H$. \[t:Leptin->bai of A(H)\] Let $H$ be a regular Fourier hypergroup which satisfies the $D$-Leptin condition, $(L_D)$, for some $D\geq 1$. Then $A(H)$ has a $D$-bounded approximate identity, $(B_D)$. Fix $\epsilon>0$. Using the $D$-Leptin condition on $H$, for every non-void compact set $K$ in $H$, we can find a measurable subset $V_K$ of $H$ with $0<h(V_K)<\infty$ such that $h(K*V_K)/h(V_K)<D^{2}(1+\epsilon)^2$. Using Lemma \[l:A(H)-properties\], for $$v_K:=\frac{1}{h(V_K)} 1_{K *V_K} *_h \tilde{1}_{V_K}$$ we have $\norm{v_K}_{A(H)} < D(1+\epsilon)$ and $v_{K}|_{K}\equiv 1$. Define for each pair $(K,\epsilon)$, $a_{\epsilon, K}=(1+\epsilon)^{-1}v_K$. We consider the net $\{a_{\epsilon,K}: K\subseteq H\ \text{compact, and $0<\epsilon<1$} \}$ in $A(H)$ where $a_{\epsilon_1,K_1} \preccurlyeq a_{\epsilon_2,K_2}$ whenever $v_{K_1}v_{K_2} = v_{K_1}$ and $\epsilon_2 < \epsilon_1$.
So $(a_{\epsilon,K})_{0<\epsilon<1, K\subseteq H}$ forms a $\norm{\cdot}_{A(H)}$-norm $D$-bounded net in $A(H) \cap C_c(H)$. Let $f\in A(H)\cap c_c(H)$ with $K=\supp f$. Then $v_K f= f$. Therefore, $(a_{\epsilon,K})_{0<\epsilon<1, K\subseteq H}$ is a $D$-bounded approximate identity of $A(H)$. Note that if $H$ is a locally compact group such that $A(H)$ has a $D$-bounded approximate identity, one may prove that $H$ satisfies the ($D$-)Leptin condition. But an argument similar to the group one cannot be applied to the hypergroup case, since for every measurable set $E\subseteq H$ and $x\in H$, $L_x1_E \neq 1_{\tilde{x}*E}$ necessarily when $H$ is a hypergroup and $1_E$ denotes the character function on $E$. It is interesting if one can find a regular Fourier hypergroup with $(B_D)$ which does not satisfy $(L_D)$. \[p:b.a.i-of-A(H)-for-strung-hypergroups\] Let $H$ be a commutative hypergroup such that $\wH$, as the dual of $H$, has hypergroup structure. Then $H$ is a regular Fourier hypergroup and $A(H)$ has a $1$-bounded approximate identity. This proposition based on this fact that $A(H)$ is isometrically isomorphic to the hypergroup algebra $L^1(\wH)$ through the Fourier transform. Also it is known that hypergroup algebras have $1$-bounded approximate identities and this finishes the proof. are one-dimensional commutative regular Fourier hypergroups which are self dual i.e. $\wH=H$. So, the Fourier algebra has a $1$-bounded approximate identity. See [@mu1] and [@bl Section 3.5.61]. \[r:b.a.i-of-A(\^G)\] Let $G$ be a compact group. Then the Fourier algebra of $\widehat{G}$, $A(\widehat{G})$, is isometrically Banach algebra isomorphic to $ZL^1(G)$, [@ma Theorem 3.7]. Also since every compact group $G$ is a SIN-group, $ZL^1(G)$ always has a $1$-bounded approximate identity. Therefore, $A(\widehat{G})$ has a $1$-bounded approximate identity. [Leptin’s Theorem for hypergroups]{}\[ss:Reiter-condition\] \[d:Reiter\][@sk p32]\ We say that $H$ satisfies $(P_r)$, if whenever $\epsilon > 0$ and a compact set $E\subseteq H$ are given, then there exists $f\in L^r(H)$, $f \geq 0$, $\norm{f}_r=1$ such that $ \norm{L_xf - f}_r <\epsilon$ for every $x\in E$. We say that $H$ satisfies the if it has property $(P_1)$. For every hypergroup $H$, $(P_1)$ is equivalent to the amenability of $H$; hence, all compact or commutative hypergroups are satisfying $(P_1)$. Although $(P_2)$ implies $(P_1)$, [@sk], $(P_2)$ is not necessarily equivalent to the amenability of the hypergroup. As a counterexample, one may consider the , see [@bl (3.5.66)] and [@sk Example 4.6]. One may note that for a commutative hypergroup $H$ with the Plancherel measure $\varpi$, $H$ satisfies $(P_2)$ if and only if the constant character $1$ belongs to $\supp(\varpi)$, [@sk]. Singh, [@singh-mem Proposition 4.4.3], showed that if a hypergroup $H$ satisfies the ($1$-)Leptin condition, it satisfies $(P_r)$ for any $r\in [1,\infty]$, for $r$ in Definition \[d:Reiter\]. In the following we crucially rely on [@sk Lemma 4.4] which proves that $H$ satisfies $ (P_2)$ if and only if there is a net $(f_\alpha)_\alpha\subseteq L^2(H)$ such that $\norm{f_\alpha}_2=1$ and $f_\alpha* \tilde{f}_\alpha$ converges to $1$ uniformly on compact subsets of $H$. Note that by this lemma, $(P_2)$ implies the existence of a net $(g_\alpha)$ (in the form of $g_\alpha:=f_\alpha*\tilde{f}_\alpha$) which belongs to $A(H)$ while $\norm{g_\alpha}_{A(H)} \leq \norm{f_\alpha}_2^2 =1$. 1.5em The following theorem resembles the Leptin theorem for regular Fourier hypergroups. 
In the proof, some techniques from the group case (see [@ru Theorem 7.1.3]) have been applied. Recall that a state on a $C^*$-algebra is a positive linear functional of norm $1$. Moreover, if $\cA$ is a von Neumann algebra with predual $\cA_*$, every state of $\cA$ can be approximated in the weak$^*$ topology by a net of states belonging to the predual. Note also that, for a hypergroup $H$, each state $u$ on $VN(H)$ which belongs to $A(H)$ is of the form $f*_h\tilde{f}$ for some $f\in L^2(H)$ such that $1=\norm{u}_{A(H)}=u(e)=\norm{f}_2^2$, by Proposition \[p:Fourier-of-hypergroups\]. \[t:bai-A(H)<=>P-2\] Let $H$ be a regular Fourier hypergroup. Then the following conditions are equivalent. - [ $A(H)$ has a $1$-bounded approximate identity.]{} - [ $A(H)$ has a $D$-bounded approximate identity for some $D\geq 1$.]{} - [ $H$ satisfies $(P_2)$.]{} $(B_D) \Rightarrow (P_2)$.\ Let $(e_\alpha)_\alpha$ be a $D$-bounded approximate identity of $A(H)$; being bounded, it has a $w^*$-cluster point $F\in VN(H)^{*}$. Note that for each $x\in H$, $\langle\lambda(x),F\rangle = \lim_\alpha \langle\lambda(x),e_\alpha\rangle = \lim_\alpha e_\alpha(x)=1$. So $F|_{L^1(H,h)}$ may be interpreted as the constant function $1$ on $H$ (where $L^1(H,h)$ is viewed as a subalgebra of $VN(H)$). Therefore, for each $f,g\in L^1(H,h)$, one gets that $\langle F, f*g\rangle = \langle F,f\rangle\; \langle F,g\rangle$. Hence $F|_{L^1(H,h)}$ is a multiplicative functional on $L^1(H,h)$. Therefore, for each $f\in L^1(H,h)$, $\langle F, {\tilde{f}}*_hf\rangle = \langle F, \tilde{f} \rangle \langle F, f\rangle = |\langle F, f \rangle |^2 \geq 0$. But $L^1(H,h)$ is dense in the $C^*$-algebra $C^*_\lambda(H)$; hence, $F|_{C^*_\lambda(H)}$ is a positive functional on $C^*_\lambda(H)$, that is, $\langle F, f*\tilde{f}\rangle \geq0$ for every $f\in C^*_\lambda(H)$. Also, as a multiplicative functional, $\norm{F|_{C^*_\lambda(H)}}=1$. But as a positive norm $1$ functional, $F|_{C^*_\lambda(H)}$ is a state. Thus, by [@ren Corollary 2.3.12], $F|_{C^*_\lambda(H)}$ is extendible to a state $E$ on $VN(H)$. Because states of $VN(H)$ which belong to $A(H)$ are weak$^*$ dense in the set of all states of $VN(H)$, we may find a net $(f_\beta)_\beta$ in $\{ f*_h {\tilde{f}}:\ f\in L^2(H,h)\}$ such that $f_\beta=g_\beta *_h {\tilde{g}_\beta} \rightarrow E$ in the weak$^*$ topology, for a net $(g_\beta)_\beta \subseteq L^2(H,h)$. Moreover, $1 = \norm{f_\beta}_{A(H)}= f_\beta(e) = g_\beta*_h {\tilde{g}_\beta}(e)=\norm{g_\beta}_2^2$. Since $F|_{C^*_\lambda(H)}= E|_{C^*_\lambda(H)}$, for each $u\in A(H)$ and $f\in L^1(H)$ (so that $uf\in L^1(H)$), we have $$\label{eq:f-beta-properties} \lim_\beta \langle u f_\beta , f\rangle =\langle u \cdot E, f\rangle = \langle F, uf\rangle = \lim_\alpha \langle e_\alpha, uf\rangle=\langle u,f\rangle.$$ Therefore, $uf_\beta \rightarrow u$ with respect to the topology $\sigma(A(H), L^1(H))$. Recall that $L^1(H)$ is dense in $C^*_\lambda(H)$ while $A(H) \subseteq B_\lambda(H)$ and $B_\lambda(H)=C^*_\lambda(H)^*$. Let us fix $u\in A(H)$. Then, for given $\epsilon>0$ and $f\in C^*_\lambda(H)$, there is a $g\in L^1(H)$ such that $\norm{ g-f}_{C^*_\lambda(H)}<\epsilon$. Also, there is some $\beta_0$ such that for each $\beta \succcurlyeq \beta_0$, $|\langle u f_\beta - u , g \rangle|<\epsilon$.
So, $$\begin{aligned} |\langle f_\beta u - u, f \rangle | & \leq & |\langle f_\beta u - u, f-g\rangle| + |\langle f_\beta u - u, g \rangle |\\ & \leq & \norm{u}_{A(H)}(\norm{f_\beta}_{A(H)}+1) \norm{f-g}_{C^*_\lambda(H)} + \epsilon < (2\norm{u}_{A(H)} + 1) \epsilon.\end{aligned}$$ Therefore, $uf_\beta \rightarrow u$ with respect to the topology $\sigma(A(H), C^*_\lambda(H))$, which corresponds to the weak topology on $B_\lambda(H)$. It is a well-known result of functional analysis that the weak closure of a convex set coincides with its norm closure, so for every finite set $\{u_1,\ldots,u_n\}\subseteq A(H)$ and every $\epsilon > 0$, there exists $\varphi_{\{u_1,\ldots, u_n\},\epsilon} = \varphi \in \operatorname{conv}\{f_\beta\}$ such that $\norm{u_i \varphi - u_i}_{A(H)}<\epsilon$ for $i = 1,\ldots,n$. Moreover, $$1=\varphi(e) \leq \norm{\varphi}_\infty \leq \norm{\varphi}_{A(H)} \leq 1.$$ Note that $\varphi$ also lies in the cone of positive functionals on $VN(H)$; therefore, $\varphi$ is actually a state and $\varphi=\psi*\tilde{\psi}$ for some $\psi \in L^2(H)$. To turn the set of all such $\varphi$'s into a net, let $I := \{(S,\epsilon): S \subseteq A(H) \ \text{is finite},\ \epsilon> 0\}$ be directed by $(S,\epsilon) \leq (S',\epsilon')$ if $S\subseteq S'$ and $\epsilon \geq \epsilon'$. This yields a net $(\varphi_\alpha)_\alpha \subseteq \operatorname{conv}\{f_\beta\}$ which is a bounded approximate identity of $A(H)$. On the other hand, for each compact set $K\subseteq H$, by Lemma \[l:A(H)-properties\], there is some $u_K\in A(H)$ such that $u_K|_K\equiv 1$. Therefore, for each $x\in K$, $$\begin{aligned} \lim_\alpha|1-\varphi_\alpha(x)| = \lim_\alpha |u_K(x)-u_K(x)\varphi_\alpha(x)| &\leq& \lim_\alpha \norm{u_K-u_K\varphi_\alpha}_{\infty} \\ &\leq& \lim_\alpha \norm{u_K-u_K\varphi_\alpha}_{A(H)} =0.\end{aligned}$$ So $\varphi_\alpha \rightarrow 1$ uniformly on compact subsets of $H$. Consequently, by [@sk Lemma 4.4], the existence of the net $(\varphi_\alpha)_\alpha$ implies $(P_2)$. $(P_2)\Rightarrow (B_1)$.\ Let $(g_\beta)_\beta$ be the net given by $(P_2)$ via [@sk Lemma 4.4], that is, $g_\beta=f_\beta*\tilde{f}_\beta$ for some $f_\beta\in L^2(H)$ with $\norm{f_\beta}_2=1$ for every $\beta$, and $g_\beta\rightarrow 1$ uniformly on compact sets. Therefore, $$1=\norm{f_\beta}_2^2= g_\beta(e) \leq \norm{g_\beta}_{\infty} \leq \norm{g_\beta}_{A(H)} \leq \norm{f_\beta}_2^2 \leq 1.$$ Also, for each $u\in A(H) \cap C_c(H)$ and $f\in L^1(H)$, $$\begin{aligned} \lim_\beta |\langle u g_\beta - u,f\rangle| &\leq& \lim_\beta \int_H |u(x)| | g_\beta(x)-1| |f(x)| dx\\ &=& \lim_\beta \int_{\supp(u)} |u(x)| |g_\beta(x) -1| |f(x)| dx=0. \end{aligned}$$ Let us fix $u\in A(H)$. For given $\epsilon>0$ and $f\in L^1(H)$, there is some $v\in A(H) \cap C_c(H)$ such that $\norm{u-v}_{A(H)}<\epsilon$ and some $\beta_0$ such that for any $\beta \succcurlyeq \beta_0$, $|\langle v g_\beta - v, f\rangle| <\epsilon$. So for any $\beta \succcurlyeq \beta_0$, $$\begin{aligned} |\langle u g_\beta - u, f\rangle| &\leq& |\langle u g_\beta - v g_\beta, f\rangle| +|\langle v g_\beta - v , f\rangle| +|\langle v - u , f\rangle| \\ &\leq & \norm{u - v}_{A(H)} \norm{ g_\beta}_{A(H)} \norm{f}_1 +\epsilon +\norm{v - u}_{A(H)} \norm{ f}_1\\ & <& \epsilon (2\norm{f}_1+1).\end{aligned}$$ Therefore, for every $u\in A(H)$, $\lim_\beta ug_\beta = u$ in the topology $\sigma(A(H), L^1(H))$.
But $A(H)\subseteq B_\lambda(H)$, and on bounded subsets of $A(H)$ this topology coincides with the weak topology on $B_\lambda(H)$, i.e., $\sigma(B_\lambda(H), C^*_\lambda(H))$. So, similarly to the previous part, there is a net $(e_\alpha)_\alpha \subset \operatorname{conv}\{g_\beta\}_\beta$ such that $$\lim_\alpha \norm{ue_\alpha - u}_{A(H)}=0$$ for every $u\in A(H)$. Also note that for each $\alpha$, $$1=e_\alpha(e) \leq \norm{e_\alpha}_{\infty} \leq \norm{e_\alpha}_{A(H)}\leq 1.$$ $(B_1)\Rightarrow (B_D)$ is trivial. \[r:bai-of-A(G)\] Let $G$ be a locally compact group. Then $G$ satisfies the $D$-Leptin condition for each $D>1$ if and only if it satisfies the Leptin condition. To observe this fact, note that the existence of a bounded approximate identity for $A(G)$ is equivalent to the group $G$ satisfying the Leptin condition, [@ru Theorem 7.1.3]. Most of the commutative regular Fourier hypergroup examples in [@mu1] are hypergroups for which the support of the Plancherel measure contains the trivial character; hence, they satisfy $(P_2)$. Thus their Fourier algebras have $1$-bounded approximate identities. Let $G$ be a locally compact group. Ultraspherical hypergroups are defined in [@mu2] using a linear map $\pi : C_c(G) \rightarrow C_c(G)$ satisfying certain conditions. This class of hypergroups includes the hypergroup structure defined on the double cosets of locally compact groups with respect to a compact subgroup. As was shown in [@mu2], every ultraspherical hypergroup is a regular Fourier hypergroup and its Fourier algebra is isometrically isomorphic to the subalgebra $A_\pi(G):=\{f\in A(G): f\circ\pi=f\}$ of $A(G)$. Let $H$ be an ultraspherical hypergroup generated by a locally compact group $G$. The following corollary is an application of Theorem \[t:bai-A(H)<=>P-2\] to this class of hypergroups. Recall that, prior to this result, little was known about amenability notions of double coset hypergroups. \[c:ultraspherical-hypergroups\] Let $H$ be an ultraspherical hypergroup admitted by an amenable locally compact group $G$. Then $H$ satisfies $(P_2)$. By [@kan14 Lemma 3.7], the amenability of $G$ implies the existence of a bounded approximate identity for $A(H)$. But, as we saw before, $H$ is a regular Fourier hypergroup, and by Theorem \[t:bai-A(H)<=>P-2\] the existence of a bounded approximate identity for $A(H)$ is equivalent to $(P_2)$. Let us summarize the results of this section as well as the previous one in the following theorem. \[t:summary\] Let $H$ be a regular Fourier hypergroup and consider the following conditions. 1. [$H$ satisfies the strong Følner condition.]{}\[Strong-Folner\] 2. [$H$ satisfies the $D$-Leptin condition for some $D\geq 1$.]{}\[Leptin\] 3. [ $A(H)$ has a $D$-bounded approximate identity for some $D\geq 1$.]{}\[A(H)-bai\] 4. [ $H$ satisfies $(P_2)$.]{}\[P-2\] 5. [$H$ satisfies the Reiter condition.]{}\[P-1\] 6. [ $L^1(H)$ is an amenable Banach algebra.]{}\[L\^1-amen\] Then $$\xymatrix{ {(SF)} \ar@{=>}[r] & {(L_1)} \ar@{=>}[d] \ar@{=>}[rr] & & (P_2) \ar@{=>}[r]_{\nLeftarrow} & (P_1) & (\AM) \ar@{=>}^{\nRightarrow}[l] \\ & {(L_D)} \ar@{=>}[r] & {(B_D)} \ar@{<=>}[r] & (B_1)\ar@{<=>}[u] & & }$$ Recall that the implication $(L_1)\Rightarrow (P_2)$ is due to [@singh-mem Proposition 4.4.3], as mentioned before. [Bounded approximate identities in ideals of $A(H)$]{}\[ss:bai-of-ideals\] For each set $E\subset H$, let us define $I_E:=\{ u\in A(H):\ u(x)=0\ \forall x\in E\}$, which is an ideal of $A(H)$ when $H$ is a regular Fourier hypergroup.
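For orientation, here is a minimal instance of this definition (an illustration only, not used later). If $E=\{x\}$ is a singleton, then $I_{\{x\}}$ is precisely the kernel of the evaluation functional $u\mapsto u(x)$, which is a bounded multiplicative functional on $A(H)$ (pointwise multiplication gives multiplicativity, and $|u(x)|\leq \norm{u}_\infty\leq \norm{u}_{A(H)}$ gives boundedness); hence $I_{\{x\}}$ is a closed ideal of codimension one in $A(H)$. The observations below, in particular (iv), examine when such ideals admit bounded approximate identities.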
Regarding bounded approximate identities of the ideals $I_E$, one may make the following observations. Note that some of these observations were proved in [@singh-segal; @singh-Dtikin] for commutative regular Fourier hypergroups. - [Let $H$ be a discrete regular Fourier hypergroup. Then every finite dimensional ideal has an identity. To see this, note that for a finite dimensional ideal $I$, there is a finite set $F\subseteq H$ such that $I=\operatorname{span}\{\delta_x\}_{x\in F}$. Hence, $u_F:=\sum_{x\in F}\delta_x$ is the identity of $I$.]{} - [ For a discrete regular Fourier hypergroup satisfying $(P_2)$ (or, equivalently, such that $A(H)$ has a bounded approximate identity), every ideal of finite co-dimension has a bounded approximate identity. To see this, assume that $I$ is of finite co-dimension. Therefore, $I=I_F$ for some finite set $F\subseteq H$. Suppose that $(u_\alpha)_\alpha$ is a bounded approximate identity of $A(H)$. For each $\alpha$, define $v_\alpha:=u_\alpha - \sum_{x\in F} u_\alpha(x) \delta_x\in I$. Hence, $(v_\alpha)_\alpha$ is an approximate identity of $I$ while for each $\alpha$, $$\norm{v_\alpha}_{A(H)} \leq \norm{u_\alpha}_{A(H)} + \sum_{x\in F} \norm{u_\alpha}_\infty \norm{\delta_x}_{A(H)}.$$ Note that for each $x\in H$, $\norm{\delta_x}_{A(H)}\leq \norm{\delta_x}_2=h(x)^{1/2}$. So if $\sup_\alpha \norm{u_\alpha}_{A(H)} =1$, $$\norm{v_\alpha}_{A(H)} \leq 1+ \sum_{x\in F} h(x)^{\frac{1}{2}}.$$]{} - [If $H$ is a regular Fourier hypergroup (not necessarily discrete) and $K$ is a compact open sub-hypergroup of $H$, then $I_{H\setminus K}$ has the identity $1_K$. This is a direct consequence of Lemma \[l:A(H)-properties\], by which $1_K= 1_{K}*_h1_K \in A(H)$, and this function is constantly $1$ on $K$. ]{} - [ Let $H$ be a regular Fourier hypergroup satisfying $(P_2)$ and let $(u_\alpha)_\alpha$ be a $1$-bounded approximate identity of $A(H)$. Let us define $\mathcal{G}(H):=\{x\in H:\ x*\tilde{x}=e\}$, which is the maximum subgroup of the hypergroup $H$. Then $I_{\{x\}}$, for each $x\in \mathcal{G}(H)$, has a bounded approximate identity. To generate a bounded approximate identity of $I_{\{x\}}$, for each $V$ which belongs to $\mathcal{N}_e$, the neighbourhoods of $e$ directed inversely by inclusion, define $\varepsilon_V=1_{xV}*_h \tilde{1}_V\in A(H)$ (see Lemma \[l:A(H)-properties\]). So $\varepsilon_V(x)=1$, and for each $y\neq x$ there is some $V_0\in \mathcal{N}_e$ such that for every $V\subseteq V_0$, $y$ does not belong to the closure of $x*V*\tilde{V}=\supp \varepsilon_V$, while $\norm{\varepsilon_V}_{A(H)}\leq 1$ for each $V$. So for each $V\in \mathcal{N}_e$ and $\alpha$ define $e_{\alpha, V}:=u_\alpha - u_\alpha(x) \varepsilon_V\in I_{\{x\}}$. Now one easily sees that the net $(e_{\alpha, V})_{\alpha, V\in \mathcal{N}_e}$ is a $2$-bounded approximate identity of $I_{\{x\}}$.]{} Note that in observation (iv), one cannot immediately prove something similar for every $x\in H$, since for $\varepsilon_{x,V}=1_{xV}*_h \tilde{1}_{V}$, $$\norm{\varepsilon_{x,V}}_{A(H)} \leq \left(\frac{\lambda_H(x*V)}{\lambda_H(V)}\right)^{\frac{1}{2}}.$$ But for a general $x\in H$, $\lambda_H(x*V)\geq \lambda_H(V)$, while for $x\in \mathcal{G}(H)$ one gets equality. Therefore, although the reasoning of observation (iv) yields the existence of an approximate identity for every $I_{\{x\}}$, it does not guarantee that the approximate identity is bounded unless $x\in \mathcal{G}(H)$. A complex-valued function $\phi$ on $H$ is called a multiplier of $A(H)$ if $\phi u$ lies in $A(H)$ whenever $u$ belongs to $A(H)$.
Let us denote by $MA(H)$ the space of all multipliers of $A(H)$, as defined and studied in [@mu1]. Then [@mu1 Proposition 3.2] proves that every $\phi \in MA(H)$ is continuous. Also, clearly $MA(H)$ contains the constant functions. Moreover, $MA(H)$ is a Banach algebra of functions on $H$ when equipped with the operator norm of $\mathcal{B}(A(H), A(H))$. The following proposition is a hypergroup version of [@br92 Lemma 3.9]. \[p:bai-of-ideals\] Let $H$ be a regular Fourier hypergroup satisfying $(P_2)$, and let $A$ and $B$ be two disjoint subsets of $H$ for which there is some $u\in MA(H)$ such that $u|_A\equiv 1$ and $u|_B\equiv 0$. Then $I_{A\cup B}$ has a bounded approximate identity if and only if $I_A$ and $I_B$ have bounded approximate identities. Let $(u_\alpha)_\alpha$ be a bounded approximate identity of $I_{A\cup B}$ and let $(v_\beta)_\beta$ be a bounded approximate identity of $A(H)$, which exists by Theorem \[t:bai-A(H)<=>P-2\]. Then $\left( u_\alpha u+(1-u)v_\beta\right)_{\alpha,\beta}$ is a bounded approximate identity of $I_A$. Similarly, $\left( u_\alpha(1 - u)+ u v_\beta\right)_{\alpha,\beta}$ is a bounded approximate identity of $I_B$. The converse is proved by multiplying the bounded approximate identities of $I_A$ and $I_B$, and it is independent of the existence of a bounded approximate identity of $A(H)$ as well as of the existence of such a $u$ in $MA(H)$. [Amenability of hypergroup algebras]{}\[s:AM-L1(H)\] In this section, $H$ is always a discrete hypergroup with a fixed Haar measure $h$. Let us first recall that $L^1(H,h)$ is a subalgebra of $\ell^1(H)$. Also, they are isometrically Banach algebra isomorphic through the mapping $\iota: L^1(H,h) \rightarrow \ell^1(H)$ given by $\delta_x \mapsto h(x)\delta_x$. Therefore, for ${\cal F}:L^1(H,h) \rightarrow C(\wH)$ and ${\cal FS}:\ell^1(H)\rightarrow C(\wH)$, the [Fourier]{} and [Fourier-Stieltjes transforms]{}, respectively (see Section \[s:hypergroups\]), ${\cal F}(\delta_x)= h(x) {\cal FS}(\delta_x)$ for each $x \in H$. Hence ${\cal FS}(\iota(f))={\cal F}(f)$ for every $f\in L^1(H,h)$. Since the same holds for the hypergroup $H\times H$, one may consider the corresponding Fourier and Fourier-Stieltjes transforms, denoted by ${\cal F}_2$ and ${\cal FS}_2$ respectively. Let us recall that $L^1(H\times H, h\times h)$ is isometrically Banach algebra isomorphic to $L^1(H,h)\otimes_\gamma L^1( H,h)$, where $\otimes_\gamma$ denotes the projective tensor product. Therefore, ${\cal F}_2(f\otimes g)(\alpha,\beta)={\cal F}(f)(\alpha) {\cal F}(g)(\beta)$ for every $\alpha,\beta \in \wH$. A similar result holds for $\mathcal{FS}_2$. To prove the main theorem of this section we need the following lemma. \[l:cc-separates-characters\] Let $H$ be a discrete commutative hypergroup. If $L^1(H,h)$ is amenable then there is a constant $\epsilon>0$ such that for every $\alpha, \beta \in \wH$ with $\alpha\neq\beta$, there is a function $g\in c_c(H)$ such that $|{\cal F}(g)(\alpha)-{\cal F}(g)(\beta)|>\epsilon$. If $L^1(H,h)$ is amenable then, by [@go], one can separate the elements of the Gelfand spectrum by elements of $L^1(H,h)$: there is a constant $\varepsilon>0$ such that for every $\alpha, \beta \in \wH$ with $\alpha \neq \beta$, there exists some $f(=f_{\alpha,\beta})\in L^1(H,h)$ such that $|{\cal F}(f)(\alpha)- {\cal F}(f)(\beta)|>\varepsilon$. Since $c_c(H)$ is dense in $L^1(H,h)$, there is some $g\in c_c(H)$ such that $\norm{g-f}_1<\varepsilon/4$.
Therefore, $$\begin{aligned} \frac{\varepsilon}{2} &\geq& |\left({\cal F}(f)(\alpha) - {\cal F}(g)(\alpha)\right)-\left({\cal F}(f)(\beta) - {\cal F}(g)(\beta)\right)| \\ &\geq& |{\cal F}(f)(\alpha) -{\cal F}(f)(\beta)| - | {\cal F}(g)(\alpha)- {\cal F}(g)(\beta)| \\ &\geq& \varepsilon -| {\cal F}(g)(\alpha)- {\cal F}(g)(\beta)|.\end{aligned}$$ This proves the lemma with $\epsilon=\varepsilon/2$. The following theorem is the main theorem of this section. The whole idea behind the proof comes from a study of weighted group algebras, [@bade]. Due to the similarity between the Haar measure of discrete hypergroups and weights on discrete groups, Lasser [@la2 Theorem 3] applied a similar argument to prove the following result for polynomial hypergroups. Here we prove Lasser's result for discrete commutative hypergroups satisfying $(P_2)$. Let us recall that a commutative hypergroup $H$ with Plancherel measure $\varpi$ satisfies $(P_2)$ if and only if the constant character $1$ belongs to $\supp(\varpi)$, [@sk]. \[t:non-amenability-of-hypergroup-algebra\] Let $H$ be an infinite discrete commutative hypergroup which satisfies $(P_2)$. If $L^1(H)$ is amenable then there is some $M$ such that $\{x: h(x)\leq M\}$ is infinite. Let us denote by $c_0(H\times H)$ the Banach subspace of $\ell^\infty(H\times H)$ of functions vanishing at infinity. Indeed, $c_0(H\times H)$ forms an $L^1(H,h)$-bimodule via the actions $f\cdot \phi :=f\otimes \delta_{e} * \phi$ and $\phi \cdot f := \delta_{e}\otimes f *\phi$ for every $f\in L^1(H,h)$ and $\phi \in c_0(H\times H)$, where $e$ is the trivial element of $H$. Proposition \[p:L\^1-C\_0\] implies that the outcome of these actions again belongs to $c_0(H\times H)$. By the Riesz representation theorem, the dual of $c_0(H\times H)$ is nothing but $\ell^1(H\times H)$. But recall that $L^1(H\times H, h\times h)$ is isomorphic to $\ell^1(H\times H)$ through the mapping $\delta_{(x,y)} \mapsto h(x)h(y)\delta_{(x,y)}$. Therefore, for every $\phi \in c_0(H\times H)$ and $\Phi \in L^1(H\times H, h\times h)$, $$\langle \Phi,\phi\rangle = \sum_{x,y\in H} \phi(x,y) \Phi(x,y) h(x) h(y).$$ Now one may consider $L^1(H\times H,h\times h)$ as a dual Banach $L^1(H,h)$-bimodule. Indeed, for $f\in L^1(H,h)$, $\phi\in c_0(H\times H)$, and $\Phi\in L^1(H\times H,h\times h)$, $$\begin{aligned} \langle \phi, \Phi \cdot f\rangle &=& \langle f \cdot \phi, \Phi\rangle\\ &=& \sum_{x,y\in H} \sum_{z\in H} f(z) \phi(\delta_{\tilde{z}}*\delta_{x}, y) h(z) \Phi(x,y) h(x)h(y)\\ &=& \sum_{x, y \in H} \sum_{z\in H} f(z) \Phi(\delta_{{z}}*\delta_{x},y) h(z) \phi({x}, y) h(x)h(y)\\ &=& \sum_{x,y \in H} \sum_{z\in H} \tilde{f}(z) \Phi(\delta_{\tilde{z}}*\delta_{x},y) h(z) \phi(x, y) h(x)h(y)\\ &=& \langle\phi, \tilde{f}\otimes \delta_{e}*\Phi\rangle,\end{aligned}$$ so that $\Phi\cdot f=\tilde{f}\otimes \delta_{e}*\Phi$. Similarly, $f\cdot \Phi =\delta_{e}\otimes \tilde{f}* \Phi$. Toward a contradiction, assume that $h(x) \rightarrow \infty$ and $L^1(H,h)$ is amenable. Let $\phi_0\in c_0(H\times H)$ be defined by $\phi_0(x,y)=h(x)^{-1}h(y)^{-1}$ (which indeed vanishes at infinity since $h(x)\rightarrow\infty$). Then $\cX:=\ker(\phi_0)$ is the weak$^*$ closed subspace of $\ell^1(H\times H)$ consisting of all $\Phi$ such that $\sum_{x,y} \Phi(x,y)=0$. In particular, for each $\Phi \in L^1(H\times H, h\times h) \subseteq \ell^1(H\times H)$, ${\cal FS}_2(\Phi)(1,1)= \sum_{x,y} \Phi(x,y) =0$ if $\Phi\in \ker(\phi_0)$. Also, for each $f\in L^1(H,h)$, ${\cal FS}_2(f\cdot \Phi)(1,1)={\cal FS}(f)(1)\, {\cal FS}_2(\Phi)(1,1)=0$ and similarly, ${\cal FS}_2(\Phi\cdot f)(1,1)=0$. Hence $\cX$ is a sub-bimodule, and since it is weak$^*$ closed, $\cX$ forms a dual $L^1(H,h)$-bimodule, see [@bade Proposition 1.3].
Let us define a mapping $D:L^1(H,h)\rightarrow \cX$ by $Df:=f\otimes \delta_{e} - \delta_{e}\otimes f$. It is clear that ${\cal FS}_2(Df)(1,1)=0$ and that $D(f)= f\cdot \delta_{(e,e)} - \delta_{(e,e)} \cdot f$ for every $f\in L^1(H,h)$; hence $D$ is a derivation. (Note that, although $D$ is given by the above formula involving $\delta_{(e,e)}$, it is not necessarily an inner derivation into $\cX$, since $\delta_{(e,e)}$ does not belong to $\cX$.) If $L^1(H,h)$ is amenable, then there is some $\Phi \in \cX$ such that $Df=f\cdot \Phi - \Phi\cdot f$ for every $f\in L^1(H,h)$. Also, by Lemma \[l:cc-separates-characters\], there exists some $\epsilon>0$ such that for every $\alpha,\beta \in \wH$ with $\alpha \neq \beta$ there is some $f(=f_{(\alpha, \beta)}) \in c_c(H)$ such that $|{\cal F}(f)(\alpha)-{\cal F}(f)(\beta)|>\epsilon$. Therefore, for $g=\iota(f)\in c_c(H)$ we have $$\begin{aligned} 0\neq {\cal F}(f)(\alpha)-{\cal F}(f)(\beta) &=& {\cal FS}(g)(\alpha) - {\cal FS}(g)(\beta) \\ &=& {\cal FS}_2(Dg){(\alpha, \beta)} \\ &=& {\cal FS}(g)(\alpha) {\cal FS}_2(\Phi){(\alpha, \beta)} -{\cal FS}(g)(\beta) {\cal FS}_2(\Phi){(\alpha, \beta)} \\ &=& \left({\cal FS}(g)(\alpha)-{\cal FS}(g)(\beta)\right) \ {\cal FS}_2(\Phi){(\alpha, \beta)} \\ &=& \left({\cal F}(f)(\alpha)-{\cal F}(f)(\beta)\right) {\cal FS}_2(\Phi){(\alpha, \beta)} .\end{aligned}$$ Therefore, ${\cal FS}_2(\Phi)(\alpha,\beta)=1$ for all $\alpha\neq \beta$, while ${\cal FS}_2(\Phi)(1,1)=0$. The continuity of ${\cal FS}_2(\Phi)$ implies that $(1,1)\in \wH\times\wH$ is an isolated point. Since $H$, and consequently $H\times H$, satisfies $(P_2)$, $(1,1) \in \supp(\varpi)$; therefore, $\varpi(1,1)>0$ (an isolated point of the support has positive mass), where $\varpi$ denotes the Plancherel measure on $\wH\times\wH$. So, $0\neq \delta_{(1,1)}\in L^2(\wH\times\wH, \varpi)$. But by the Plancherel theorem for commutative hypergroups (see [@bl Theorem 2.2.2]), ${\cal F}^{-1}_2(\delta_{(1,1)})\neq 0$ belongs to $L^2(H\times H,h\times h)$. Also, based on the definition of the Fourier inverse, [@bl Definition 2.2.30], $$\Psi(x,y):={\cal F}^{-1}_2(\delta_{(1,1)})(x,y)=\sum_{\alpha, \beta \in \wH} \delta_{(1,1)}(\alpha,\beta) \overline{\alpha(x)\beta(y)} \varpi(\alpha,\beta)= \varpi(1,1).$$ Therefore, the constant function $\Psi\equiv \varpi(1,1)$ belongs to $L^2(H\times H, h\times h)$, which contradicts our assumption that $h(x)\rightarrow\infty$. In the remainder of this section, we prove the amenability of the hypergroup algebra of the multivariable Chebychev polynomial hypergroups. This hypergroup structure is defined on $\Nat_0^d$ for some integer $d\geq 1$. Let us recall that the convolution for any $(n_1,\ldots,n_d)$ and $(m_1,\ldots, m_d)$ in $\Nat_0^d$ is defined by $$\label{eq:Chebychev} \delta_{(n_1,\ldots,n_d)}*\delta_{(m_1,\ldots,m_d)}=\frac{1}{2^d}\sum \delta_{(|\pm n_1\pm m_1|,\ldots, |\pm n_d\pm m_d|)}$$ where the sum is taken over all $2^d$ possibilities of $\pm n_i\pm m_i$ for $1\leq i \leq d$. For $d=1$, this is the Chebychev polynomial hypergroup on $\Nat_0$, whose convolution reads $\delta_n*\delta_m=\frac{1}{2}\left(\delta_{|n-m|}+\delta_{n+m}\right)$, and the amenability of its hypergroup algebra is proved in [@la2]. In his proof, Lasser constructs a bounded approximate diagonal of the hypergroup algebra. The following proof, though, applies some results on amenable algebras and an observation about the hypergroup algebras of this class of polynomial hypergroups to generalize Lasser's result. The author appreciates Yemon Choi and Nico Spronk's help with the following proposition. The hypergroup algebra of the multivariable Chebychev polynomial hypergroups is amenable. Let $\Bbb{T}$ denote the torus group and let $\Bbb{F}_2=\{e,\alpha\}$ be the order-two group of automorphisms of $\Bbb{T}$, where $\alpha(\theta)=\theta^{-1}$.
Hence, $\Bbb{F}_2^d$ is a subgroup of the group of automorphisms of $\Bbb{T}^d$ for any integer $d\geq 1$. Let us recall that the dual group of $\Bbb{T}^d$ is nothing but $\Bbb{Z}^d$, and that $A(\Bbb{T}^d)$, the Fourier algebra of $\Bbb{T}^d$, is isometrically Banach algebra isomorphic to $\ell^1(\Bbb{Z}^d)$ and therefore is amenable. Let $\chi_{(n_1,\ldots, n_d)}$ be the Fourier transform of $\delta_{(n_1,\ldots, n_d)}\in \ell^1(\Bbb{Z}^d)$. One may consider $Z_{\Bbb{F}^d_2}A(\Bbb{T}^d)$, the subalgebra of $A(\Bbb{T}^d)$ consisting of all functions which are invariant with respect to the group $\Bbb{F}_2^d$, which acts on the algebra $A(\Bbb{T}^d)$ as a finite group of automorphisms. A simple observation shows that this algebra is generated by the functions $\psi_{(n_1,n_2,\ldots, n_d)}:=\sum \chi_{(\pm n_1, \ldots, \pm n_d)}$, for all $(n_1,\ldots, n_d)\in \Bbb{N}_0^d$, where the sum is taken over all $2^d$ possibilities of signs $\pm n_i$ for $1\leq i \leq d$. Since $\norm{\psi_{(n_1,\ldots,n_d)}}_{A(\Bbb{T}^d)}=2^d$ for every $(n_1,\ldots, n_d)$ and regarding the convolution (\[eq:Chebychev\]), one can show that $Z_{\Bbb{F}^d_2}A(\Bbb{T}^d)$ is isometrically isomorphic to the $d$-variable Chebychev polynomial hypergroup algebra. Note that, since $\Bbb{T}^d$ is an abelian group, the Fourier algebra of $\Bbb{T}^d$ is amenable. Also, by a result of Kepert, in [@ke], for every finite group of automorphisms of an amenable algebra, the subalgebra of all invariant elements is also amenable. Since $Z_{\Bbb{F}^d_2}A(\Bbb{T}^d)$ is such a subalgebra of the amenable algebra $A(\Bbb{T}^d)$, it is amenable. Note that each $d$-variable Chebychev polynomial hypergroup $\Bbb{N}_0^d$ satisfies $(P_2)$, [@la-t], and its Haar measure is constantly $2^d$. Comparing this observation and Theorem \[t:non-amenability-of-hypergroup-algebra\], one may conjecture that, for this family of hypergroups, amenability of the hypergroup algebra should be equivalent to boundedness of the Haar measure. In a subsequent work, we study this conjecture for duals of hypergroups. [Applications to compact and discrete groups]{}\[s:Applications\] [Approximate amenability of proper Segal algebras]{}\[ss:AA-Segals\] In [@ma], it was shown that every proper Segal algebra on a compact group $G$ is not approximately amenable if $\widehat{G}$ satisfies the Leptin condition. The proof of [@ma Theorem 5.3] is also correct for all proper Segal algebras on a compact group $G$ when $\widehat{G}$ satisfies the $D$-Leptin condition for some $D\geq 1$. Basically, the $D$-Leptin condition allows us to generate a norm-bounded approximate identity for the Fourier algebra of $\widehat{G}$ which satisfies some extra conditions. Here we omit the proof, as it is identical to the one in [@ma]. \[t:Segal-of-S\^1(G)-G-Leptin\] Let $G$ be a compact group such that $\wG$ satisfies the $D$-Leptin condition for some $D\geq 1$. Then no proper Segal algebra of $G$ is approximately amenable. \[c:Segal-of-S\^1(SU(2))\] No proper Segal algebra on a connected, simply connected, compact real Lie group is approximately amenable. [Amenability of $ZA(G)$ for compact groups]{}\[ss:AM-ZA(G)\] It is known that every compact group is a regular Fourier hypergroup, [@he2]. Let us denote by $ZA(G)$ the subspace of $A(G)$ of all functions $f$ which are constant on every conjugacy class; that is, $$\label{eq:ZA(G)} ZA(G)=\{ f\in A(G):\ f(yxy^{-1})=f(x)\ \forall x,y\in G\},$$ which forms a closed subspace of $A(G)$.
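For a concrete illustration, take $G=\operatorname{SU}(2)$ (only standard facts about $\operatorname{SU}(2)$ are used here, and this example is not needed in the sequel). Two elements of $\operatorname{SU}(2)$ are conjugate if and only if they have the same trace, so $ZA(\operatorname{SU}(2))$ consists exactly of those $f\in A(\operatorname{SU}(2))$ for which $f(x)$ depends only on $\operatorname{tr}(x)$. In particular, every irreducible character $\chi_\pi$, $\pi\in\widehat{\operatorname{SU}(2)}$, being a finite sum of coefficient functions of the regular representation and a class function, belongs to $ZA(\operatorname{SU}(2))$.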
\[t:AM-of-ZA(G))\] Let $G$ be a non-discrete compact group such that $\{\pi\in \wG: d_\pi = n\}$ is finite for each positive integer $n$. Then $ZA(G)$ is not amenable. In [@ma], it was proved that $ZA(G)$, as a Banach algebra, is isometrically isomorphic to the hypergroup algebra of $\wG$, where $h(\pi)=d_\pi^2$ for every $\pi \in \wG$. Moreover, $\wG$ is a regular Fourier hypergroup and its Fourier algebra is isometrically isomorphic to $ZL^1(G)$, the center of the group algebra of $G$. But, since every compact group is a SIN group, $ZL^1(G)$ has a $1$-bounded approximate identity. Hence, $A(\wG)$ has a bounded approximate identity; therefore, by Theorem \[t:bai-A(H)<=>P-2\], $\wG$ satisfies $(P_2)$. Now one can apply Theorem \[t:non-amenability-of-hypergroup-algebra\] to the hypergroup $\wG$ to finish the proof. One may compare Theorem \[t:AM-of-ZA(G))\] to [@jo1 Theorem 6.1], where Johnson proved a similar result for the amenability of $A(G)$. In his proof, though, he used a property of a subalgebra of $A(G)$ denoted by $A_\gamma(G)$. The author's computations suggest that such a procedure fails for hypergroups. One may see [@nico-ser] for a survey on amenability notions of $A(G)$. The condition of Theorem \[t:AM-of-ZA(G))\] is far from being necessary for the non-amenability of $ZA(G)$; Johnson, in [@jo1], made a similar remark for $A(G)$. For example, let $G=\Bbb{T}\times \operatorname{SU}(2)$. One can show that $ZA(\operatorname{SU}(2))$ is not even weakly amenable because it admits a non-zero bounded point derivation, say $D_\theta$. Therefore, $D_\theta\otimes \varepsilon_e$ forms a symmetric non-zero bounded derivation on $ZA(\operatorname{SU}(2)) \otimes_\gamma A(\Bbb{T})$, where $\varepsilon_e(g):=g(e)$ for $e$ the identity of the group $\Bbb{T}$; hence, $ZA(\operatorname{SU}(2)\times \Bbb{T})\cong ZA(\operatorname{SU}(2)) \otimes_\gamma A(\Bbb{T})$ is not weakly amenable. On the other hand, for each $n$ there are infinitely many $\pi \in \wG$ such that $d_\pi =n$. The details of this remark will appear in a subsequent paper which is currently in preparation. Let $G$ be a compact group such that $d_\pi \rightarrow \infty$. Then $G$ is called a tall group. Some properties of tall groups, especially profinite tall groups, have been studied in [@tall1; @tall2; @tall3]. \[eg:amenability-of-Lie-groups\] Let us use the notations and facts mentioned in the proof of Proposition \[p:Leptin-number-of-SU(3)\]. Based on Weyl's formula and applying a rough estimate, one can easily show that $\tau(\pi) \leq |B|d_\pi$. Hence, $\{d_\pi\}_{\pi \in \widehat{\bG}}$ cannot be bounded when $B$ is finite. Therefore, $ZA(\bG)$ is not amenable. This class of compact Lie groups includes the groups $\operatorname{SU}(n)$ for $n\geq 2$. [Amenability of $\Zlg$ for FC groups]{}\[ss:amenability-of-zl1(G)\] In this subsection, we are interested in applying the hypergroup tools that we have developed before to study the amenability of $\Zlg$ for discrete FC groups. In this section, for a locally compact group $G$, $\Aut(G)$ is the group of all bicontinuous automorphisms of $G$. Recall that $G$ is a group of the class studied in [@mosak1] provided $B$ is a subgroup of $\Aut(G)$ which has compact closure in $\Aut(G)$. Let us assume that $B$ always includes the inner automorphisms. With a natural operation, the set $\conjB$ of $B$-orbits $C_x^B:=\{\beta(x): \beta \in B\}$, $x \in G$, is a commutative hypergroup, see [@je 8.1]. When $B$ collapses to the subgroup of all inner automorphisms, $\conjB$ is simply denoted by $\conj$.
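Before proceeding, here is an elementary illustration of the hypergroup $\conj$ (a toy example, included only to fix ideas). Take the finite (hence FC) group $G=S_3$, with conjugacy classes $C_e=\{e\}$, $C_\tau$ (the three transpositions) and $C_\sigma$ (the two $3$-cycles). The hypergroup convolution of two classes is obtained by multiplying the uniform probability measures carried by them and decomposing the result over classes; for instance, among the nine products of two transpositions, three equal $e$ and six are $3$-cycles, so $$\delta_{C_\tau}*\delta_{C_\tau}=\tfrac{1}{3}\,\delta_{C_e}+\tfrac{2}{3}\,\delta_{C_\sigma}.$$ The Haar measure $h(C)=|C|$ appearing below takes the values $h(C_e)=1$, $h(C_\tau)=3$ and $h(C_\sigma)=2$.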
Here, we rely on some results of [@mosak1] regarding this class of groups. Recall that for a locally compact group $G$, $B$, as a subgroup of $\Aut(G)$, has compact closure in $\Aut(G)$ if and only if $G$ is a $[SIN]$ group and $G$ is $[FC]^-$ (that is, the conjugacy classes of $G$ have compact closure). The two concepts $[FC]^-$ and FC coincide for discrete groups. Let $Z_BL^p(G)$ be as defined by Mosak in [@mosak1] for $p\in [1,\infty)$. It can be shown that the canonical map $\iota:Z_BL^p(G) \rightarrow L^p(\conjB)$, where $\iota(f)(C_x^B):=f(x)$, is an isometric Banach space isomorphism. Moreover, if $p=1$, the mapping $\iota$ is also a Banach algebra isomorphism. On the other hand, $\conjB$ forms a regular Fourier hypergroup. One may see [@ross-78; @mu1] for more details regarding these claims. In particular, to prove that $A(\conjB)$ is an algebra, one may apply [@hart], which proves that the dual structure of $\conjB$ is another hypergroup, denoted by $H_B$ here. Therefore, $A(\conjB)$ is isometrically Banach algebra isomorphic to $L^1(H_B)$ and hence $A(\conjB)$ has a bounded approximate identity, by Proposition \[p:b.a.i-of-A(H)-for-strung-hypergroups\]. Let $\iota^*: C_c(\conjB)*C_c(\conjB) \rightarrow Z_BC_c(G)*Z_BC_c(G)$ be the canonical restriction of $\iota^{-1}$ to compactly supported functions, where $Z_BC_c(G):=C_c(G) \cap Z_BL^1(G)$. Then, for each $u\in C_c(\conjB)*C_c(\conjB) \subseteq A(\conjB)$, $$\begin{aligned} \norm{u}_{A(\conjB)} &=& \inf\{ \norm{\xi}_2\norm{\eta}_2:\ u=\xi*\tilde{\eta},\ {\xi,\eta} \in L^2(\conjB)\} \\ &\geq& \inf\{ \norm{\xi}_2\norm{\eta}_2: \iota^*(u)=\xi*\tilde{\eta}, \ {\xi,\eta} \in L^2(G)\} =\norm{\iota^*(u)}_{A(G)}.\end{aligned}$$ Therefore, $\iota^*$ can be extended to a norm-decreasing linear mapping $\iota^*:A(\conjB) \rightarrow Z_BA(G)$, where $Z_BA(G):=\{f\in A(G): f\circ \beta=f \ \forall \beta\in B\}$. Noting that both algebras $A(\conjB)$ and $Z_BA(G)$ are equipped with pointwise multiplication, one concludes that $\iota^*$ is an algebra homomorphism. Due to the definition of $\iota^*$ on compactly supported functions of $\conjB$ into $Z_BC_c(G)$, $\iota^*$ is an injection with dense range, even though it is not immediate that $\iota^*$ is necessarily a bijection. This is, however, true for every compact group $G$ with $B$ the group of inner automorphisms, [@ma]. It is an interesting question for which groups these two algebras are isomorphic. Let us recall again that a discrete group $G$ is called an FC group if, for each $x\in G$, $C_x:=\{yxy^{-1}:\ y\in G\}$ is finite. Every FC group $G$ is actually . Note that for an FC group $G$, $\conj$ is a discrete commutative hypergroup. Moreover, the weight $h$ defined on $\conj$ by $h(C)=|C|$, $(C\in \conj)$, is a Haar measure on $\conj$, [@ma-the]. \[t:ZL-AM-FC-groups\] Let $G$ be an infinite FC group such that, for every integer $n$, there are only finitely many conjugacy classes $C$ with $|C|=n$. Then $Z\ell^1(G)$ is not amenable. As we saw before, $\conj$ is a discrete commutative regular Fourier hypergroup. Also, $A(\conj)$ has a bounded approximate identity and therefore $\conj$ satisfies $(P_2)$, by Theorem \[t:bai-A(H)<=>P-2\]. Now one applies Theorem \[t:non-amenability-of-hypergroup-algebra\] and the isomorphism $L^1(\conj,h)\cong \Zlg$ to finish the proof. For a group $G$, let $G'$ denote the derived subgroup of $G$. It is immediate that if $G'$ is finite then, for every $C \in \conj$, $|C|\leq |G'|$. The converse is also true, i.e.,
if $\sup_{C\in \conj} |C|<\infty$, then $|G'|<\infty$. Recall that in [@AzSaSp] it is proven that, for a locally compact group $G$ with a finite derived subgroup $G'$, $ZL^1(G)$ is amenable. Also, for a specific class of FC groups, called RDPF groups, the amenability of $\Zlg$ is characterized in [@ma2]. The result is that, for an RDPF group $G$, $\Zlg$ is amenable if and only if $|G'|<\infty$. These two studies suggest that the latter characterization for RDPF groups may be extendible to general FC groups. Acknowledgements {#acknowledgements .unnumbered} ================ This research was supported by a Ph.D. Dean's Scholarship at the University of Saskatchewan and a Postdoctoral Fellowship from the Fields Institute for Research in Mathematical Sciences and the University of Waterloo. This support is gratefully acknowledged. The author would also like to express his deep gratitude to Yemon Choi, Ebrahim Samei, and Nico Spronk for several constructive discussions and suggestions which improved the paper significantly. The author also thanks Nico Spronk for directing him to [@zw-the], Ajit Singh for directing him to [@singh-segal; @singh-Dtikin; @singh-mem], and Jason Crann for directing him to [@izu]. [Mahmood Alaghmandan]{} The Fields Institute for Research in Mathematical Sciences, 222 College St, Toronto, ON M5T 3J1, Canada, and Department of Pure Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada. Email: `[email protected]` [^1]: Note that in general groups the equality holds, but this side of the inclusion suffices.
--- abstract: 'The concept of signature was introduced by Samaniego for systems whose components have i.i.d. lifetimes. This concept proved to be useful in the analysis of theoretical behaviors of systems. In particular, it provides an interesting signature-based representation of the system reliability in terms of reliabilities of $k$-out-of-$n$ systems. In the non-i.i.d. case, we show that, at any time, this representation still holds true for every coherent system if and only if the component states are exchangeable. We also discuss conditions for obtaining an alternative representation of the system reliability in which the signature is replaced by its non-i.i.d. extension. Finally, we discuss conditions for the system reliability to have both representations.' address: - 'Mathematics Research Unit, FSTC, University of Luxembourg, 6, rue Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg' - 'Mathematics Research Unit, FSTC, University of Luxembourg, 6, rue Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg' - 'Mathematics Research Unit, FSTC, University of Luxembourg, 6, rue Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg and Bolyai Institute, University of Szeged, Aradi vértanúk tere 1, H-6720 Szeged, Hungary' author: - 'Jean-Luc Marichal' - Pierre Mathonet - Tamás Waldhauser date: 'February 17, 2011' title: 'On signature-based expressions of system reliability' --- \[theorem\][Lemma]{} \[theorem\][Proposition]{} \[theorem\][Corollary]{} \[theorem\][Fact]{} \[theorem\][Definition]{} \[theorem\][Example]{} Introduction ============ Consider a system made up of $n$ $(n\geqslant 3)$ components and let $\phi\colon\{0,1\}^n\to\{0,1\}$ be its *structure function*, which expresses the state of the system in terms of the states of its components. Denote the set of components by $[n]=\{1,\ldots,n\}$. We assume that the system is *coherent*, which means that $\phi$ is nondecreasing in each variable and has only essential variables, i.e., for every $k\in [n]$, there exists ${\mathbf{x}}=(x_1,\ldots,x_n)\in\{0,1\}^n$ such that $\phi({\mathbf{x}})|_{x_k=0}\neq\phi({\mathbf{x}})|_{x_k=1}$. Let $X_1,\ldots,X_n$ denote the component lifetimes and let $X_{1:n},\ldots,X_{n:n}$ be the order statistics obtained by rearranging the variables $X_1,\ldots,X_n$ in ascending order of magnitude; that is, $X_{1:n}\leqslant\cdots\leqslant X_{n:n}$. Denote also the system lifetime by $T$ and the system reliability at time $t>0$ by $\overline{F}_S(t)=\Pr(T>t)$. Assuming that the component lifetimes are independent and identically distributed (i.i.d.) according to an absolutely continuous joint c.d.f.$F$, one can show (see Samaniego [@Sam85]) that $$\label{eq:sdf76} \overline{F}_S(t) = \sum_{k=1}^n\Pr(T=X_{k:n})\,\overline{F}_{k:n}(t)$$ for every $t>0$, where $\overline{F}_{k:n}(t)=\Pr(X_{k:n}>t)$. Under this i.i.d. assumption, Samaniego [@Sam85] introduced the *signature* of the system as the $n$-tuple $\mathbf{s}=(s_1,\ldots,s_n)$, where $$s_k=\Pr(T=X_{k:n}), \qquad k\in [n],$$ is the probability that the $k$th component failure causes the system to fail. It turned out that the signature is a feature of the system design in the sense that it depends only on the structure function $\phi$ (and not on the c.d.f. $F$). Boland [@Bol01] obtained the explicit formula $$s_k=\phi_{n-k+1}-\phi_{n-k}$$ where $$\label{eq:phikaa} \phi_k=\frac{1}{{n\choose k}}\,\sum_{\textstyle{{\mathbf{x}}\in\{0,1\}^n\atop |{\mathbf{x}}|=k}}\phi({\mathbf{x}})$$ and $|{\mathbf{x}}|=\sum_{i=1}^nx_i$. Thus, under the i.i.d. 
assumption, the system reliability can be calculated by the formula $$\label{eq:as897ds} \overline{F}_S(t) = \sum_{k=1}^n\big(\phi_{n-k+1}-\phi_{n-k}\big)\,\overline{F}_{k:n}(t).$$ Since formula (\[eq:as897ds\]) provides a simple and useful way to compute the system reliability through the concept of signature, it is natural to relax the i.i.d. assumption (as Samaniego [@Sam07 Section 8.3] rightly suggested) and search for necessary and sufficient conditions on the joint c.d.f. $F$ for formulas (\[eq:sdf76\]) and/or (\[eq:as897ds\]) to still hold for every system design. On this issue, Kochar et al. [@KocMukSam99 p. 513] mentioned that (\[eq:sdf76\]) and (\[eq:as897ds\]) still hold when the continuous variables $X_1,\ldots,X_n$ are exchangeable (i.e., when $F$ is invariant under any permutation of indexes); see also [@NavRuiSan05; @Zha10] (and [@NavRyc07 Lemma 1] for a detailed proof). It is also noteworthy that Navarro et al. [@NavSamBalBha08 Thm. 3.6] showed that (\[eq:sdf76\]) still holds when the joint c.d.f. $F$ has no ties (i.e., $\Pr(X_i=X_j)=0$ for every $i\neq j$) and the variables $X_1,\ldots,X_n$ are “weakly exchangeable” (see Remark \[remarkweak\] below). As we will show, all these conditions are not necessary. Let $\Phi_n$ denote the family of nondecreasing functions $\phi\colon\{0,1\}^n\to\{0,1\}$ whose variables are all essential. In this paper, without any assumption on the joint c.d.f. $F$, we show that, for every $t>0$, the representation in (\[eq:as897ds\]) of the system reliability holds for every $\phi\in\Phi_n$ if and only if the variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable, where $$\chi_k(t)=\mathrm{Ind}(X_k>t)$$ denotes the *random state* of the $k$th component at time $t$ (i.e., $\chi_k(t)$ is the indicator variable of the event ($X_k>t$)). This result is stated in Theorem \[thm:aasd78\]. Assuming that the joint c.d.f. $F$ has no ties, we also yield necessary and sufficient conditions on $F$ for formula (\[eq:sdf76\]) to hold for every $\phi\in\Phi_n$ (Theorem \[thm:aasd78zz\]). These conditions can be interpreted in terms of symmetry of certain conditional probabilities. We also show (Proposition \[prop:aasd78zzz\]) that the condition[^1] $$\label{eq:wer876} \Pr(T=X_{k:n})=\phi_{n-k+1}-\phi_{n-k}\, ,\qquad k\in [n]$$ holds for every $\phi\in\Phi_n$ if and only if $$\label{eq:sf86} \Pr\Big(\max_{i\in [n]\setminus A}X_i<\min_{i\in A}X_i\Big) = \frac{1}{{n\choose |A|}}~,\qquad A\subseteq [n].$$ Finally, we show that both (\[eq:sdf76\]) and (\[eq:as897ds\]) hold for every $t>0$ and every $\phi\in\Phi_n$ if and only if (\[eq:sf86\]) holds and the variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable for every $t>0$ (Theorem \[thm:aasd78zzzz\]). Through the usual identification of the elements of $\{0,1\}^n$ with the subsets of $[n]$, a pseudo-Boolean function $f\colon\{0,1\}^n\to{\mathbb{R}}$ can be described equivalently by a set function $v_f\colon 2^{[n]}\to{\mathbb{R}}$. We simply write $v_f(A)=f(\mathbf{1}_A)$, where $\mathbf{1}_A$ denotes the $n$-tuple whose $i$th coordinate ($i\in [n]$) is $1$, if $i\in A$, and $0$, otherwise. To avoid cumbersome notation, we henceforth use the same symbol to denote both a given pseudo-Boolean function and its underlying set function, thus writing $f\colon\{0,1\}^n\to{\mathbb{R}}$ or $f\colon 2^{[n]}\to{\mathbb{R}}$ interchangeably. Recall that the $k$th order statistic function ${\mathbf{x}}\mapsto x_{k:n}$ of $n$ Boolean variables is defined by $x_{k:n}=1$, if $|\mathbf{x}|\geqslant n-k+1$, and $0$, otherwise. 
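For a quick worked illustration of Boland's formula (a toy example, not taken from the cited references), consider the $3$-component system whose structure function is $\phi(x_1,x_2,x_3)=x_1(x_2+x_3-x_2x_3)$, i.e., component $1$ in series with the parallel pair $\{2,3\}$. Then $\phi_0=\phi_1=0$, $\phi_2=\frac{1}{3}\big(\phi(1,1,0)+\phi(1,0,1)+\phi(0,1,1)\big)=\frac{2}{3}$, and $\phi_3=1$, so that $$s_1=\phi_3-\phi_2=\tfrac{1}{3}\,,\qquad s_2=\phi_2-\phi_1=\tfrac{2}{3}\,,\qquad s_3=\phi_1-\phi_0=0.$$ This agrees with the direct computation under the i.i.d. assumption: the system fails at the first component failure precisely when component $1$ fails first (an event of probability $1/3$), and at the second component failure otherwise. In the notation just introduced, the structure function ${\mathbf{x}}\mapsto x_{k:n}$ corresponds to the system that fails exactly at the $k$th component failure.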
As a matter of convenience, we also formally define $x_{0:n}\equiv 0$ and $x_{n+1:n}\equiv 1$. Signature-based decomposition of the system reliability ======================================================= In the present section, without any assumption on the joint c.d.f. $F$, we show that, for every $t>0$, (\[eq:as897ds\]) holds true for every $\phi\in\Phi_n$ if and only if the state variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable. The following result (see Dukhovny [@Duk07 Thm. 2]) gives a useful expression for the system reliability in terms of the underlying structure function and the component states. We provide a shorter proof here. For every $t>0$, we set $\boldsymbol{\chi}(t)=(\chi_1(t),\ldots,\chi_n(t))$. \[thm:thm1\] For every $t>0$, we have $$\label{eq:sf676} \overline{F}_S(t)=\sum_{\mathbf{x}\in\{0,1\}^n}\phi(\mathbf{x})\,\Pr(\boldsymbol{\chi}(t)=\mathbf{x}).$$ We simply have $$\overline{F}_S(t) = \Pr(\phi(\boldsymbol{\chi}(t))=1)=\sum_{\textstyle{{\mathbf{x}}\in\{0,1\}^n\atop \phi({\mathbf{x}})=1}}\Pr(\boldsymbol{\chi}(t)=\mathbf{x}),$$ which immediately leads to (\[eq:sf676\]). Applying (\[eq:sf676\]) to the $k$-out-of-$n$ system $\phi({\mathbf{x}})=x_{k:n}$, we obtain $$\overline{F}_{k:n}(t)=\sum_{|\mathbf{x}|\geqslant n-k+1}\Pr(\boldsymbol{\chi}(t)=\mathbf{x})$$ from which we immediately derive (see [@DukMar08 Prop. 13]) $$\label{eq:sf676xy} \overline{F}_{n-k+1:n}(t)-\overline{F}_{n-k:n}(t)=\sum_{|\mathbf{x}|=k}\Pr(\boldsymbol{\chi}(t)=\mathbf{x}).$$ The following proposition, a key result of this paper, provides necessary and sufficient conditions on $F$ for $\overline{F}_S(t)$ to be a certain weighted sum of the $\overline{F}_{k:n}(t)$, $k\in [n]$. We first consider a lemma. \[lemma:sd876\] Let $\lambda\colon\{0,1\}^n\to{\mathbb{R}}$ be a given function. We have $$\label{eq:d7df} \sum_{\mathbf{x}\in\{0,1\}^n}\lambda({\mathbf{x}})\,\phi(\mathbf{x})=0\qquad \mbox{for every}~\phi\in\Phi_n$$ if and only if $\lambda({\mathbf{x}})=0$ for all ${\mathbf{x}}\neq\mathbf{0}$. Condition (\[eq:d7df\]) defines a system of linear equations with the $2^n$ unknowns $\lambda({\mathbf{x}})$, ${\mathbf{x}}\in \{0,1\}^n$. We observe that there exist $2^n-1$ functions $\phi_A\in\Phi_n$, $A\not=\varnothing$, which are linearly independent when considered as real functions (for details, see Appendix \[app:lemma\]). It follows that the vectors of their values are also linearly independent. Therefore the equations in (\[eq:d7df\]) corresponding to the functions $\phi_A$, $A\not=\varnothing$, are linearly independent and hence the system has a rank at least $2^n-1$. This shows that its solutions are multiples of the immediate solution $\lambda_0$ defined by $\lambda_0({\mathbf{x}})=0$, if ${\mathbf{x}}\not=\mathbf{0}$, and $\lambda_0(\mathbf{0})=1$. Let $w\colon\{0,1\}^n\to{\mathbb{R}}$ be a given function. 
For every $k\in [n]$ and every $\phi\in\Phi_n$, define $$\label{eq:sdf65} \phi_k^w=\sum_{|{\mathbf{x}}|=k}w({\mathbf{x}})\, \phi({\mathbf{x}}).$$ \[lemma:aasd78y\] For every $t>0$, we have $$\overline{F}_S(t)=\sum_{k=1}^n \big(\phi^w_{n-k+1}-\phi^w_{n-k}\big)\,\overline{F}_{k:n}(t)\qquad \mbox{for every}~\phi\in\Phi_n$$ if and only if $$\label{eq:sad75fdf} \Pr(\boldsymbol{\chi}(t)=\mathbf{x}) ~=~ w({\mathbf{x}})\,\sum_{|\mathbf{z}|=|{\mathbf{x}}|}\Pr(\boldsymbol{\chi}(t)=\mathbf{z})\qquad \mbox{for every}~{\mathbf{x}}\neq\mathbf{0}.$$ First observe that we have $$\label{eq:dfrt778} \sum_{k=1}^n \big(\phi^w_{n-k+1}-\phi^w_{n-k}\big)\,\overline{F}_{k:n}(t)=\sum_{k=1}^n\phi^w_k\,\big(\overline{F}_{n-k+1:n}(t)-\overline{F}_{n-k:n}(t)\big).$$ This immediately follows from the elementary algebraic identity $$\sum_{k=1}^na_k\, (b_{n-k+1}-b_{n-k})=\sum_{k=1}^n b_k\, (a_{n-k+1}-a_{n-k})$$ which holds for all real tuples $(a_0,a_1,\ldots,a_n)$ and $(b_0,b_1,\ldots,b_n)$ such that $a_0=b_0=0$. Combining (\[eq:sf676xy\]) with (\[eq:sdf65\]) and (\[eq:dfrt778\]), we then obtain $$\begin{aligned} \sum_{k=1}^n \big(\phi^w_{n-k+1}-\phi^w_{n-k}\big)\,\overline{F}_{k:n}(t) &=& \sum_{k=1}^n\sum_{|\mathbf{x}|=k}w({\mathbf{x}})\,\phi(\mathbf{x})\,\sum_{|\mathbf{z}|=k}\Pr(\boldsymbol{\chi}(t)=\mathbf{z})\\ &=& \sum_{\mathbf{x}\in\{0,1\}^n} w({\mathbf{x}})\,\phi(\mathbf{x})\,\sum_{|\mathbf{z}|=|{\mathbf{x}}|}\Pr(\boldsymbol{\chi}(t)=\mathbf{z}).\end{aligned}$$ The result then follows from Proposition \[thm:thm1\] and Lemma \[lemma:sd876\]. We observe that the existence of a c.d.f. $F$ satisfying (\[eq:sad75fdf\]) with $\Pr(\boldsymbol{\chi}(t)=\mathbf{x})>0$ for some ${\mathbf{x}}\neq\mathbf{0}$ is only possible when $\sum_{|\mathbf{z}|=|\mathbf{x}|}w(\mathbf{z})=1$. In this paper we will actually make use of (\[eq:sad75fdf\]) only when this condition holds (see (\[eq:ds67dd\]) and (\[eq:yx6czz\])). We now apply Proposition \[lemma:aasd78y\] to obtain necessary and sufficient conditions on $F$ for (\[eq:as897ds\]) to hold for every $\phi\in\Phi_n$. \[thm:aasd78\] For every $t>0$, the representation (\[eq:as897ds\]) holds for every $\phi\in\Phi_n$ if and only if the indicator variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable. Using (\[eq:phikaa\]) and Proposition \[lemma:aasd78y\], we see that condition (\[eq:as897ds\]) is equivalent to $$\label{eq:ds67dd} \Pr(\boldsymbol{\chi}(t)=\mathbf{x}) ~=~ \frac{1}{{n\choose |{\mathbf{x}}|}}\,\sum_{|\mathbf{z}|=|{\mathbf{x}}|}\Pr(\boldsymbol{\chi}(t)=\mathbf{z}).$$ Equivalently, we have $\Pr(\boldsymbol{\chi}(t)=\mathbf{x})=\Pr(\boldsymbol{\chi}(t)=\mathbf{x}')$ for every ${\mathbf{x}},{\mathbf{x}}'\in\{0,1\}^n$ such that $|{\mathbf{x}}|=|{\mathbf{x}}'|$. This condition clearly means that $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable. The following well-known proposition (see for instance [@Spi01 Chap. 1] and [@Duk07 Section 2]) yields an interesting interpretation of the exchangeability of the component states $\chi_1(t),\ldots,\chi_n(t)$. For the sake of self-containment, a proof is given here. For every $t>0$, the component states $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable if and only if the probability that a group of components survives beyond $t$ (i.e., the reliability of this group at time $t$) depends only on the number of components in the group. Let $A\subseteq [n]$ be a group of components. The exchangeability of the component states means that, for every $B\subseteq [n]$, the probability $\Pr(\boldsymbol{\chi}(t)=\mathbf{1}_B)$ depends only on $|B|$. 
In this case, the probability that the group $A$ survives beyond $t$, that is $$\overline{F}_A(t)=\sum_{B\supseteq A}\Pr(\boldsymbol{\chi}(t)=\mathbf{1}_B),$$ depends only on $|A|$. Conversely, if $\overline{F}_B(t)$ depends only on $|B|$ for every $B\subseteq [n]$, then $$\Pr(\boldsymbol{\chi}(t)=\mathbf{1}_A)=\sum_{B\supseteq A} (-1)^{|B|-|A|}\,\overline{F}_B(t)$$ depends only on $|A|$. \[rem:sd8f7\] Theorem \[thm:aasd78\] shows that the exchangeability of the component lifetimes is sufficient but not necessary for (\[eq:as897ds\]) to hold for every $\phi\in\Phi_n$ and every $t>0$. Indeed, the exchangeability of the component lifetimes entails the exchangeability of the component states. This follows for instance from the identity (see [@DukMar08 Eq. (6)]) $$\Pr(\boldsymbol{\chi}(t)=\mathbf{1}_A)=\sum_{B\subseteq A} (-1)^{|A|-|B|}\, F(t\mathbf{1}_{[n]\setminus B}+\infty\mathbf{1}_B).$$ However, the converse statement is not true in general. As an example, consider the random vector $(X_1,X_2)$ which takes each of the values $(2,1)$, $(4,2)$, $(1,3)$ and $(3,4)$ with probability 1/4. The state variables $\chi_1(t)$ and $\chi_2(t)$ are exchangeable at any time $t$. Indeed, one can easily see that, for $|{\mathbf{x}}|=1$, $$\Pr(\boldsymbol{\chi}(t)={\mathbf{x}})= \begin{cases} 1/4, & \mbox{if $t\in [1,4)$},\\ 0, & \mbox{otherwise}. \end{cases}$$ However, the variables $X_1$ and $X_2$ are not exchangeable since, for instance, $$0=F(1.5, 2.5)\neq F(2.5,1.5)=1/4.$$ Alternative decomposition of the system reliability =================================================== Assuming only that $F$ has no ties (i.e., $\Pr(X_i=X_j)=0$ for every $i\neq j$), we now provide necessary and sufficient conditions on $F$ for formula (\[eq:sdf76\]) to hold for every $\phi\in\Phi_n$, thus answering a question raised implicitly in [@NavSamBalBha08 p. 320]. Let $q\colon 2^{[n]}\to [0,1]$ be the *relative quality function* (associated with $F$), which is defined as $$q(A)=\Pr\Big(\max_{i\in [n]\setminus A}X_i<\min_{i\in A}X_i\Big)$$ with the convention that $q(\varnothing)=q([n])=1$ (see [@MarMatb Section 2]). By definition, $q(A)$ is the probability that the $|A|$ components having the longest lifetimes are exactly those in $A$. It then immediately follows that the function $q$ satisfies the following important property: $$\label{eq:sdf5sds} \sum_{|{\mathbf{x}}|=k}q({\mathbf{x}})=1,\qquad k\in [n].$$ Under the assumption that $F$ has no ties, the authors [@MarMatb Thm. 3] proved that $$\label{eq:asd7} \Pr(T=X_{k:n})=\phi^q_{n-k+1}-\phi^q_{n-k}\, ,$$ where $\phi^q_k$ is defined in (\[eq:sdf65\]). Combining (\[eq:asd7\]) with Proposition \[lemma:aasd78y\], we immediately derive the following result. \[thm:aasd78zz\] Assume that $F$ has no ties. For every $t>0$, the representation (\[eq:sdf76\]) holds for every $\phi\in\Phi_n$ if and only if $$\label{eq:yx6czz} \Pr(\boldsymbol{\chi}(t)=\mathbf{x}) ~=~ q({\mathbf{x}})\,\sum_{|\mathbf{z}|=|{\mathbf{x}}|}\Pr(\boldsymbol{\chi}(t)=\mathbf{z}).$$ Condition (\[eq:yx6czz\]) has the following interpretation. 
We first observe that, for every $A\subseteq [n]$, $$\boldsymbol{\chi}(t)=\mathbf{1}_A \quad\Leftrightarrow\quad \max_{i\in [n]\setminus A} X_i\leqslant t<\min_{i\in A}X_i\, .$$ Assuming that $q$ is a strictly positive function, condition (\[eq:yx6czz\]) then means that the conditional probability $$\frac{\Pr(\boldsymbol{\chi}(t)=\mathbf{1}_A)}{q(A)}=\Pr\Big(\max_{i\in [n]\setminus A} X_i\leqslant t<\min_{i\in A}X_i\,\Big|\, \max_{i\in [n]\setminus A} X_i<\min_{i\in A}X_i\Big)$$ depends only on $|A|$. \[remarkweak\] The concept of weak exchangeability was introduced in Navarro et al. [@NavSamBalBha08 p. 320] as follows. A random vector $(X_1,\ldots,X_n)$ is said to be *weakly exchangeable* if $$\Pr(X_{k:n}\leqslant t)=\Pr(X_{k:n}\leqslant t \mid X_{\sigma(1)}<\cdots<X_{\sigma(n)}),$$ for every $t>0$, every $k\in [n]$, and every permutation $\sigma$ on $[n]$. Theorem 3.6 in [@NavSamBalBha08] states that if $F$ has no ties and $(X_1,\ldots,X_n)$ is weakly exchangeable, then (\[eq:sdf76\]) holds for every $\phi\in\Phi_n$. By Theorem \[thm:aasd78zz\], we see that weak exchangeability implies condition (\[eq:yx6czz\]) whenever $F$ has no ties. However, the converse is not true in general. Indeed, in the example of Remark \[rem:sd8f7\], we can easily see that condition (\[eq:yx6czz\]) holds, while the lifetimes $X_1$ and $X_2$ are not weakly exchangeable. We now investigate condition (\[eq:wer876\]) under the sole assumption that $F$ has no ties. Navarro and Rychlik [@NavRyc07 Lemma 1] (see also [@MarMatb Rem. 4]) proved that this condition holds for every $\phi\in\Phi_n$ whenever the component lifetimes $X_1,\ldots,X_n$ are exchangeable. The following proposition gives a necessary and sufficient condition on $F$ (in terms of the function $q$) for (\[eq:wer876\]) to hold for every $\phi\in\Phi_n$. The function $q$ is said to be *symmetric* if $q({\mathbf{x}})=q({\mathbf{x}}')$ whenever $|{\mathbf{x}}|=|{\mathbf{x}}'|$. By (\[eq:sdf5sds\]) it follows that $q$ is symmetric if and only if $q({\mathbf{x}})=1/{n\choose|{\mathbf{x}}|}$ for every ${\mathbf{x}}\in\{0,1\}^n$. \[prop:aasd78zzz\] Assume that $F$ has no ties. Condition (\[eq:wer876\]) holds for every $\phi\in\Phi_n$ if and only if $q$ is symmetric. By (\[eq:asd7\]) we have $$\Pr(T=X_{k:n})=\sum_{{\mathbf{x}}\in\{0,1\}^n}\big(\delta_{|{\mathbf{x}}|,n-k+1}-\delta_{|{\mathbf{x}}|,n-k}\big)\, q({\mathbf{x}})\,\phi({\mathbf{x}}),\qquad k\in [n],$$ where $\delta$ stands for the Kronecker delta. Similarly, by (\[eq:phikaa\]) we have $$\phi_{n-k+1}-\phi_{n-k}=\sum_{{\mathbf{x}}\in\{0,1\}^n}\big(\delta_{|{\mathbf{x}}|,n-k+1}-\delta_{|{\mathbf{x}}|,n-k}\big)\, \frac{1}{{n\choose |{\mathbf{x}}|}}\,\phi({\mathbf{x}}),\qquad k\in [n].$$ The result then follows from Lemma \[lemma:sd876\]. We end this paper by studying the special case where both conditions (\[eq:sdf76\]) and (\[eq:as897ds\]) hold. We have the following result. \[thm:aasd78zzzz\] Assume that $F$ has no ties. The following assertions are equivalent. 1. Conditions (\[eq:sdf76\]) and (\[eq:as897ds\]) hold for every $\phi\in\Phi_n$ and every $t>0$. 2. Condition (\[eq:wer876\]) holds for every $\phi\in\Phi_n$ and the variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable for every $t>0$. 3. The function $q$ is symmetric and the variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable for every $t>0$. $(ii)\Leftrightarrow (iii)$ Follows from Proposition \[prop:aasd78zzz\]. $(ii)\Rightarrow (i)$ Follows from Theorem \[thm:aasd78\]. 
$(i)\Rightarrow (iii)$ By Theorem \[thm:aasd78\], we only need to prove that $q$ is symmetric. Combining (\[eq:ds67dd\]) with (\[eq:yx6czz\]), we obtain $$\bigg(q({\mathbf{x}})-\frac{1}{{n\choose |{\mathbf{x}}|}}\bigg)\,\sum_{|\mathbf{z}|=|{\mathbf{x}}|}\Pr(\boldsymbol{\chi}(t)=\mathbf{z})=0.$$ To conclude, we only need to prove that, for every $k\in [n-1]$, there exists $t>0$ such that $$\sum_{|\mathbf{z}|=k}\Pr(\boldsymbol{\chi}(t)=\mathbf{z})>0.$$ Suppose that this is not true. By (\[eq:sf676xy\]), there exists $k\in [n-1]$ such that $$0=\overline{F}_{n-k+1:n}(t)-\overline{F}_{n-k:n}(t)=\Pr(X_{n-k:n}\leqslant t< X_{n-k+1:n})$$ for every $t>0$. Then, denoting the set of positive rational numbers by ${\mathbb{Q}}^+$, the sequence of events $$E_m=(X_{n-k:n}\leqslant t_m< X_{n-k+1:n}),\qquad m\in{\mathbb{N}},$$ where $\{t_m : m\in{\mathbb{N}}\}={\mathbb{Q}}^+$, satisfies $\Pr(E_m)=0$. Since ${\mathbb{Q}}^+$ is dense in $(0,\infty)$, we obtain $$\Pr(X_{n-k:n}< X_{n-k+1:n})=\Pr\bigg(\bigcup_{m\in{\mathbb{N}}}\, E_m\bigg)=0,$$ which contradicts the assumption that $F$ has no ties. The following two examples show that neither of the conditions (\[eq:sdf76\]) and (\[eq:as897ds\]) implies the other. Let $(X_1,X_2,X_3)$ be the random vector which takes the values $(1, 2, 3)$, $(1, 3, 2)$, $(2, 1, 3)$, $(2, 3, 1)$, $(3, 2, 1)$, $(3, 1, 2)$, with probabilities $p_1,\ldots,p_6$, respectively. It was shown in [@NavSamBalBha08 Example 3.7] that (\[eq:sdf76\]) holds for every $\phi\in\Phi_n$ and every $t>0$. However, we can easily see that $\chi_1(t)$, $\chi_2(t)$, $\chi_3(t)$ are exchangeable for every $t>0$ if and only if $(p_1,\ldots,p_6)$ is a convex combination of $(0,1/3,1/3,0,1/3,0)$ and $(1/3,0,0,1/3,0,1/3)$. Hence, when the latter condition is not satisfied, (\[eq:as897ds\]) does not hold for every $\phi\in\Phi_n$ by Theorem \[thm:aasd78\]. Let $(X_1,X_2,X_3)$ be the random vector which takes the values $(1, 2, 4)$, $(2, 4, 5)$, $(3, 1, 2)$, $(4, 2, 3)$, $(5, 3, 4)$, $(2, 3, 1)$, $(3, 4, 2)$, $(4, 5, 3)$ with probabilities $p_1=\cdots =p_8=1/8$. We have $$q(\{1\})=q(\{2\})=q(\{1,2\})=q(\{1,3\})=3/8\quad\mbox{and}\quad q(\{3\})=q(\{2,3\})=2/8,$$ which shows that $q$ is not symmetric. However, we can easily see that $\chi_1(t)$, $\chi_2(t)$, $\chi_3(t)$ are exchangeable for every $t>0$. Indeed, we have $$\Pr(\boldsymbol{\chi}(t)={\mathbf{x}})= \begin{cases} 1/8, & \mbox{if $t\in [\alpha,\beta)$},\\ 0, & \mbox{otherwise}, \end{cases}$$ where $(\alpha,\beta)=(2,5)$ whenever $|{\mathbf{x}}|=1$ and $(\alpha,\beta)=(1,4)$ whenever $|{\mathbf{x}}|=2$. Thus (\[eq:as897ds\]) holds for every $\phi\in\Phi_n$ and every $t>0$ by Theorem \[thm:aasd78\]. However, (\[eq:sdf76\]) does not hold for every $\phi\in\Phi_n$ and every $t>0$ by Theorem \[thm:aasd78zzzz\]. Let $\Phi'_n$ be the class of structure functions of $n$-component *semicoherent* systems, that is, the class of nondecreasing functions $\phi\colon\{0,1\}^n\to\{0,1\}$ satisfying the boundary conditions $\phi(\mathbf{0})=0$ and $\phi(\mathbf{1})=1$. It is clear that Proposition \[thm:thm1\] and Lemma \[lemma:sd876\] still hold, even for $n=2$, if we extend the set $\Phi_n$ to $\Phi'_n$ (in the proof of Lemma \[lemma:sd876\] it is then sufficient to consider the $2^n-1$ functions $\phi_A({\mathbf{x}})=\prod_{i\in A}x_i$, $A\neq\varnothing$). 
Let $\Phi'_n$ be the class of structure functions of $n$-component *semicoherent* systems, that is, the class of nondecreasing functions $\phi\colon\{0,1\}^n\to\{0,1\}$ satisfying the boundary conditions $\phi(\mathbf{0})=0$ and $\phi(\mathbf{1})=1$. It is clear that Proposition \[thm:thm1\] and Lemma \[lemma:sd876\] still hold, even for $n=2$, if we extend the set $\Phi_n$ to $\Phi'_n$ (in the proof of Lemma \[lemma:sd876\] it is then sufficient to consider the $2^n-1$ functions $\phi_A({\mathbf{x}})=\prod_{i\in A}x_i$, $A\neq\varnothing$). We then observe that Propositions \[lemma:aasd78y\] and \[prop:aasd78zzz\] and Theorems \[thm:aasd78\], \[thm:aasd78zz\], and \[thm:aasd78zzzz\] (which use Proposition \[thm:thm1\] and Lemma \[lemma:sd876\] to provide conditions on $F$ for certain identities to hold for every $\phi\in\Phi_n$) are still valid for $n\geqslant 2$ if we replace $\Phi_n$ with $\Phi'_n$ (that is, if we consider semicoherent systems instead of coherent systems only). This observation actually strengthens these results. For instance, from Theorem \[thm:aasd78\] we can state that, for every fixed $t>0$, if (\[eq:as897ds\]) holds for every $\phi\in\Phi_n$, then the variables $\chi_1(t),\ldots,\chi_n(t)$ are exchangeable; conversely, for every $n\geqslant 2$ and every $t>0$, the latter condition implies that (\[eq:as897ds\]) holds for every $\phi\in\Phi'_n$. We also observe that the “semicoherent” version of Theorem \[thm:aasd78\] (i.e., where $\Phi_n$ is replaced with $\Phi'_n$) was proved by Dukhovny [@Duk07 Thm. 4].

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors wish to thank M. Couceiro, G. Peccati, and F. Spizzichino for fruitful discussions. Jean-Luc Marichal and Pierre Mathonet are supported by the internal research project F1R-MTH-PUL-09MRDO of the University of Luxembourg. Tamás Waldhauser is supported by the National Research Fund of Luxembourg, the Marie Curie Actions of the European Commission (FP7-COFUND), and the Hungarian National Foundation for Scientific Research under grant no. K77409.

Appendix {#app:lemma}
=====================

In this appendix we construct $2^n-1$ functions in $\Phi_n$ which are linearly independent when considered as real functions. Here the assumption $n\geqslant 3$ is crucial.

Assume first that $n\neq 4$ and let $\pi$ be the permutation on $[n]$ defined by the following cycles
$$\pi =\begin{cases} (1,2,\ldots,n), & \mbox{if $n$ is odd},\\ (1,2,3)\circ(4,5,\ldots,n), & \mbox{if $n$ is even}. \end{cases}$$
With every $A\varsubsetneq [n]$, $A\neq\varnothing$, we associate $A^*\subseteq [n]$ in the following way:

-   if $|A|\leqslant n-2$, then we choose any set $A^*$ such that $|A^*|=n-1$ and $A\cup A^*=[n]$;

-   if $A=[n]\setminus\{k\}$ for some $k\in [n]$, then we take $A^*=[n]\setminus\{\pi(k)\}$.

We now show that the $2^n-1$ functions $\phi_A\in\Phi_n$, $A\subseteq [n]$, $A\neq\varnothing$, defined by
$$\phi_A({\mathbf{x}})= \begin{cases} \big(\prod_{i\in A}x_i\big)\amalg\big(\prod_{i\in A^*}x_i\big), & \mbox{if $A\neq [n]$},\\ \prod_{i\in [n]}x_i\, , & \mbox{if $A=[n]$}, \end{cases}$$
where $\amalg$ denotes the coproduct (i.e., $x\amalg y=x+y-xy$), are linearly independent when considered as real functions. Suppose there exist real numbers $c_A$, $A\subseteq [n]$, $A\neq\varnothing$, such that
$$\sum_{A\neq\varnothing} c_A\,\phi_A=0.$$
Expanding the left-hand side of this equation as a linear combination of the functions $\prod_{i\in B}x_i$, $B\subseteq [n]$, $B\neq\varnothing$, we first see that, if $|A|\leqslant n-2$, the coefficient of $\prod_{i\in A}x_i$ is $c_A$ and hence $c_A=0$ whenever $0<|A|\leqslant n-2$. Next, considering the coefficient of $\prod_{i\in A}x_i$ for $A=[n]\setminus\{k\}$, $k\in [n]$, we obtain
$$c_{[n]\setminus\{k\}}+c_{[n]\setminus\{\pi^{-1}(k)\}}=0.$$
Since $\pi$ is made up of odd-length cycles only, it follows that $c_A=0$ whenever $|A|=n-1$. Finally, since $c_A=0$ for every nonempty $A\neq [n]$, the coefficient of $\prod_{i\in [n]}x_i$ reduces to $c_{[n]}$, and hence $c_{[n]}=0$ as well.

For $n=4$ we consider the function $\pi\colon [4]\to [4]$ defined by $\pi(1)=\pi(4)=2$, $\pi(2)=3$, and $\pi(3)=4$, and choose the functions $\phi_A$ as above. We then easily check that these functions are linearly independent.
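The linear independence can also be verified numerically for small $n$. The following Python sketch (an illustration only) implements the construction above, taking $A^*=[n]\setminus\{\min A\}$ whenever $|A|\leqslant n-2$ (one admissible choice among many), tabulates the functions $\phi_A$ on $\{0,1\}^n$, and checks that the resulting $2^n-1$ value vectors have full rank.

```python
from fractions import Fraction
from itertools import combinations, product

def build_phis(n):
    """Value tables on {0,1}^n of the 2^n - 1 functions phi_A defined above (n >= 3)."""
    N = list(range(1, n + 1))

    # The map pi: the permutation above for n != 4, and its replacement for n = 4.
    if n == 4:
        pi = {1: 2, 4: 2, 2: 3, 3: 4}
    elif n % 2 == 1:
        pi = {i: i % n + 1 for i in N}            # the cycle (1, 2, ..., n)
    else:
        pi = {1: 2, 2: 3, 3: 1}                   # the cycle (1, 2, 3), composed with
        pi.update({i: 4 if i == n else i + 1 for i in range(4, n + 1)})  # (4, ..., n)

    def prod(x, A):                               # product of 0/1 entries over A
        return min(x[i - 1] for i in A)

    def phi_A(A):
        A = set(A)
        if A == set(N):
            return lambda x: prod(x, N)
        if len(A) <= n - 2:
            A_star = set(N) - {min(A)}            # "any" admissible choice of A*
        else:                                     # A = [n] \ {k}
            k = (set(N) - A).pop()
            A_star = set(N) - {pi[k]}
        return lambda x, A=A, B=A_star: prod(x, A) + prod(x, B) - prod(x, A) * prod(x, B)

    tables = []
    for k in range(1, n + 1):
        for A in combinations(N, k):
            f = phi_A(A)
            tables.append([f(x) for x in product((0, 1), repeat=n)])
    return tables

def rank(rows):
    """Exact rank of a 0/1 matrix via Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

for n in (3, 4, 5, 6):
    vecs = build_phis(n)
    print(n, len(vecs), rank(vecs))    # the rank equals 2**n - 1 in each case
```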
P. J. Boland. Signatures of indirect majority systems. , 38:597–603, 2001.

A. Dukhovny. Lattice polynomials of random variables. , 77(10):989–994, 2007.

A. Dukhovny and J.-L. Marichal. System reliability and weighted lattice polynomials. , 22(3):373–388, 2008.

S. Kochar, H. Mukerjee, and F. J. Samaniego. The “signature” of a coherent system and its application to comparisons among systems. , 46(5):507–523, 1999.

J.-L. Marichal and P. Mathonet. . , 102(5):931–936, 2011.

J. Navarro, J. M. Ruiz, and C. J. Sandoval. A note on comparisons among coherent systems with dependent components using signatures. , 72:179–185, 2005.

J. Navarro and T. Rychlik. Reliability and expectation bounds for coherent systems with exchangeable components. , 98(1):102–113, 2007.

J. Navarro, F. J. Samaniego, N. Balakrishnan, and D. Bhattacharya. On the application and extension of system signatures in engineering reliability. , 55:313–327, 2008.

J. Navarro, F. Spizzichino, and N. Balakrishnan. Applications of average and projected systems to the study of coherent systems. , 101(6):1471–1482, 2010.

F. Spizzichino. , 2001.

F. J. Samaniego. On closure of the IFR class under formation of coherent systems. , 34:69–72, 1985.

F. J. Samaniego. , 2007.

Z. Zhang. Ordering conditional general coherent systems with exchangeable components. , 140:454–460, 2010.

[^1]: Note that, according to the terminology used in [@NavSpiBal10], the left-hand side of (\[eq:wer876\]) is the $k$th coordinate of the *probability signature*, while the right-hand side is the $k$th coordinate of the *system signature*.